
Goals Troubleshooting/QA In VWO


A/B testing is the practice of showing two variants of the same web page to different segments of visitors at the same time and comparing which variant drives more conversions. In an A/B test, the goals you set up are what decide the winning variation. So if we do proper QA and troubleshooting to check that each goal is working, it will serve our A/B testing purpose well.

We work hard to make A/B tests work properly, but sometimes technology doesn’t work the way you expect it to. For those less happy moments, VWO provides several ways to troubleshoot your experiment or campaign.

 

Tools for QA:

  • Results page: lets you view the result for each goal, and the good news is that it updates immediately.
  • Network console: helps you verify whether events in a live experiment are firing correctly.
  • Browser cookies: also help you verify whether events in a live experiment are firing correctly; they store information about all types of goals.

 

Among all of them, I would say the browser cookie is your best friend. It contains all the information that developers need for troubleshooting experiments, audiences and goal QA.

 

Browser cookie:

VWO logs the events that occur as you interact with a page in your browser’s cookies. When you trigger an event, VWO fires a tracking call and stores that information in a cookie.

To access the Cookies tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Application/Storage tab.
  3. Select the Cookies tab.
  4. Select the Domain name of your site.
  5. Filter with “_vis_opt_exp_”.
  6. To narrow it down to a single campaign, filter with “_vis_opt_exp_{CAMPAIGNID}_goal_”.
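You can also read these cookies programmatically from the console. The helper below is a minimal sketch that parses a cookie string (in a browser you would pass document.cookie) and returns the VWO goal cookies; the naming pattern is taken from the filter above, and the sample values are made up for illustration.

```javascript
// Parse a cookie-header-style string and pull out VWO goal cookies.
// Pattern assumed from the filter above: _vis_opt_exp_{CAMPAIGNID}_goal_{GOALID}
function getVwoGoalCookies(cookieString) {
  const goals = [];
  for (const pair of cookieString.split(';')) {
    const [name, value] = pair.trim().split('=');
    const match = /^_vis_opt_exp_(\d+)_goal_(\d+)$/.exec(name);
    if (match) {
      goals.push({ campaignId: match[1], goalId: match[2], value });
    }
  }
  return goals;
}

// In the browser console you would call: getVwoGoalCookies(document.cookie)
// Illustrative sample (IDs and values are hypothetical):
const sample = '_vis_opt_exp_12_goal_1=1; _vis_opt_s=2; _vis_opt_exp_12_goal_3=5';
console.log(getVwoGoalCookies(sample));
```

This gives you one object per fired goal, which is quicker to scan than the raw cookie list when a campaign has many goals.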

You can see the list of all events (all types of goals, like click, custom, transaction etc.) that fired. VWO assigns a specific number to each goal. I have highlighted the events for a few goals in the screenshot below.

VWO stores almost all the information a developer needs for troubleshooting in browser cookies: experiments, audiences/segments, goals, users, referrers, sessions etc. You can find the details about VWO cookies here.

 

Network console:

The network panel is a log in your browser that records events that occur as you interact with a page. When you trigger an event in VWO it fires a tracking call, which is picked up in the network traffic.

To access the Network tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Network tab.
  3. Filter with “ping_tpc”.
  4. Trigger the event you’d like to inspect, then click it in the list to see the details.
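If you prefer to check this programmatically, the browser’s Resource Timing API exposes the same network entries. The sketch below factors the filtering into a plain function so it can run anywhere; the “ping_tpc” substring is the same filter used in the Network tab step above, and the sample URLs are hypothetical.

```javascript
// Return only the URLs that look like VWO tracking calls.
function filterVwoTrackingCalls(urls) {
  return urls.filter((url) => url.includes('ping_tpc'));
}

// In a browser console you could feed it the page's resource entries:
// filterVwoTrackingCalls(performance.getEntriesByType('resource').map(e => e.name));

// Illustrative sample (URLs are hypothetical):
const urls = [
  'https://dev.visualwebsiteoptimizer.com/ping_tpc.php?experiment_id=12&goal_id=1',
  'https://example.com/styles.css',
];
console.log(filterVwoTrackingCalls(urls));
```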

You can see the list of all events that fired. I have highlighted the event with a specific experiment and goal ID in the screenshot below.

Note: If you have already been bucketed into an experiment and have fired a few goals, you might not see any network calls. So always use a fresh incognito window to troubleshoot goals/experiments.

 

As VWO updates campaign results immediately, checking the results page is always another good option. But make sure you are the only visitor seeing the experiment at that time.


Goals Troubleshooting/QA in AB Tasty


A/B testing is a marketing technique that involves comparing two versions of a web page or application to see which performs better. Test development within AB Tasty has a few parallels with conventional front-end development, but the most important thing is the goals that decide the winning test. So if we do proper QA and troubleshooting to check that each goal is working, it will serve our A/B testing purpose well.

 

We work hard to make A/B tests work properly, but sometimes technology doesn’t work the way you expect it to. For those less happy moments, AB Tasty provides several ways to troubleshoot your experiment or campaign.

Tools for QA:

  • Preview link: lets you view a variation and move from one variation to another; you can also track click goals by enabling “Display Click tracking info”.
  • Network console: helps you verify whether events in a live experiment are firing correctly.
  • Local storage: also helps you verify whether events in a live experiment are firing correctly; it stores information about all click and custom goals.

 

Among all of them, I would say the network tab is your best friend. It contains all the information that developers need for troubleshooting experiments, audiences, goal QA and code execution on page load.

 

Network console:

The network panel is a log in your browser that records events that occur as you interact with a page. When you trigger an event in AB Tasty it fires a tracking call, which is picked up in the network traffic.

To access the Network tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Network tab.
  3. Filter with “datacollectAT” or “ariane.abtasty”.
  4. Trigger the event you’d like to inspect, then click it in the list to see the details.

You can see the list of all events (click/custom/transaction) that fired. I have highlighted the event names for click/custom goals in the screenshot below.

Custom goals work with the same API call as click goals (so they are also tracked as events). That’s why we add the text ‘Custom’ before all custom goal names, to differentiate between click and custom goals.

You can see the list of custom events that fired in the screenshot below.

Local storage:

AB Tasty logs the events that occur as you interact with a page in your browser’s local storage. When you trigger an event in AB Tasty it fires a tracking call and stores that information in local storage.

To access the Local storage tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Application/Storage tab.
  3. Select the Local storage tab.
  4. Select the Domain name of your site.
  5. Filter with “ABTastyData”.
  6. Click the ABTastyData entry to see the details.
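You can also read this entry from the console. The sketch below is deliberately generic because the exact internal shape of ABTastyData is not documented here: it assumes the entry is JSON and simply reports its top-level structure rather than guessing at specific fields.

```javascript
// Describe a localStorage entry without assuming its internal schema.
// In a browser console you would call: describeEntry(localStorage.getItem('ABTastyData'))
// NOTE: the structure of ABTastyData is an assumption here; we only parse JSON
// and report top-level keys (or array length) rather than specific fields.
function describeEntry(raw) {
  if (raw === null) return 'entry not found';
  try {
    const data = JSON.parse(raw);
    return Array.isArray(data) ? `array of ${data.length} items` : Object.keys(data).join(', ');
  } catch {
    return 'entry is not JSON';
  }
}

// Illustrative sample (keys are hypothetical):
console.log(describeEntry('{"clickGoals":[],"customGoals":[]}'));
```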

You can see the list of all events (click/custom/transaction) that fired. I have highlighted the event names for click/custom goals in the screenshot below.

Note: For pageview goals, we have to rely on the AB Tasty campaign results page, but the bad news is that it does not update immediately; you need to wait 3–4 hours to see the results reflected.

We cannot check pageviews for AB Tasty via the network console or local storage because they work differently: AB Tasty tracks the page URL for each page and records it under each campaign (this has other benefits, for example we can filter the results by any URL without adding it as a pageview goal). AB Tasty then processes all the goals along with the pageview goals periodically and updates that campaign’s results.


Best practices to implement the snippet of AB testing tools


In order to make any of these tools (AB Tasty, Optimizely, VWO, Convert etc.) work with your site, you need to insert a snippet (it may have a different name in different tools, such as tag or SmartCode).

Every tool works hard to ensure that the snippet delivers the best possible experience for visitors to your site, but a few best practices can help ensure optimal site performance. As we are concerned about performance issues and page flickering, we have created this best-practice guide for installing the snippet.

The guidance below can improve your testing performance:

 

Snippet placement:

Place the code in the <head> section of your pages so changes are displayed more quickly. Otherwise, a flickering effect may occur: your visitors may see the original page for a fraction of a second before they see the modified page. By calling the snippet as high in the source code of your page as possible, the script can apply the changes before the content is displayed.

  • Place the snippet as the first script tag in the head of the page, but after all charset declarations, meta tags, and CSS inclusions.

Note: If jQuery is already included natively on your site, place the snippet directly after the jQuery include.
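Putting the placement rules together, a head section following this guidance might look like the sketch below (the snippet URL is a hypothetical placeholder, not a real tool endpoint):

```html
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="/styles.css">
  <!-- Testing-tool snippet: after charset, meta tags and CSS, before other scripts -->
  <script src="https://example-tool.com/snippet/ACCOUNT_ID.js"></script>
  <!-- All other scripts come later -->
  <script src="/app.js" defer></script>
</head>
```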

 

Snippet load:

You should not install the snippet through tag managers such as Google Tag Manager. By default, tag managers load the snippet code asynchronously, which may cause page flicker on the test pages. Using tag managers may also delay loading of the snippet code, which can cause time-out issues and prevent visitors from becoming part of the test.

  • Include the snippet directly in the HTML <head> tag. Don’t deliver the snippet via any tag manager or inject it via client-side scripting.

 

Snippet type:

The snippet generally comes in two versions: synchronous and asynchronous. Installing the snippet synchronously helps prevent page flickering. Asynchronous loading eliminates any delay in page load times but greatly increases the chances of flashing. You can learn more about synchronous and asynchronous snippet loading, including the strengths and drawbacks of both load types.

Most tools recommend using the synchronous snippet. If the snippet is placed in your site’s <head> tag, you can be sure that your modifications will be applied immediately, before the site loads. This avoids the flickering effect and offers the best user experience.

  • Use the synchronous snippet

Note: A few tools, such as VWO, recommend the asynchronous snippet instead. Before choosing a synchronous or asynchronous snippet, please review the advantages and disadvantages in that specific tool’s documentation.
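For reference, the two load types differ by a single attribute on the script tag; a minimal sketch with a placeholder URL:

```html
<!-- Synchronous: blocks HTML parsing until the snippet runs, preventing flicker -->
<script src="https://example-tool.com/snippet/ACCOUNT_ID.js"></script>

<!-- Asynchronous: does not block parsing, faster page load but may flicker -->
<script async src="https://example-tool.com/snippet/ACCOUNT_ID.js"></script>
```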

 

Use preconnect and preload:

Add preconnect and preload tags at the top of the <head> for faster synchronous loading. We recommend using preconnect to open a connection to the tool’s server and event endpoint ahead of time.

  • Use preconnect and preload tags

In the example below, replace “http://dev.visualwebsiteoptimizer.com/lib/2965490.js” with your snippet URL and “//dev.visualwebsiteoptimizer.com” with your tool’s server.
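A sketch of what that example might look like, using the VWO URLs mentioned above:

```html
<head>
  <link rel="preconnect" href="//dev.visualwebsiteoptimizer.com">
  <link rel="preload" href="http://dev.visualwebsiteoptimizer.com/lib/2965490.js" as="script">
  <script src="http://dev.visualwebsiteoptimizer.com/lib/2965490.js"></script>
</head>
```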

 

You can find the server address to preconnect to by asking the customer support of the specific tool. Below are a few server addresses for specific tools that might help you.

Optimizely: //logx.optimizely.com

VWO: //dev.visualwebsiteoptimizer.com

AB Tasty: //ariane.abtasty.com/

Convert: //logs.convertexperiments.com

 

Minimize the number of pages and events:

In a few tools, all pages and events are included in the basic snippet, which increases its size. To keep the overall snippet size small, avoid creating pages where you don’t expect to run experiments, and archive any unused pages, events and experiments.

  • Minimize the number of pages, events and experiments.

 

Use analytics:

Use an analytics tool to identify traffic that represents your visitors so you can optimize your site for the majority of people who visit. For example, if you find that most of your traffic is from mobile devices, you can target your experiments for mobile users.

  • Use analytics to target your testing

 

Best practice documentation:

Every tool has its own documentation for implementing the snippet, where they mention best-practice guidelines for improving site performance and the strengths and drawbacks of the various implementation types. Don’t forget to have a look at it, because they might have a few more recommendations. Read the documentation carefully and implement the snippet in a way that fulfils your requirements.

  • Read the tool-specific documentation.

Summary:

  • Place the snippet as the first script tag in the head of the page, but after all charset declarations, meta tags, and CSS inclusions.
  • Include the snippet directly in the HTML <head> tag. Don’t deliver the snippet via any tag manager or inject it via client-side scripting.
  • Use the synchronous snippet.
  • Use preconnect and preload tags.
  • Minimize the number of pages, events and experiments.
  • Use analytics to target your testing.
  • Read the tool-specific documentation.


Troubleshooting and Goals QA in Optimizely: Part 1


A/B test development within Optimizely is delightful and seamless. Front-end test development has a few similarities with conventional front-end development work. However, the most important thing is the goals or metrics that decide the result of the test. We need to do proper QA/troubleshooting to check that each goal is working as expected; otherwise, the whole development of the test would be meaningless.

We work hard to make a test work properly, but sometimes technology doesn’t work the way you expect it to. In this article, I have provided a list of five options that Optimizely provides to troubleshoot your experiment or campaign.

Tools for QA:

  • Preview tool: helps you check experiment and campaign functionality, visual changes for different audiences, and details of fired events.
  • JavaScript API: helps you verify which live experiments and campaigns are running on a page and which variation you’re bucketed into.
  • Network console: helps you verify whether events in a live experiment or campaign are firing correctly.
  • Optimizely’s cookies and localStorage: help you uniquely identify visitors, track their actions, and deliver consistent experiences across page loads.
  • Optimizely log: helps you diagnose more difficult issues in a live experiment or campaign. It tells you about the activated experiments or campaigns on the page, the qualified audiences, the changes applied to the page, and even the events fired on each action.

Among all of them, I would say the Optimizely log is your best friend. This log contains all the information that developers need for troubleshooting experiments, segments, audiences, goals and code execution on page load.

I would like to discuss the Optimizely log with a few examples. If it does not serve your requirements, you can go with the other options listed above.

Optimizely log:

The Optimizely log allows you to “read Optimizely’s mind” by printing the execution of targeting and activation decisions, variation changes, events, and third-party integrations on a page in your browser’s console.

Use the Optimizely log to investigate all kinds of issues, even those you can’t easily diagnose otherwise. For goal QA, it is the best weapon in Optimizely.

The log can help you to check:

  • Is an experiment or campaign loading correctly?
  • Is the user qualified for an audience condition?
  • Are the changes you made applied on the page?
  • Is the page activated on the URL (or a specific condition)?
  • Is a click/custom goal fired?

You can check all of this with the Optimizely log, but here I will show examples for page activation (pageview goals) and click/custom goals.

You can access the log in two ways:

  1. With a query parameter: add this query parameter to the URL and reload:
optimizely_log=info
  2. With the JavaScript API: paste this into the browser console and hit enter:
window.optimizely.push('log');

This will then return something like:

For pageview/click/custom goals, filter the console with “Optly / Track”. I have highlighted click/pageview/custom goals simultaneously in the screenshot below.

For custom segments/attributes, filter the console with “Optly / API”. I have highlighted custom segments in the screenshot below.

Remember: custom segments can only fire once per session. So you might need to check in a new private window each time to see that the custom segments are working.

Reference: if you are specifically troubleshooting audiences, pages, campaigns, traffic allocation & bucketing, variation code or click/custom goals, visit here.
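Finally, if you only need to know which experiments you are bucketed into, the JavaScript API (the second tool listed above) can answer that directly without reading the log. The sketch below assumes Optimizely Web’s window.optimizely.get('state') API; the function takes the state object as a parameter so the logic can be exercised with a mock, and the mock’s IDs are hypothetical.

```javascript
// Summarize bucketing from an Optimizely Web "state" object.
// In the browser: summarizeBucketing(window.optimizely.get('state'))
// getActiveExperimentIds() and getVariationMap() are part of Optimizely Web's
// public JavaScript API; the mock below only illustrates the shapes involved.
function summarizeBucketing(state) {
  const variationMap = state.getVariationMap();
  return state.getActiveExperimentIds().map((id) => ({
    experimentId: id,
    variation: variationMap[id] ? variationMap[id].name : null,
  }));
}

// Hypothetical mock of the state object, for illustration:
const mockState = {
  getActiveExperimentIds: () => ['1234567890'],
  getVariationMap: () => ({ 1234567890: { id: '987', name: 'Variation #1' } }),
};
console.log(summarizeBucketing(mockState));
```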

Over the years, I have heard many people say that their consultants, and in some cases the developer who is building the test, also do the QA of the variations. This is potentially hazardous and prone to missing bugs before the test goes live. As a result, a research- and data-backed hypothesis test or A/B test could bring back incorrect results. In this article, I have summarized six key reasons why you should be doing independent manual QA of all of your variations.

1. Developers and consultants are too close to the test:

Your developer, and potentially your consultant, are so close to the test they are building that it is very easy for them to miss small but important details if they are in charge of QA.

2. Emulators are not the real thing:

“A veggie hotdog tastes the same as a real hotdog.” Sorry, but they are not the same. Your end users will not use an emulator; they will use a real device and browser. If you are not manually checking variations on actual devices and browsers, you may miss issues specific to real browsers.

3. Interactions:

If you are not manually checking the variations, you might miss issues related to interacting with the page or variations: opening an accordion, clicking a button, or going through the funnel itself.

4. Checking goal firing:

If you are not doing QA across all browsers manually, you might not be able to verify that your metrics are set up correctly. In the worst case, you look at your results after a couple of weeks and notice that your primary metric did not fire properly for some browsers, or at all!
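Part of this goal-firing check can be scripted as a quick sanity pass over the requests you see in the Network tab. The sketch below assumes you have copied out a list of request URLs; the `'track'` marker is an assumption about what your testing tool's goal calls contain, so substitute whatever substring your tool actually uses.

```javascript
// Sketch: filter captured request URLs down to likely goal-tracking
// calls. The default 'track' marker is an assumption, not a documented
// endpoint of any specific testing tool.
function findTrackingCalls(urls, marker = 'track') {
  const needle = marker.toLowerCase();
  return urls.filter((u) => u.toLowerCase().includes(needle));
}
```

Running this on the captured URLs for each browser under test makes it easy to spot a browser where the expected goal call never shows up.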

5. Breakpoints and changing device display mode:

If you are using emulators, you might miss issues related to switching from portrait to landscape mode or vice versa. By QAing the variations on actual mobile and tablet devices, you can easily check that the variation displays correctly in both modes, and that the behaviour is as it should be when the user switches between the two.
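As a rough illustration of what a manual orientation pass is checking, the sketch below classifies a viewport by orientation and breakpoint bucket. The 768px and 1024px breakpoints are common conventions, not your site's actual CSS.

```javascript
// Sketch: classify a viewport the way a manual portrait/landscape QA
// pass would. Breakpoint widths are illustrative assumptions.
function classifyViewport(width, height) {
  const orientation = width >= height ? 'landscape' : 'portrait';
  const bucket = width < 768 ? 'mobile' : width < 1024 ? 'tablet' : 'desktop';
  return { orientation, bucket };
}
```

Rotating a phone from 375x667 to 667x375 keeps the same breakpoint bucket but flips the orientation, which is exactly the transition worth checking on a real device.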

6. Tests from a Human Perspective:

Manual QA helps to quickly identify when something looks “off.” When a QA engineer interacts with a website or piece of software as a user would, they can discover usability issues and user interface glitches that automated test scripts simply cannot pick up.

This is why, here at EchoLogyx, our dedicated QA Engineers always use actual devices and test all variations on the targeted browsers to find issues. They have to be thorough to make sure that no bugs are present in any variation or any development work before we deliver it. They check all possible scenarios, and their target is to break the work our engineers have done. Essentially, our QA team are the gatekeepers who approve whether the test is ready to go live. This significantly reduces the risk of a bad user experience reaching the end users of the site.

6 reasons why independent manual testing is a must for Quality Assurance of A/B Testing


Wondering how to increase conversions, but confused about where to start? What if your conversion rate is dropping because your website, landing pages, or emails are influencing conversions negatively? To improve conversions, you need to strengthen your conversion rate optimization (CRO) and find out exactly what is wrong, and A/B testing can help you determine that.

A/B testing involves testing two or more versions of an asset, such as a web page, a newsletter, an ad, or a landing page, to find out which performs better. The result is an effective method for determining which version drives more conversions. In other words, A/B testing plays a major role in your conversion rate optimization (CRO) plan. Over the long run of your marketing operation, a solid A/B testing process will be a significant factor in amplifying your CRO efforts.

Here we are going to discuss how A/B testing can improve CRO through an impactful testing process and plan. These techniques will make it easier to decide where to invest in CRO, understand your visitors' real problems, and run strategic optimization programs in a better way. They will also help you move through the process faster and get your conversion rate improved.

Develop your test without overthinking it!

The issue we have battled with in the past was over-analyzing data before running the initial A/B test for a site. On average, it was taking us two months before we could demonstrate the results of our work, and after all that time we sometimes delivered a conversion rate drop. You can imagine how frustrated the client was. The same applies to your manager, who is investing money in CRO.

Your objective should be to start a test as quickly as possible, because every day of delay is traffic you could have tested, plus the opportunity cost of keeping the lower-performing variant of your site live.

In most cases, you can spot serious issues with your site very quickly. Reports in Google Analytics should give you a few ideas worth testing: the Landing Pages, Goal Flow, and Funnel Visualization reports are great places to start looking for low-hanging fruit.

Use guerrilla user testing to identify usability issues with your site. Show your site to someone who does not use it on a regular basis (it could be a friend or even a random stranger in the nearest coffee shop) and ask them to complete the most important user scenario. Remember, do not over-analyze at the start. Use these ideas to surface a few quick issues and launch your test; it is the best use of your time when you are beginning a conversion optimization process.

1. Test Your Headlines

Headlines are the first thing visitors see on your website, so your landing page should have a smart, relevant headline that immediately grabs their attention. It should be compelling enough to make visitors want to read more, and eventually convert.

2. Test your product page or buying description

Your headlines draw traffic in; your product description pages have to close the conversion into a sale. The description or landing page should give visitors adequate information with minimal effort and good usability: how they will benefit and why they should buy. It should be engaging, with appealing content, and help them decide quickly.

3. Test Your Design and Layout

Your web design may look stunning; however, it is likely too distracting. It might keep visitors from concentrating on the action you want them to take.

Your site should be clean and minimal, with the focus on the most essential components, like the sales copy and the CTA. It should be simple for users to navigate, which also means you may need to build a responsive site that adapts to devices of different sizes.

Apply the findings from your user behavior tests to build a diverse set of design and layout variations for A/B testing.

You can run a heat map to see where your visitors interact most, then redesign your test with better placement of the call-to-action button, whether it is a sign-up button or a buy-now button.

Track the results of your A/B tests

Tracking your A/B tests is the most important task in executing a successful conversion rate optimization process. In the beginning, it is very common not to know what to expect, but with each round of tracking, many ideas and lessons are generated and explored that make your CRO better.

A/B testing reports let you quickly go back to where you started, check the current status, and optimize even further with confident decisions. Without a report, you may end up juggling features and getting confused about the goals and objectives you set before the testing process.

Here is an example of the A/B test notes we use to monitor conversion optimization activity. The one above describes a test that removed unnecessary elements from a product page, for traffic originating from one of our e-commerce sites.

It is also a priceless resource to have when you need to add someone to the conversion optimization team. You simply give the new colleague access to the spreadsheet, or wherever you keep your records, and they can instantly get up to speed on the historical tests and results.

An ideal A/B testing record should include:

  • Domain name, test number, dates, and code
  • Test goals
  • Results
  • Screenshots of the control and variations
  • Test hypothesis
  • Summary and outcome of each A/B test
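One row of such a record might look like the sketch below; every field name and value here is hypothetical, simply mirroring the checklist above.

```javascript
// Sketch: a single A/B test log entry. All names and values are
// illustrative examples, not a required schema.
const testRecord = {
  domain: 'example.com',
  testNumber: 42,
  dates: { start: '2021-03-01', end: '2021-03-21' },
  code: 'EXP-042',
  goal: 'Increase newsletter sign-ups',
  hypothesis: 'A shorter sign-up form will lift completions',
  screenshots: ['control.png', 'variation-a.png'],
  results: '+8% sign-ups on variation A',
  summary: 'Winner: variation A; roll out the shorter form',
};
```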

We also recommend creating a spreadsheet where you list what you have learned from each test. It is an amazing asset: the insights help with future tests, and it also helps in building an effective team of experts.

How A/B testing can improve your CRO?


A/B testing is a method widely known for its revolutionary use in enhancing customer experiences, ending in better conversion. It is one of the best online promotion and marketing strategy tools for any business. A/B testing can be used to test anything from a website's headline to email newsletters, or even campaigns running on search engine ads and display advertising. In e-commerce, it can also be used to test action buttons, checkout experiences, and the overall user experience. The results are real-time data that can be used to compare the experiences and pick the best one, making the decision easy.

Well-planned A/B testing can play a vital role in increasing the efficiency of your marketing efforts. Choosing the most effective method for increasing conversions in a promotion, and implementing it, makes your marketing efforts more profitable and successful.

These data-driven decisions consider many variables across a range of communication methods. Starting around the year 2000, Google successfully used A/B testing to work out how best to display a growing number of search results on its listing pages, and the rest is history!

How is A/B Testing used?

A/B testing helps determine how the little differences in each customer interaction in a marketing campaign influence behavior. The changes can be as small as the wording of a title in a newsletter or email or the text structure in a banner, or as big as the image itself, the call-to-action buttons, or the layout of a landing page. The idea is to test two variations with a controlled group of customers and find the successful one. This can be repeated again and again to improve the content and the marketing communication with customers.

Common applications of A/B testing that have proven useful:

  • E-Newsletters
  • Email marketing campaigns
  • Websites
  • Apps
  • Internet advertising (banner/PPC/AdWords)

How does A/B testing work?

A business's first A/B test should start with identifying its key metrics and how the business defines the success of its marketing campaigns. This can be the number of clicks, sales, sign-ups, average time spent, downloads, and so on. The metric is then used to set up two or more variations, with a controlled audience split across them. It is always recommended not to pick more than one metric at a time; otherwise it is challenging to know which one made the difference. To analyze the results properly, the variations should be tested simultaneously under similar conditions, and the most successful variation selected for final use.
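The controlled audience split described above is usually implemented as deterministic bucketing, so a returning visitor always sees the same variation. Here is a minimal sketch, with an illustrative non-cryptographic hash rather than what any particular tool actually uses:

```javascript
// Sketch: stable 50/50 assignment by hashing a visitor id.
// The hash is a simple illustration, not a production choice.
function bucket(visitorId, variations = ['control', 'variation']) {
  let h = 0;
  for (const ch of visitorId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return variations[h % variations.length];
}
```

Because the assignment depends only on the visitor id, the same visitor is bucketed identically on every page load without any extra state.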

For instance, while testing web pages, keep the existing version as the control, make a second version as the test, and split the traffic at a chosen ratio, or divide it equally. Some of this is highly technical, so tools are used to ease the process of creating variations, splitting traffic, and measuring results. Some can be used for free, for example Google Analytics Content Experiments. There are also widely used premium tools specializing in conversion rate optimization that will run the A/B test and make recommendations to aid your overall marketing efforts; some of the best tools around are Optimizely, Monetate, AB Tasty, Adobe Target, etc.

What to test with A/B testing?

Webpage or Landing Pages

If you are a magazine, your key metric might be the number of sign-ups you receive from promotional campaigns. Take a different version of a promotion in which your sign-up page's call to action is optimized to increase sign-ups. For instance, you might come up with a new slogan that could perform better than the existing one. Here, the new slogan would be the test (variation A) and the existing one the control (variation B). You can then drive traffic equally to both variations for a period of time. After the test ends, you can see the results and which variation drives more sign-ups. The best-performing version is then used for the campaign going forward, or you keep the earlier version, depending on the results.

Examples of the variables that can be tested

  • Subject titles and subtitles
  • Product descriptions
  • Text (length, style)
  • Offers
  • Price
  • Images
  • Call to action button (text, color, position)
  • Colour schemes
  • Forms (length, question)
  • Page layouts
  • Shipping/Return Policies

While running an A/B test, it is always necessary to keep the sample size statistically relevant. For instance, if you get 2-3 sign-ups in one day, the result won't be meaningful because of the size: the greater the sample size, the more reliable the result. The result also depends on the size of the performance difference. For example, if we expect a blue button to increase sign-ups by 5%, we need enough data to make that change detectable. Across tens of thousands of visitors, a 5.6% change is significant, but with only 15-20 results the outcome won't mean anything. With low traffic, statistically sound results are very hard to achieve; in such cases, repeating the test can give extra insight and help find the best-converting variation.
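The sample-size intuition above can be made concrete with a two-proportion z-test, a standard way to judge whether the difference between control and variation is more than noise. This is a sketch, not a full power analysis:

```javascript
// Sketch: z-score for the difference between two conversion rates.
// Roughly, |z| > 1.96 corresponds to significance at the 95% level.
function zScore(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pPool = (convA + convB) / (totalA + totalB); // pooled rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}
```

With tens of thousands of visitors per arm, a modest lift clears the 1.96 threshold; with only 15-20 conversions in total, it never will.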

Newsletters and Emails

Newsletters and email marketing are among the greatest marketing tools for keeping an edge over your competitors. Every small visual and content choice can make a huge difference to your success: the subject line decides whether the recipient opens the email at all, and titles, call-to-action buttons, colors, and so on all affect the click-through rate. Developing a solid test plan and executing it can make your campaigns much more successful, with scientifically grounded results.

Insights

A/B testing does not only help us pick the best-performing variation for a better conversion rate; the data we gather can also feed other areas of our marketing efforts and make decisions easier. If we know a blue call-to-action button works best across other tested areas, we can apply the same approach in other places to make the results more optimized. And when we know what type of newsletter has been successful, we can design the next campaign faster, saving decision-making time on promotional materials.

What is A/B testing in Digital Marketing?
