It's been an amazing 3 years coming together in this forum to collaborate, innovate, support, and inspire each other about our shared usage of Datorama. While this is not quite a goodbye, we are excited to announce that we are getting a fresh start in our new home within the Salesforce Trailblazer Community. We have a ton of fun new content planned and you may even see the revival of some of our most popular posts from the past few years.
We’ll be keeping this group around for a bit for you to peruse, but as of November 15, we will no longer be allowing new posts or comments. Be sure to join our new group at https://www.salesforce.com/success-datorama to keep the conversation going.
We can’t wait to see you there!
How to Implement A/B Split Testing in Datorama
Part 1 of 2
A couple of notes about this solution: it applies at the media buy granularity. If your test is at a different grain (such as the campaign or creative grain), you'll need to adjust the granularity setting within your calculated metrics for the calculations to work as expected. Additionally, the solution assumes a single control media buy (or campaign, or creative) per test group, but you may have any number of test media buys per test group.
→ See the A/B Split Test Demo Dash HERE
Ever since your agency adopted Datorama, you’ve been killing it as the Datorama admin. Your client services and sales teams are over the moon about the dashboards you’ve created, and your ad ops team is really loving the automated reporting you’ve put in place to help them with optimization and campaign monitoring. Your team is relying on you to keep the good times rolling, and because Datorama is so much more than your run-of-the-mill reporting tool, the opportunities are virtually limitless! So hop on that blazing saddle as we take you to a new frontier in A/B split testing.
The design of the test is ultimately up to you, and we assume you have some experience with A/B split testing and the math behind it. As noted at the beginning of this article, the test design works when a single media buy key is assigned as the control per test group, and one or more media buys are designated as test media buys. In this article, conversion rate is the success metric being tested. The solution discussed is for standard split testing (not multivariate testing). Our test rests on calculating the Z score, then using a Z score table to arrive at the correct p-value.
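As a concrete sketch of that premise, the snippet below computes a Z score from two conversion rates and converts it to a two-tailed p-value directly (using the standard normal CDF in place of a lookup table). The conversion and impression counts are made-up illustrative numbers, and the pooled standard error shown is one common choice:

```python
# Two-proportion z-test sketch (illustrative numbers only).
from statistics import NormalDist

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) comparing the conversion rates of A and B."""
    p_a = conv_a / n_a          # control conversion rate
    p_b = conv_b / n_b          # test conversion rate
    # Pooled proportion under the null hypothesis that the rates are equal
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = z_test(conv_a=120, n_a=10_000, conv_b=150, n_b=10_000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

With these sample counts the p-value lands just above 0.05, a useful reminder that an apparent lift in conversion rate isn't automatically significant.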
The Maths & Stats behind it all
The formula used for calculating Z score:
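In its usual two-proportion form, with $\hat{p}_t$ and $\hat{p}_c$ the observed test and control conversion rates and $SE$ the standard error defined next:

```latex
Z = \frac{\hat{p}_t - \hat{p}_c}{SE}
```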
Standard Error Formula:
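One common (pooled) form, where $x_t, x_c$ are conversion counts and $n_t, n_c$ are impression counts for the test and control media buys:

```latex
SE = \sqrt{\hat{p}\,(1 - \hat{p})\left(\frac{1}{n_t} + \frac{1}{n_c}\right)},
\qquad
\hat{p} = \frac{x_t + x_c}{n_t + n_c}
```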
Identifying Test Groups and Test IDs
In order to conduct a proper split test, we’ll need to identify which placements (media buys) belong to the test group, and which belong to the control group. We also assume that you’ll want to run multiple A/B split tests, so for placements that are a part of a test, we’ll need to assign a test ID. Here’s the logic:
1. Determine whether the placement is part of a test (if yes, proceed to step 2).
2. Determine whether the placement should be placed into the test group or the control group.
3. Assign a unique test ID to all placements belonging to a particular test.
Here’s how this logic would play out with a sample data set. Note that not every media buy is required to be assigned to a test (for unassigned media buys, the test group and test ID fields remain null).
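As a rough sketch of this logic (all media buy keys, test groups, and test IDs below are hypothetical, not from the sample data set):

```python
# Hypothetical lookup: media buy key -> (test group, test ID).
# Media buys absent from the lookup are not part of any test.
TEST_LOOKUP = {
    "MB-1001": ("Control", "TEST-A"),
    "MB-1002": ("Test",    "TEST-A"),
    "MB-1003": ("Test",    "TEST-A"),
    "MB-2001": ("Control", "TEST-B"),
    "MB-2002": ("Test",    "TEST-B"),
}

def classify(media_buy_key):
    """Step 1: is the buy part of a test? Steps 2-3: group and test ID."""
    return TEST_LOOKUP.get(media_buy_key, (None, None))

for key in ["MB-1001", "MB-1002", "MB-9999"]:
    group, test_id = classify(key)
    print(key, group, test_id)   # MB-9999 yields None, None (not in a test)
```

Note that each test ID has exactly one control entry, matching the single-control-per-test-group assumption stated earlier.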
Next, you’ll need a way to categorize your various media buys by test group and assign the test IDs. I’ve found the easiest way to accomplish this is with the newly released data classification feature (under the Harmonization Center, and currently in public beta), where a lookup table contains the media buy keys for every placement belonging to a test, along with their respective test group and test ID assignments (see screenshot below).
Sample lookup table:
(Scroll down to the next reply to see part 2 of this article)