A/B Test Action Block
Split contacts into variations to test messages or paths.
Summary
The A/B Test block splits contacts into variations so you can test which message or offer performs better.
When to use
- Test two subject lines or two SMS wordings
- Test two offers (10% off vs $5 off)
- Test different send times (same day vs next day)
How it works
- Assign a percent split to each variation.
- Remaining traffic is held until you pick a winner (within 30 days).
- After you pick a winner, Patch sends the remaining contacts to the winning path.
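Patch performs the split for you, so nothing below is required reading. But as a rough mental model (not Patch's actual implementation), a percentage split can be sketched in Python: hashing the contact ID maps each contact to a stable bucket, so the same contact always lands in the same variation. The function and variable names here are illustrative only.

```python
import hashlib
from collections import Counter

def assign_variation(contact_id: str, splits: dict) -> str:
    """Deterministically assign a contact to a variation.

    splits maps variation name -> fraction of traffic (fractions sum to 1.0).
    Hashing the contact ID keeps the assignment stable across runs.
    """
    # Map the contact ID to a pseudo-random number in [0, 1)
    digest = hashlib.sha256(contact_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000

    cumulative = 0.0
    for variation, fraction in splits.items():
        cumulative += fraction
        if bucket < cumulative:
            return variation
    return variation  # guard against floating-point rounding

# Example: 40% to A, 40% to B, 20% held back as Remaining Traffic
splits = {"A": 0.4, "B": 0.4, "remaining": 0.2}
assign_variation("contact-123", splits)
```

Over many contacts, the observed proportions converge on the configured percentages; for any single contact, the assignment never changes between runs.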
Required Automation Setup
A/B Tests work best when you first build a cohort (the exact group you want to test), then split that cohort into variations.
Required block order (common pattern)
1. At Time (Once ONLY scheduled or Right Now)
2. Select Contacts or Select By Events (this adds the cohort into the automation)
3. A/B Test
4. Message blocks (Send SMS / Send Email) and the rest of each path (offers, delays, filters, etc.)
Why this matters: A/B Test can only split contacts who actually enter the automation. The Select blocks are what “load” the cohort.
How to pick a winner (and what to expect)
1) Give it time to run
- You have 30 days to come back and choose a winner.
- Winner selection and results won't be meaningful until enough contacts have flowed through the variations.
2) Decide whether you want a winner at all
You can run A/B tests two ways:
- Winner-based test: Keep a percentage as "winner allocation" (Remaining Traffic) so you can send the rest to the best performer later.
- No-winner test: Allocate 100% of contacts to variations and set winner allocation to 0%. This is useful when you only care about learning, not rolling out a "winner" to the remaining group.
3) Review results in the right place
To validate the outcome:
- Open the message blocks under each variation (Send SMS / Send Email).
- Check the Stats tab for each message (performance by variation).
4) Confirm the “winner” makes sense
When Patch selects a winner using its algorithm:
- Verify that it aligns with what you see in the message Stats (for example, higher clicks, replies, conversions, or revenue, depending on your goal).
- If the "winner" doesn't match your goal metric, adjust your test design next time (for example, test a clearer CTA, change the offer framing, or shorten the message).
Tip: If you’re testing SMS, limit the difference between variants to one thing (like the first line or the CTA). That makes the winner easier to trust.
Best practices
- Test one change at a time.
- Let the test run until you have enough sends to trust the result.
- Use a simple success metric (clicks, redemptions, bookings).
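"Enough sends to trust the result" can be made concrete with standard statistics (this is a general-purpose sanity check, not a Patch feature). A two-proportion z-test tells you whether the gap between two variations' click rates is likely real or just noise:

```python
import math

def two_proportion_z(clicks_a: int, sends_a: int,
                     clicks_b: int, sends_b: int) -> float:
    """Return the z-score for the difference between two click rates.

    |z| >= 1.96 corresponds to roughly 95% confidence that the
    difference is not just random variation.
    """
    rate_a = clicks_a / sends_a
    rate_b = clicks_b / sends_b
    # Pooled click rate across both variations
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    # Standard error of the difference between the two rates
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (rate_a - rate_b) / se

# Example: variation A got 120 clicks from 1,000 sends;
# variation B got 90 clicks from 1,000 sends.
z = two_proportion_z(120, 1000, 90, 1000)  # z is about 2.19, above 1.96
```

If |z| is below 1.96, the test probably needs more contacts before you declare a winner.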
Tabs in this block
Editor
This is where you configure the block. After you change settings, save the block so the automation uses the updated configuration.
Stats
Stats populate after the block runs for at least one contact. For most blocks, Stats are mainly for quick troubleshooting (example: times triggered and last completed).
Task Log
The Task Log is the best place to troubleshoot a specific run. Each row is one task (usually one contact) that passed through this block.
- Time: when the task ran
- Run Duration: how long it took
- Contact: click the person icon to open the contact profile
- View Payload: click VIEW PAYLOAD to see all data passed through the automation for that task
- Task ID: helpful for internal debugging and support
Warnings & Errors
- Errors must be fixed before the automation can be enabled.
- Warnings do not always block enabling, but they usually mean something is missing or risky.
- If you see an error or warning banner, fix it in the Editor, then re-open the block to confirm it is cleared.