Once you have your sequences built or rebuilt, you’re now into the day-to-day maintenance of your outbound process.
One thing I’ve seen literally hundreds of sales teams mess up, even at big, established, successful companies (maybe even especially at big, established companies), is optimizing their sequences.
This is where all the work you’ve done to date comes together, where all the data you’re collecting from your sequences and your targeting framework can be used to continually refine what you’re doing and inform your strategy going forward.
At the heart of it, optimization is about using A/B testing to learn which specific message and offer works best with each of your personas. And believe it or not… most companies mess this up in any number of ways, starting with not doing A/B testing at all.
This video is all about how to A/B test your way into an optimized outbound process.
Using these methods, along with the other things we’ve covered in this course, I’ve consistently seen over 50% open rates, 25% reply rates, and 10% meeting rates for my sequences—results that are practically unheard of and would blow most outbound teams’ minds. But it’s totally possible! It takes time to get there, but it’s extremely worthwhile to do it.
Statistical significance
The first mistake most outbound teams make when it comes to A/B testing has to do with statistical significance. Statistical significance is just a mathematical way to determine whether a test has enough data for you to trust that the difference between variations is real and not just random noise. But many sales teams have more or less arbitrary rules for how to decide which variation of an email, A or B, wins.
One time I was asked to review a company’s outbound program. This was a successful company with a lot of sales reps that brought in a lot of money. I found that they had a few sequences with a lot of volume: thousands and thousands of sends in each sequence. But I noticed that each A/B variation they were actively using only had about 300-500 deliveries, while some of their previous variations had thousands of deliveries.
When I asked the SDR manager how they were determining success, they said, “I wait til there’s 250 or 300 emails sent and see which one is the winner”.
This… is not the way.
But this is how the majority of outbound sales teams approach optimization! They set a static number (500 deliveries, or 300, or 100), see which of the A/B variations wins, and go from there. The problem is that a fixed cutoff produces false positives: the “winner” at that point is often just noise. And you end up optimizing for a worse result over time, exactly the opposite of what you’d hoped to do!
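To see how easily a static cutoff fools you, here’s a quick simulation sketch in Python. The numbers are hypothetical: assume email B is genuinely better (a 12% true reply rate vs. 10% for A), and apply the “wait for 300 deliveries and pick whichever is ahead” rule from above.

```python
import numpy as np

# Simulate the "wait for 300 deliveries and pick whichever is ahead" rule
# when email B is genuinely better (12% vs. 10% true reply rate).
# These rates are hypothetical, chosen only for illustration.
rng = np.random.default_rng(42)
trials, n = 100_000, 300

replies_a = rng.binomial(n, 0.10, size=trials)  # replies to A in each trial
replies_b = rng.binomial(n, 0.12, size=trials)  # replies to B in each trial

wrong_winner = np.mean(replies_a > replies_b)  # worse email ends up ahead
print(f"Worse email crowned the winner: {wrong_winner:.1%}")  # roughly 20%
```

Even with a real two-point gap between the emails, a fixed 300-delivery cutoff crowns the worse email about one time in five.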
To optimize properly, you need to calculate statistical significance for each A/B test you run, and only declare a winner once you have a 90% or greater confidence level. Some software will do this for you, so you don’t have to do all the math yourself. And if all of this is gobbledygook to you… don’t worry. We’ve linked to a few statistical significance calculators in the resource section of this course.
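If you’re curious what those calculators are doing under the hood, here’s a minimal sketch of one common approach, a two-proportion z-test, in plain Python. The reply counts below are made up for illustration:

```python
from math import erf, sqrt

def confidence(replies_a, sends_a, replies_b, sends_b):
    """Two-proportion z-test: confidence that variations A and B truly differ."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# 38/250 replies (15.2%) vs. 52/250 (20.8%) looks like a clear win...
print(f"{confidence(38, 250, 52, 250):.1%}")  # ~89.7%, still below the 90% bar
```

A gap that looks decisive to the naked eye, 15.2% vs. 20.8%, still doesn’t clear the 90% bar at 250 deliveries each. That’s exactly why eyeballing a winner goes wrong.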
Testing variation
Another common mistake outbound teams make with optimization is that they test too few variables at once.
I know, I know! A statistics professor somewhere just had a heart attack, and it’s my fault. What the hell do you mean? Isn’t the only way to A/B test to isolate one variable at a time? Well… yes, kind of. But also no.
In the very beginning, when you’re spinning up a new sequence for one of your personas, or testing a new messaging framework, you actually want to run two wildly different variations of your email with different subject lines, different body copy, and different calls to action.
Now, you may be thinking—isn’t this going to confound our test? How will we know which variable was responsible for the better result? You won’t.
But the thing is… when you’re spinning up a new sequence or breaking into a new market, you don’t know what’s going to work. You just have your hypotheses, and your goal is to validate or invalidate them as fast as possible. Running two wildly different variations helps you figure out which overall direction works better much faster. And once that initial A/B test is done, you can start to isolate each variable in subsequent tests and figure out which subject line, body copy, and CTA work best.
This may seem counterintuitive, but if you limit yourself to only testing one variable at a time, you’re also limiting the speed with which you can learn, especially when trying to break into a new market or persona for the first time.
If you’re only optimizing on statistically significant results, testing multiple variables at once in your first A/B test, and then isolating variables one at a time, before you know it you’re going to land on a really well-optimized outbound program: one that produces consistent results for you and your team and performs better than you or your leadership team thought possible.
Now, you’re ready for the last step in this process: Automation.