TL;DR Nerdy Math About Leadgen Outcome Predictions

At BizXpand, our sequences span 33 days and success is the high bar of a full 1:1 meeting between a qualified prospect and a salesperson. That means there is a lot of time from campaign launch until we can assess outcomes. How long? The TL;DR answer: 200 to 400 prospects that have fully completed the sequence. The long answer is nerdy and fun (at least for me).

 

Meeting or no-meeting is a true-or-false event. Going back to statistics school, that means a binomial distribution. We also run into sampling theory, because we are looking at a few hundred samples out of a larger campaign. For us, the prior probability p is assumed to be our average: 1 in 94 prospects becomes a “success”/SQL.
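To make the model concrete, here is a minimal Python sketch using the average rate p = 1/94; the batch of 100 concluded prospects is purely illustrative:

    from scipy.stats import binom

    p = 1 / 94    # assumed average rate: 1 in 94 concluded prospects becomes an SQL
    n = 100       # illustrative number of concluded prospects

    # Probability of seeing exactly 0, 1, or 2 SQLs in n concluded prospects
    for k in range(3):
        print(k, round(binom.pmf(k, n, p), 3))   # roughly 0.34, 0.37, 0.20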


 

Binomial Probability

How many samples (i.e., weeks of running the campaign) do we need before we can conclude with 90% certainty that the campaign will perform better than 1/200, i.e. within the range of usability? And when can we conclude it is likely to be above average (1/94)?

[Figure: Probability that 1+ SQLs happen in the first n samples.]
[Figure: Probability that 2+ SQLs happen in the first n samples.]
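The two plotted quantities can be sketched from the binomial survival function. This is a minimal reconstruction assuming the curves use the average rate p = 1/94; the scan up to 500 samples is illustrative:

    from scipy.stats import binom

    p = 1 / 94                                   # assumed average SQL rate
    for target in (1, 2):                        # the two curves: >=1 SQL and >=2 SQLs
        for n in range(1, 501):
            prob = binom.sf(target - 1, n, p)    # P(at least `target` SQLs in n samples)
            if prob >= 0.5:
                print(f"P(>= {target} SQL) passes 50% at n = {n}")
                break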

If our campaign has a 1% (1/100) success rate, the odds of having seen our first SQL pass 50% at roughly the first 69 samples (1 − 0.99^n crosses 0.5 at n ≈ 69). There are two problems with this conclusion when it confronts reality:
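That crossing point can be checked directly (pure Python, using the 1/100 example rate):

    import math

    p = 0.01                                        # example success rate of 1/100
    n = math.ceil(math.log(0.5) / math.log(1 - p))  # smallest n with P(>=1 SQL) >= 50%
    print(n, 1 - (1 - p) ** n)                      # ~69 samples, probability just over 0.5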

The first problem is that confidence intervals are only meaningful once we have at least 2 successful outcomes. A confidence interval tells us how far a sample’s ‘local probability’ may sit from the actual probability. If we have 0 SQLs, our ‘local average’ after 150 samples is 0/150 = 0, and the usual interval collapses to nothing.
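Here is a minimal sketch of that collapse using the textbook normal-approximation (Wald) interval; the counts 0/150 and 2/150 are illustrative:

    import math

    def wald_interval(successes, n, z=1.645):        # ~90% normal-approximation interval
        p_hat = successes / n
        half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
        return max(0.0, p_hat - half_width), p_hat + half_width

    print(wald_interval(0, 150))   # (0.0, 0.0) -- zero width, tells us nothing
    print(wald_interval(2, 150))   # a usable, non-zero-width interval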

 

Next, when is a sample defined as ‘completed’?

Theoretically, a success could happen at any time during the sequence, or even days or weeks after the last email was sent. In reality, we know success outcomes usually bias toward the late sequence steps while negative outcomes bias toward the early steps. There is a wildly complex probability distribution feeding into this binomial distribution. From a practical standpoint we usually:

  1. Subtract bounced emails if the bounce rate is high. E.g., a 30% bounce rate means we need roughly 43% more names (1 ÷ 0.7 ≈ 1.43) to reach the same number of learning samples (see the sketch after this list).
  2. Arbitrarily consider a sample to have reached its conclusion when the last email in the sequence is sent (unless it was stopped earlier by something like a negative response). This date is likely at least 1 week too early. In our case that means we need 167 prospects ‘concluded’ to have a 50/50 chance of getting a first meeting.
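A minimal sketch of the bounce-rate adjustment from point 1 (the 30% bounce rate and the 200-sample target are illustrative):

    import math

    def names_needed(target_concluded, bounce_rate):
        # To end up with `target_concluded` deliverable samples, upload extra names
        # to compensate for the expected bounces.
        return math.ceil(target_concluded / (1 - bounce_rate))

    print(names_needed(200, 0.30))   # 286 names, i.e. roughly 43% more than the target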

By the time we have 200 samples ‘concluded’ (last email sent, but no extra waiting) and we have no SQLs, we can confidently say our success rate is worse than 1/50 (2%), i.e. it is not going to be one of our success campaigns. What we can’t say is whether it will be average, weaker but useful, or a total disaster.

By the time we have 400 samples concluded with no meetings, we can confidently say the campaign will at best be very, very weak.

Similarly, with a single SQL in the first 200 concluded, we can confidently say the campaign will at least be usable. With exactly 1 SQL in the first 400 concluded, we can conclude that the campaign will be below average or worse.
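Those cut-offs can be sanity-checked with an exact one-sided (Clopper-Pearson) upper bound on the success rate. This is a minimal sketch at 90% confidence, not necessarily the exact procedure behind the article's numbers:

    from scipy.stats import beta

    def upper_bound(successes, n, confidence=0.90):
        # One-sided Clopper-Pearson upper bound on the true success rate
        return beta.ppf(confidence, successes + 1, n - successes)

    print(upper_bound(0, 200))   # ~0.0114 (~1/87): worse than 1/50
    print(upper_bound(0, 400))   # ~0.0057 (~1/174): at best very, very weak
    print(upper_bound(1, 400))   # ~0.0097 (~1/103): below the 1/94 average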

 

Conclusions:

  • I used math (with rationalizations and corrections) to justify what we intuitively knew.
  • Good campaigns usually reveal themselves earlier than bad campaigns. Usually, however, is not the same as always: benchmark decision points must be defined at the campaign start and maintained.

 

 
