2022 Data Analysis around Outbound Lead Generation

The data science team at BizXpand has been hard at work over the winter holidays to analyze the cumulative 2022 outbound lead generation campaign data.

As a reminder, BizXpand does pure outbound lead generation: all campaigns follow the same basic recipe, all companies are in tech & IT, and all prospects are in German-speaking countries and the rest of Europe.  As a result, we get good statistical data.

Yield is slightly better than last year at 1 SQL per 76 names (vs. 1 per 94 in 2021), with a nearly identical distribution.  What is driving that?

[Scatter chart: average is 76 prospects / SQL, but data noise is high.]
  • Our technique did not change substantially between the years; we made only very minor adjustments.
  • While the market differed between the two years, in our opinion it was neither better nor worse overall.
  • The products & services represented, combined with the desired target audiences, is likely the biggest uncontrolled variable.  We believe this is the dominant effect, so there is no lesson here other than that we are doing well.


Last year’s analysis reported that LinkedIn acceptance rate had near-zero correlation with SQL rates, that response rates had a modest correlation with SQL rates, and that using the local language boosts campaigns even when the audience is very comfortable with English.  This year we looked closer at the link between response rates and SQL yield.


If we were to drive a stronger response rate, would we get more meetings (SQLs)?

Unlikely, for two reasons:

  1. In the ‘usable range’ (0–250 prospects / SQL) on the left side of the scatter chart, you can see that there is no correlation.  Comparing weak vs. strong campaigns shows almost no correlation (r < 0.2).  The signal only separates OK campaigns from terrible ones.
  2. Correlation is not causation.  There is no obvious mechanism that would explain why provoking more responses (most of which are negative) would also increase accepted meetings.
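
The correlation check above can be sketched in a few lines.  The campaign numbers below are made up purely for illustration (our real figures live in the campaign data, not here); they just show how response rate vs. prospects-per-SQL gets tested for a linear relationship:

```python
import math

# Illustrative only: hypothetical campaigns in the 'usable range'.
response_rate     = [0.12, 0.25, 0.08, 0.31, 0.18, 0.22, 0.15, 0.27]
prospects_per_sql = [80, 150, 95, 60, 210, 70, 130, 110]

n = len(response_rate)
mean_x = sum(response_rate) / n
mean_y = sum(prospects_per_sql) / n

# Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y))
cov   = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(response_rate, prospects_per_sql))
var_x = sum((x - mean_x) ** 2 for x in response_rate)
var_y = sum((y - mean_y) ** 2 for y in prospects_per_sql)
r = cov / math.sqrt(var_x * var_y)

print(f"Pearson r = {r:.2f}")
```

With these particular made-up numbers, |r| comes out well under 0.2, i.e. the same "almost no correlation" picture described above.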


Can we use this to identify poor performing campaigns early in the process?

Unfortunately no:

  1. Confirming a good campaign early:  Look at the above scatter chart and imagine that a new campaign has a high response rate (e.g. 20%).  Based on the data above, the expectation is that the campaign should perform well.  However, 1) strong campaigns reveal their first SQLs early, so we would not need other signals, and 2) email responses can arrive long after the emails are sent, so early email response rates are always lower than final values.
  2. Confirming a bad campaign early:  If we assume a low response rate (e.g. 10%), we see from the above chart that it is equally likely to be a good campaign as a bad one.  There is nothing to learn from this signal.


When can we predict a campaign will not be successful?

This year, the data science team brushed up on their hard-core statistics to try to answer the question “When can we predict a campaign will not be successful?”   Our experience led us to believe that 200 prospects completed in a campaign can give some directional indication, and 400 prospects completed is enough to actually make an evaluation.  What does the math say?
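
A quick sanity check of that rule of thumb (a sketch, assuming the portfolio average of 1 SQL per 76 prospects is the true rate): if a campaign is actually average, how often would it still show zero SQLs after n prospects?

```python
# Probability of seeing 0 SQLs in n prospects if the true SQL rate
# is the portfolio average of 1 per 76 (binomial, k = 0 case).
p = 1 / 76  # assumed true per-prospect SQL probability

for n in (100, 200, 400):
    p_zero = (1 - p) ** n
    print(f"{n} prospects, 0 SQLs: happens by chance {p_zero:.1%} of the time")
```

At 200 prospects, an average campaign still shows zero SQLs about 7% of the time, so zero SQLs is only a directional warning.  At 400, that drops to roughly 0.5%, which is why 400 completed prospects is where we feel comfortable actually evaluating.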


  1. The effect we are looking for is called ‘confidence intervals’ (CIs).  E.g. if we ask 10 random people in Austria whether their phone is Android or Apple and 8 say Android, what does that tell us about the probability that the overall population is actually 50%/50%?  CIs give us upper and lower bounds on the percentage of Android users at a defined confidence level, e.g. 70% confidence that the Android share is between 40% and 80%.
  2. Using traditional techniques, the confidence interval is useless until you get a few positive and a few negative cases.  More advanced techniques (the Wilson score interval) make better approximations.  Digging into that, we can model a few different campaign trajectories:
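
As a sketch of what those trajectories look like, here is the standard Wilson score interval formula applied to the phone-survey example above and to a campaign with a single SQL at different sizes (95% confidence here, not the 70% used in the illustration):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95%)."""
    if n == 0:
        return (0.0, 1.0)  # no data: the interval spans everything
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(phat * (1 - phat) / n
                                     + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# The survey example from the text: 8 of 10 say Android.
lo, hi = wilson_interval(8, 10)
print(f"8/10 Android -> 95% CI: {lo:.0%} to {hi:.0%}")

# A campaign trajectory: 1 SQL observed at various campaign sizes.
for n in (76, 200, 400):
    lo, hi = wilson_interval(1, n)
    print(f"1 SQL / {n} prospects -> 95% CI on SQL rate: "
          f"{lo:.3%} to {hi:.3%}")
```

Even with 1 SQL in 400 prospects, the interval on the true SQL rate stays wide, which is the mathematical version of "early data is noise."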




Combining this with the learnings on interpreting response rates: looking at any ‘data’ before at least 400 names have fully completed the sequence is the same as visiting a fortune teller and having them read your tea leaves.  Any intuition gained by looking at the ‘early data’ is damaging.  We have since changed our dashboard layout to avoid the temptation to distract ourselves and our clients with noise:


There is simply no substitute for 400+ names completing the sequence, and there are no other useful signals (link clicks, open rates, and reply rate data only confuse your intuition early in the campaign).


More analysis of reaching out in the local language vs. English, and some discussion of our business adjustments in 2022, I’ll save for another blog post.

