The Finding: Most of Your Cold Email Value Is Invisible
73–82% of cold email campaign value remains unmeasured due to attribution gaps. The industry’s standard last-touch attribution model, which credits 100% of conversion value to the final touchpoint before a deal closes, undervalues cold email by 3.2–4.7x compared to more sophisticated models.
When you optimise based on this undervalued data, you’re making decisions with incomplete information. That’s why campaigns that “test well” on reply rate often fail to deliver expected business outcomes.
Where Cold Email Sits in the Buying Journey
Modern B2B deals involve an average of 6.8 touchpoints across 3.2 channels before conversion. Cold email typically sits at positions 2–4 in that sequence — awareness and consideration, not closing.
Consider a typical journey:
- LinkedIn connection request (ignored)
- Cold email (opened, no reply)
- Website visit from email link
- Second cold email (replied to)
- Sales call scheduled
- Proposal sent
- Deal closed
Under last-touch attribution, the proposal email gets 100% credit. Under first-touch, the LinkedIn request gets full credit. Both are wrong. Cold emails contributed 18–27% of the total conversion probability — but standard models assign either 0% or 100%.
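The difference between these models is easiest to see in code. A minimal sketch, using the six pre-conversion touchpoints from the journey above (channel labels are illustrative, not from any tracked dataset):

```python
def last_touch(touches):
    # All credit to the final touchpoint before conversion.
    return {touches[-1]: 1.0}

def first_touch(touches):
    # All credit to the first touchpoint.
    return {touches[0]: 1.0}

def linear(touches):
    # Equal credit to every touchpoint; repeated channels accumulate.
    share = 1.0 / len(touches)
    credit = {}
    for t in touches:
        credit[t] = credit.get(t, 0.0) + share
    return credit

journey = ["linkedin", "cold_email", "web_visit",
           "cold_email", "call", "proposal"]

print(last_touch(journey))   # proposal gets everything
print(first_touch(journey))  # linkedin gets everything
print(linear(journey))       # cold_email gets 2/6 of the credit
```

Even the naive linear model already assigns the two cold emails a third of the credit, where last-touch and first-touch assign them nothing.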
What Multi-Touch Analysis Actually Shows
When you use Markov chain analysis — calculating how conversion probability changes when each touchpoint is removed from the journey — cold email’s true contribution becomes clear:
- Cold email’s real value is 3.2–4.7x higher than last-touch models suggest
- 73–82% of cold email’s influence occurs in non-closing positions (touchpoints 2–4)
- Attribution gaps create false negative signals — you deprioritise channels that appear ineffective but are actually driving the pipeline
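The removal-effect idea can be sketched in miniature. This is a toy: a simplified removal effect computed directly over observed journeys (any converting journey that touched the removed channel is treated as lost), rather than a full Markov transition model, and the journey data is invented for illustration:

```python
# Each journey: (list of touchpoints, converted?)
journeys = [
    (["linkedin", "cold_email", "web", "cold_email", "call"], True),
    (["cold_email", "web"], False),
    (["linkedin", "web", "call"], True),
    (["cold_email", "call"], True),
]

def conversion_rate(journeys, removed=None):
    # Conversions that relied on the removed channel are treated as lost.
    converted = sum(
        1 for touches, conv in journeys
        if conv and (removed is None or removed not in touches)
    )
    return converted / len(journeys)

base = conversion_rate(journeys)
channels = {c for touches, _ in journeys for c in touches}

# Removal effect: fractional drop in conversion rate when the channel
# is taken out of every journey.
removal_effects = {
    c: (base - conversion_rate(journeys, removed=c)) / base
    for c in channels
}

# Normalise effects into fractional credit per channel.
total_effect = sum(removal_effects.values())
credit = {c: e / total_effect for c, e in removal_effects.items()}
```

On this toy data, cold email earns roughly 22% of the normalised credit despite never being the last touch, which is the shape of result the analysis above describes.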
The Volume Problem: More Sends, Worse Attribution
Attribution accuracy decreases as send volume is spread across more campaigns. To get statistically valid attribution, each campaign needs roughly 1,000 conversions. At a 3.43% average reply rate, that requires 29,154 sends per campaign. For 50 campaigns: roughly 1.5 million sends monthly just for attribution validity.
Most agencies operate at roughly one-third of this volume. Their “attribution insights” carry less than 30% statistical power, which in practice is little better than guessing.
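The arithmetic behind those volume figures, reproduced as a sketch (truncating the division matches the article's 29,154; the 1,000-conversion threshold is the article's stated requirement, not a derived one):

```python
required_conversions = 1_000   # per campaign, for valid attribution
reply_rate = 0.0343            # 3.43% average reply rate
campaigns = 50

# Truncation reproduces the 29,154 figure quoted in the text.
sends_per_campaign = int(required_conversions / reply_rate)
monthly_sends = campaigns * sends_per_campaign

print(sends_per_campaign)  # 29154
print(monthly_sends)       # 1,457,700 — roughly 1.5 million
```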
Why Time-Decay Models Make It Worse
Many teams try time-decay attribution, giving more credit to touchpoints closer to conversion. This sounds reasonable but fails for cold email because of response latency patterns:
- 42% of replies come within 90 minutes
- 28% between 90 minutes and 24 hours
- 19% between 1–7 days
- 11% after 7+ days
When a prospect replies immediately but converts 30 days later, time-decay attribution assigns only 3–7% credit to the cold email that triggered the entire sequence.
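A sketch of why this happens, assuming a standard exponential time-decay model with a 7-day half-life (a common default; the article does not specify one). With the cold email 30 days before conversion and two later touches:

```python
def time_decay_credit(days_before_conversion, half_life=7.0):
    # Each touch's raw weight halves for every `half_life` days it
    # sits before the conversion, then weights are normalised to 1.
    weights = [0.5 ** (d / half_life) for d in days_before_conversion]
    total = sum(weights)
    return [w / total for w in weights]

# Cold email 30 days out, sales call 10 days out, proposal 2 days out.
credit = time_decay_credit([30, 10, 2])
print(f"cold email credit: {credit[0]:.1%}")
```

Under these assumptions the cold email that started the sequence receives about 4% of the credit, squarely in the 3–7% range described above.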
99.2% of Prospect Journeys Have Broken Data
Even if you wanted perfect attribution, platform fragmentation makes it impossible. Your email tool, CRM, analytics platform, and calendar all track differently. The probability of all systems having complete, synchronised data for a single prospect journey is approximately 0.8%.
In practical terms: 99.2% of your prospect journeys have attribution gaps you can’t fix with better tooling.
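One way a figure like 0.8% arises: joint completeness is the product of per-system completeness. A sketch, assuming each of the four systems independently captures a journey completely about 30% of the time (an assumed rate for illustration, not a measured one):

```python
systems = ["email_tool", "crm", "analytics", "calendar"]
p_complete_per_system = 0.30  # assumed, for illustration

# All four must be complete simultaneously for an unbroken journey.
p_all_complete = p_complete_per_system ** len(systems)
print(f"{p_all_complete:.2%}")  # 0.81%
```

Even respectable per-system completeness collapses when multiplied across a stack, which is why better tooling alone cannot close the gap.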
What to Do About It
- Accept the gap exists. 73–82% of cold email value goes unmeasured. Decisions made on incomplete data will be suboptimal by definition.
- Report ranges, not point estimates. Instead of “this campaign generated £50,000,” report “this campaign generated between £13,500 and £50,000, with true value likely around £27,000–£36,000.”
- Use better models even imperfectly. A multi-touch model that is only 70% accurate still outperforms a precisely measured last-touch model, because the last-touch model is answering the wrong question.
- Design for attribution validity. When testing cold email variations, ensure you have 19,000+ sends per variant for attribution significance, not just response significance.
- Track attribution-weighted replies. Weight each reply by its expected conversion contribution based on position in the journey, not just count total replies.
- Prioritise integration over features. A platform with 90% data completeness but basic features outperforms a feature-rich platform with 40% data completeness. This is why tools that integrate deeply with your existing stack matter more than tools with flashy dashboards.
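The “attribution-weighted replies” suggestion above can be sketched as follows. The position weights are illustrative assumptions (chosen to concentrate value at positions 2–4, consistent with the analysis earlier), not values from the research:

```python
# Assumed weights: expected conversion contribution by journey position.
position_weight = {1: 0.10, 2: 0.25, 3: 0.30, 4: 0.25, 5: 0.10}
DEFAULT_WEIGHT = 0.05  # late or untracked positions

def weighted_replies(reply_positions):
    # Sum each reply's position weight instead of counting replies.
    return sum(position_weight.get(p, DEFAULT_WEIGHT)
               for p in reply_positions)

replies = [2, 3, 3, 4, 6]          # journey positions of five replies
raw_count = len(replies)            # naive metric: 5 replies
score = weighted_replies(replies)   # weighted metric: 1.15
```

Two campaigns with identical raw reply counts can now score very differently if one earns its replies in the high-contribution positions.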
The attribution gap isn’t a problem to solve. It’s a constraint to master. Practitioners who acknowledge these boundaries and optimise within them gain an advantage over those chasing perfect measurement in an imperfect system: they see the 3.2–4.7x of channel value that last-touch models hide.
Methodology: Analysis combines multi-touch attribution research, Markov chain modelling applied to cold email campaign data, and statistical power calculations for attribution testing validity. For more on how open rates mislead optimisation decisions, see our companion research.