A customer sees a Google ad, clicks but does not convert. Three days later, they receive an email. A week later, they see the ad again on Facebook. Finally, they convert. But who generated that conversion? Does Google deserve credit? Email? Facebook? This question, seemingly simple, is one of the most complex in digital marketing. The answers determine ROI, budget allocation, and strategy for every company.
Attribution is the art of assigning conversion credit to one or more marketing channels. Traditional models have been in use for over fifteen years: last-click (the last click before conversion gets 100% of the credit), first-click (the first touchpoint gets 100%), or linear (equal distribution across all touchpoints). These models are simple and easy to automate, but largely inaccurate: last-click ignores the crucial role of earlier impressions, while first-click underestimates the touchpoints that actually close the sale.
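The three traditional models above can be sketched in a few lines. This is a minimal illustration, not a production attribution system; the channel names are hypothetical.

```python
from collections import defaultdict

def attribute(journey, model):
    """Distribute one conversion's credit across an ordered journey.

    journey: list of channel names in the order the user touched them.
    model: "last_click", "first_click", or "linear".
    Returns a dict mapping channel -> credit; credits sum to 1.0.
    """
    credit = defaultdict(float)
    if model == "last_click":
        credit[journey[-1]] += 1.0       # final touch gets everything
    elif model == "first_click":
        credit[journey[0]] += 1.0        # first touch gets everything
    elif model == "linear":
        for touch in journey:            # equal split across all touches
            credit[touch] += 1.0 / len(journey)
    return dict(credit)

journey = ["google_ads", "email", "facebook"]
print(attribute(journey, "last_click"))   # {'facebook': 1.0}
print(attribute(journey, "linear"))       # each touch gets 1/3
```

The same journey yields three different answers depending on the model, which is exactly the problem the rest of this article explores.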
The post-third-party-cookie era
The phase-out of third-party cookies, which Chrome began testing on a portion of users in January 2024 (Safari and Firefox already block them by default), has radically complicated attribution. Google, Meta, and other advertising platforms can no longer reliably track users from one third-party site to another. Apple's App Tracking Transparency made IDFA tracking on mobile opt-in. Data-driven attribution models, which depended on massive cross-site historical data, degraded sharply.
Alternatives are emerging: Google offers GA4 with machine learning models based on each company’s conversion data. Meta uses a hybrid approach combining first-party data and statistical modeling. These solutions offer better precision than legacy models, but remain imperfect: a 2025 IAB study shows 56% of marketers find their attribution models “not reliable enough for major budget decisions”.
Multi-touch vs single-touch
Multi-touch models acknowledge that a conversion rarely results from a single touchpoint, so they distribute credit according to different logics. The "position-based" model assigns 40% of the credit each to the first and last touchpoints and splits the remaining 20% across the middle. The "time decay" model gives more weight to clicks closer to the conversion. These approaches approximate reality better but remain arbitrary.
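The two weighting schemes can be sketched as follows. The 40/40/20 split follows the article; the seven-day half-life in the time-decay version is an assumed default that each business would tune.

```python
def position_based(journey):
    """U-shaped model: 40% to the first touch, 40% to the last,
    remaining 20% split evenly across the middle touches."""
    n = len(journey)
    credit = {ch: 0.0 for ch in journey}
    if n == 1:
        credit[journey[0]] = 1.0
    elif n == 2:                      # no middle: split 50/50
        credit[journey[0]] += 0.5
        credit[journey[1]] += 0.5
    else:
        credit[journey[0]] += 0.4
        credit[journey[-1]] += 0.4
        for touch in journey[1:-1]:
            credit[touch] += 0.2 / (n - 2)
    return credit

def time_decay(journey, half_life_days=7.0):
    """journey: list of (channel, days_before_conversion) pairs.
    A touch's weight halves every `half_life_days` (assumed window)."""
    weights = [(ch, 0.5 ** (days / half_life_days)) for ch, days in journey]
    total = sum(w for _, w in weights)
    credit = {}
    for ch, w in weights:
        credit[ch] = credit.get(ch, 0.0) + w / total
    return credit

print(position_based(["google_ads", "email", "facebook", "direct"]))
print(time_decay([("google_ads", 10), ("email", 7), ("facebook", 0)]))
```

Note how both models encode an editorial opinion as a constant (0.4, a half-life): this is the "arbitrary" part the article points to.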
High-performing companies test multiple models in parallel. They compare the conclusions of last-click, first-click, linear, and machine-learning attribution. Divergences reveal where the simple models are wrong. An ecommerce business may discover that Google Ads overperforms in first-click but underperforms in last-click: Google generates awareness, but another channel (email, direct, social) finalizes the sale.
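The comparison described above amounts to aggregating credit over many journeys under each model and looking for channels whose shares diverge. A minimal sketch, with made-up journeys for illustration:

```python
def channel_shares(journeys, model_fn):
    """Aggregate per-channel credit over many journeys, normalized to shares."""
    totals = {}
    for journey in journeys:
        for ch, c in model_fn(journey).items():
            totals[ch] = totals.get(ch, 0.0) + c
    grand_total = sum(totals.values())
    return {ch: c / grand_total for ch, c in totals.items()}

# Hypothetical journey data for three conversions.
journeys = [
    ["google_ads", "email"],
    ["google_ads", "facebook", "email"],
    ["facebook", "email"],
]
first = channel_shares(journeys, lambda j: {j[0]: 1.0})   # first-click
last = channel_shares(journeys, lambda j: {j[-1]: 1.0})   # last-click

# A channel with a high first-click share but a low last-click share
# (here google_ads: 2/3 vs 0) opens journeys; email closes them.
for ch in sorted(set(first) | set(last)):
    print(f"{ch}: first-click {first.get(ch, 0.0):.0%}, "
          f"last-click {last.get(ch, 0.0):.0%}")
```

In this toy dataset, Google Ads takes two-thirds of first-click credit and none of last-click credit: precisely the awareness-versus-closing split the article describes.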
Methodological challenges
Attribution at very fine granularity (tying one unique conversion to one click) is an illusion. Modern users live in a multi-device ecosystem: they see an ad on mobile and convert on desktop. Conversion data is fragmented: an ecommerce site records every transaction, but a SaaS business loses sight of leads who reject cookies.
Exposure bias also plays a major role. A user who sees a Google ad 100 times over a week may eventually convert simply by temporal coincidence, not because of the ad. Attribution models cannot distinguish causal effect from chance.
Pragmatic approaches
The best practitioners adopt a mixed approach. They use attribution data for tactical decisions (adjusting budgets between Google and Facebook), but also run experiments to validate hypotheses. Turning off a channel for two weeks and observing the impact on conversions gives a more honest picture than any statistical model.
Another approach is to use control groups: observe how users who did not see an ad behave, then compare them with exposed users. This makes it possible to estimate the channel's incremental effect, beyond chance.
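The exposed-versus-control comparison reduces to a simple lift calculation. The numbers below are hypothetical, purely for illustration:

```python
def incremental_lift(exposed_conv, exposed_n, control_conv, control_n):
    """Estimate a channel's incremental effect by comparing conversion
    rates between exposed users and a held-out control group."""
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    # Relative lift: how much the ad raises the baseline conversion rate.
    lift = (exposed_rate - control_rate) / control_rate
    return exposed_rate, control_rate, lift

# Hypothetical test: 600 of 20,000 exposed users convert (3.0%)
# vs 400 of 20,000 control users (2.0%) -> 50% relative lift.
er, cr, lift = incremental_lift(600, 20_000, 400, 20_000)
print(f"exposed {er:.1%}, control {cr:.1%}, lift {lift:.0%}")
```

Note that the control group's 2% baseline is exactly the "conversion by coincidence" the previous section warned about: only the 1-point gap is plausibly caused by the ad, and a real test would also check that the gap is statistically significant.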
Multi-channel attribution will remain a permanent challenge. But those who accept its imprecision, test continuously, and cross-check models against experiments gain far better insights than those who rely on platform default reports.
