Why Last-Click Attribution Is Broken
The Fundamental Problem
Last-click attribution gives 100% credit to the final touchpoint before purchase. This systematically overvalues:
- Brand search (customers who already decided to buy search your name as the last step)
- Retargeting (ads shown to shoppers who were already coming back)
- Amazon organic (where many purchases finalize after cross-channel research)
- Email (a final-touch reminder for shoppers already in the purchase funnel)

And it systematically undervalues:
- Awareness advertising (Meta, TikTok, YouTube — the channels that introduce your brand)
- Content marketing (blog posts that educate during the research phase)
- Social media (organic posts that build brand affinity)
- Word-of-mouth/PR (referrals that don't generate trackable clicks)
The Real-World Impact
Scenario: A brand spending $10K/month on Meta ads and $5K/month on Google Shopping sees this in last-click reporting: Meta ROAS: 1.5x (appears unprofitable). Google Shopping ROAS: 5x (appears highly profitable).
The decision: Cut Meta budget, increase Google.
What actually happens: Google Shopping was capturing demand that Meta created. Without Meta generating awareness and consideration, Google’s search volume drops. Within 6 weeks of cutting Meta, Google Shopping revenue drops 30% — because there are fewer branded searches and fewer shoppers entering the purchase funnel.
This is the attribution trap: cutting the channel that creates demand because it gets no credit from the channel that captures demand.
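The scenario above can be worked through directly. A quick sketch (spend and ROAS figures are taken from the scenario; the blended view is the calculation that last-click reporting hides):

```python
# Last-click reporting from the scenario above.
meta_spend, meta_roas = 10_000, 1.5      # Meta looks unprofitable
google_spend, google_roas = 5_000, 5.0   # Google looks highly profitable

meta_revenue = meta_spend * meta_roas        # 15,000
google_revenue = google_spend * google_roas  # 25,000

# Blended ROAS treats both channels as one system, which is closer to
# reality when one channel creates the demand the other captures.
blended_roas = (meta_revenue + google_revenue) / (meta_spend + google_spend)
print(f"Blended ROAS: {blended_roas:.2f}x")  # → Blended ROAS: 2.67x
```

A 2.67x blended ROAS tells a different story than "Meta is at 1.5x, cut it": the mix as a whole may be working.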
Attribution Models Explained
Last-Click (Default in Most Platforms)
How it works: 100% credit to the last touchpoint before conversion.
Bias: Overvalues bottom-funnel, conversion-capturing channels.
When it’s acceptable: Single-channel businesses (Amazon-only sellers where all attribution happens within Amazon’s ecosystem).
First-Click
How it works: 100% credit to the first touchpoint that introduced the customer.
Bias: Overvalues top-funnel, awareness channels.
When it’s useful: Understanding which channels drive customer discovery. Not useful for budget allocation alone.
Linear
How it works: Equal credit to every touchpoint in the purchase journey.
Bias: Treats all touches equally — even though some are more influential than others.
When it’s useful: As a sanity check against last-click. If linear attribution tells a very different story than last-click, it indicates your last-click data is misleading.
Time Decay
How it works: More credit to touchpoints closer to the conversion, less to earlier ones.
Bias: Similar to last-click but less extreme. Still undervalues awareness.
When it’s useful: When you want a middle ground between last-click and linear.
Position-Based (U-Shaped)
How it works: 40% to first touch, 40% to last touch, 20% distributed among middle touches.
Bias: Undervalues mid-funnel touches; weights discovery and conversion equally.
When it’s useful: Brands that value both customer acquisition (first touch) and conversion (last touch).
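The rule-based models above are all just different weight schedules over an ordered journey. A minimal illustration (the time-decay half-life and the short-journey handling for the U-shaped model are my assumptions; each platform implements the details differently):

```python
def allocate(touches, model, decay=0.5):
    """Split one conversion's credit across an ordered list of touchpoints."""
    n = len(touches)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        raw = [decay ** (n - 1 - i) for i in range(n)]  # later touches weigh more
        weights = [w / sum(raw) for w in raw]
    elif model == "position_based":  # 40% first, 40% last, 20% spread in between
        if n <= 2:
            weights = [1.0 / n] * n
        else:
            mid = 0.2 / (n - 2)
            weights = [0.4] + [mid] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touches, weights))

journey = ["Meta ad", "blog post", "email", "Google Shopping"]
for model in ["last_click", "linear", "position_based"]:
    print(model, allocate(journey, model))
```

Running the same journey through several models side by side is the fastest way to see how much your channel rankings depend on the model choice rather than the data.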
Data-Driven Attribution (Google’s Model)
How it works: Uses machine learning to assign credit based on the actual contribution of each touchpoint, calibrated against conversion data.
Limitation: Requires significant conversion volume to be statistically valid.
When it’s useful: Accounts with 600+ conversions per month (Google’s minimum threshold). The most accurate platform-specific model available.
The Platform Attribution Problem
Each Platform Claims Full Credit
Amazon reports your Amazon ad as the conversion driver. Meta reports your Meta ad as the conversion driver. Google reports your Google ad as the conversion driver. If a customer interacted with all three before purchasing, the total attributed revenue across all three platforms exceeds actual revenue.
Example:
- Actual revenue: $50,000/month
- Amazon-reported ad revenue: $30,000
- Meta-reported ad revenue: $25,000
- Google-reported ad revenue: $15,000
- Total platform-reported revenue: $70,000 (40% higher than actual)
The $20,000 discrepancy is double-counting: multiple platforms claiming credit for the same conversions. This is unavoidable because each platform measures within its own ecosystem.
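The double-counting in the example above is a two-line calculation (figures taken from the example):

```python
actual_revenue = 50_000
platform_reported = {"Amazon": 30_000, "Meta": 25_000, "Google": 15_000}

total_reported = sum(platform_reported.values())        # 70,000
double_counted = total_reported - actual_revenue        # 20,000 claimed twice
over_attribution = total_reported / actual_revenue - 1  # how inflated the sum is

print(f"Over-attribution: {over_attribution:.0%}")  # → Over-attribution: 40%
```

Running this check monthly against your own numbers quantifies how much you can trust the sum of platform dashboards.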
Amazon’s Attribution Specifics
Amazon uses a 7-day click attribution window for Sponsored Products and a 14-day window for Sponsored Brands, Sponsored Display, and DSP. Any purchase within that window after an ad click is attributed to the ad — even if the shopper would have purchased organically.
The implication: Amazon ACoS includes some organic sales that happened to occur within the attribution window. Your “true” ACoS (for genuinely ad-driven sales) is higher than reported. TACoS (ad spend ÷ total revenue) partially corrects this by measuring ad spend against all revenue.
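The ACoS/TACoS distinction is easiest to see with numbers. A sketch with hypothetical monthly figures (the dollar amounts are invented for illustration):

```python
# Hypothetical month: ad-attributed sales include some purchases that
# would have happened organically within the attribution window anyway.
ad_spend = 3_000
ad_attributed_sales = 12_000  # what Amazon reports as ad revenue
total_sales = 40_000          # ad-attributed + organic

acos = ad_spend / ad_attributed_sales  # spend vs. ad-attributed revenue
tacos = ad_spend / total_sales         # spend vs. ALL revenue

print(f"ACoS: {acos:.1%}, TACoS: {tacos:.1%}")  # → ACoS: 25.0%, TACoS: 7.5%
```

Because TACoS divides by all revenue, it is immune to the attribution-window inflation that ACoS inherits.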
Meta’s Attribution Post-iOS 14.5
iOS privacy changes reduced Meta’s ability to track conversions accurately. Meta now uses: a 7-day click / 1-day view default attribution window (reduced from 28-day click / 1-day view), modeled conversions for events it can’t directly observe, and Aggregated Event Measurement for iOS users.
The implication: Meta under-reports conversions compared to pre-iOS reality. Brands that rely on Meta’s reported ROAS are likely undervaluing Meta’s actual contribution. This is why blended ROAS (total revenue ÷ total ad spend) is essential — it captures Meta’s contribution regardless of platform-level tracking limitations.
How to Measure Marketing Effectively in 2026
Method 1: Blended ROAS / Marketing Efficiency Ratio (MER)
Formula: MER = Total Revenue ÷ Total Marketing Spend
What it captures: The overall efficiency of your marketing program, including cross-channel effects that platform-specific attribution misses.
How to use it: Track MER monthly. If MER is stable or improving while you increase spend on a specific channel, that channel is contributing positively — even if its platform-reported ROAS looks mediocre.
Benchmark: 4-8x MER is healthy for most e-commerce brands. Below 3x suggests overspending or efficiency problems. Above 10x suggests under-investing in growth.
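The formula and benchmarks above can be combined into a simple health check. A sketch (the input figures are hypothetical, and the labels for the 3-4x and 8-10x gaps the benchmarks leave open are my own judgment calls):

```python
def mer_health(total_revenue, total_marketing_spend):
    """Compute MER and classify it against the benchmarks above."""
    mer = total_revenue / total_marketing_spend
    if mer < 3:
        status = "overspending or efficiency problems"
    elif mer < 4:
        status = "borderline"
    elif mer <= 8:
        status = "healthy"
    elif mer <= 10:
        status = "strong"
    else:
        status = "likely under-investing in growth"
    return mer, status

# Hypothetical brand: $120K monthly revenue on $22K total marketing spend.
mer, status = mer_health(total_revenue=120_000, total_marketing_spend=22_000)
print(f"MER: {mer:.1f}x ({status})")  # → MER: 5.5x (healthy)
```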
Method 2: Incrementality Testing
The gold standard: measure the causal impact of a channel by turning it off and observing the revenue change.
Geo-based testing: Run Meta ads in 5 states, pause in 5 comparable states. Measure the revenue difference between the two groups. The difference is Meta’s true incremental contribution.
Platform holdout testing: Pause a specific channel for 2-4 weeks. Compare revenue to a baseline period. The revenue drop (or lack thereof) reveals the channel’s true impact.
Pros: Most accurate method available.
Cons: Requires sufficient scale (meaningful volume in each test group), patience (2-4 weeks minimum), and willingness to sacrifice some revenue during the test period.
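The geo-test arithmetic is straightforward once the test runs. A minimal sketch (all figures hypothetical; this assumes the test and control state groups were well matched on pre-test revenue — real analyses should normalize against each group's baseline):

```python
# Hypothetical geo test: Meta ads live in 5 "test" states, paused in 5
# comparable "control" states for 4 weeks.
test_revenue = 180_000     # states where Meta stayed on
control_revenue = 150_000  # matched states where Meta was paused
test_meta_spend = 20_000   # Meta spend in the test states during the test

incremental_revenue = test_revenue - control_revenue  # Meta's causal lift
incremental_roas = incremental_revenue / test_meta_spend

print(f"Incremental ROAS: {incremental_roas:.1f}x")  # → Incremental ROAS: 1.5x
```

Note that incremental ROAS is usually lower than platform-reported ROAS — that gap is exactly the over-attribution the test is designed to expose.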
Method 3: Media Mix Modeling (MMM)
Statistical models that analyze the relationship between marketing spend by channel and total revenue over time. MMM uses historical data to estimate each channel’s contribution, accounting for: seasonality, competitive activity, economic conditions, and cross-channel effects.
Pros: Doesn’t rely on user-level tracking (privacy-safe). Captures offline and cross-channel effects.
Cons: Requires 2+ years of historical data for accuracy. Best suited for brands spending $50K+/month on marketing. Requires statistical expertise or specialized tools (Robyn, Meridian).
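The core idea behind MMM is a regression of revenue on channel spend over time. A heavily simplified sketch on synthetic data (real tools like Robyn and Meridian add adstock carryover, saturation curves, seasonality, and Bayesian priors — this only shows the skeleton):

```python
import numpy as np

# Synthetic ~2 years of weekly data with known "true" channel effects.
rng = np.random.default_rng(0)
weeks = 104
meta = rng.uniform(5_000, 15_000, weeks)    # weekly Meta spend
google = rng.uniform(2_000, 8_000, weeks)   # weekly Google spend
baseline = 30_000                           # organic / non-marketing revenue
revenue = baseline + 2.1 * meta + 3.4 * google + rng.normal(0, 2_000, weeks)

# Ordinary least squares: revenue ~ intercept + meta + google.
X = np.column_stack([np.ones(weeks), meta, google])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(f"Estimated revenue per $1: Meta ${coef[1]:.2f}, Google ${coef[2]:.2f}")
```

On synthetic data the regression recovers the planted coefficients; on real data, omitted variables and channel correlation are why MMM needs long histories and statistical expertise.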
Method 4: Customer Surveys (“How Did You Hear About Us?”)
Simple but surprisingly useful. Ask new customers: “How did you first hear about our brand?” at checkout or in the post-purchase email. While self-reported data is imperfect (customers forget, misattribute, or pick the first option), it provides directional signal about awareness channels that analytics can’t track: word of mouth, podcast mentions, influencer recommendations, and organic social media discovery.
The Practical Attribution Framework for E-Commerce
For Brands Spending Under $10K/month on Marketing
Use blended ROAS (MER) as your primary metric. Don’t over-invest in complex attribution modeling. Track: total revenue and total marketing spend monthly. If MER stays above 4x while you’re growing, your marketing mix is working. Adjust channels based on directional signals: if pausing Meta for a week causes a revenue dip, Meta is contributing more than its platform-reported ROAS suggests.
For Brands Spending $10K-$50K/month
Use blended ROAS + periodic incrementality tests. Track MER monthly. Run one incrementality test per quarter on your largest discretionary channel (usually Meta or Google). Use the results to calibrate your understanding of each channel’s true contribution. Adjust budget allocation based on incrementality results, not platform-reported ROAS.
For Brands Spending $50K+/month
Use blended ROAS + incrementality testing + media mix modeling. At this spend level, the stakes are high enough to justify sophisticated measurement. MMM provides ongoing channel-level contribution estimates. Incrementality tests validate the MMM outputs. Blended ROAS provides the real-time health check.
Channel-Specific Attribution Tips
Amazon
Use TACoS (not ACoS) as the primary health metric. TACoS captures the interaction between ad-driven and organic sales. A declining TACoS with growing revenue means your advertising is building organic value — even if ACoS alone looks static. TACoS deep dive →
Meta
Don’t trust Meta’s reported ROAS in isolation. It under-reports due to iOS tracking limitations. Instead: track new customer volume from Meta (are you acquiring new customers?), monitor brand search volume on Google and Amazon after Meta spend changes (Meta drives branded search), and use Meta’s Conversion API (server-side tracking) for more accurate conversion data.
Google
Google’s Data-Driven Attribution model is the best platform-specific model available — use it if you have 600+ monthly conversions. For Google Shopping specifically, track not just ROAS but also impression share (are you visible for target keywords?) and new customer percentage.
Email
Email attribution is usually last-click by default (the email was the last thing they clicked before buying). This overvalues email because many email-attributed purchases would have happened anyway — the customer was already in your purchase funnel. Track email’s incremental impact by: monitoring revenue on days you don’t send emails vs days you do, and A/B testing email sends against no-send control groups for specific segments.
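The no-send control group approach reduces to a per-customer lift comparison. A sketch with hypothetical segment sizes and revenue:

```python
# Hypothetical holdout test: 90% of a segment gets the campaign,
# a randomly selected 10% control group gets nothing.
send_group = {"customers": 45_000, "revenue": 81_000}
holdout = {"customers": 5_000, "revenue": 7_500}

rev_per_send = send_group["revenue"] / send_group["customers"]  # $1.80
rev_per_hold = holdout["revenue"] / holdout["customers"]        # $1.50

# Lift per customer, scaled back up to the send group's size.
incremental_per_customer = rev_per_send - rev_per_hold
incremental_revenue = incremental_per_customer * send_group["customers"]

print(f"Truly incremental email revenue: ${incremental_revenue:,.0f}")
```

Here last-click would credit email with the full $81,000, while the holdout shows only a fraction of that revenue was actually caused by the send.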
Frequently Asked Questions
What’s the single best attribution model?
There isn’t one. Every model has biases. The best approach is using multiple methods: platform-specific reporting for campaign-level optimization (which keywords, which ads to scale), blended ROAS for overall marketing health, and incrementality testing for strategic budget allocation decisions.
How do I attribute Amazon sales that were influenced by Meta ads?
You can’t with platform-level data alone. Amazon doesn’t tell you which purchasers previously saw your Meta ads. The practical solution: measure indirectly. When you increase Meta spend, does Amazon branded search volume increase? Does Amazon revenue increase? If yes, Meta is driving demand that Amazon captures. Blended ROAS accounts for this automatically.
Should I stop using last-click reporting?
No — last-click reporting is still useful for campaign-level optimization (which keywords convert, which ads perform, which audiences respond). But don’t use last-click to make budget allocation decisions across channels. That’s where blended ROAS and incrementality testing provide better answers.
How do I explain attribution complexity to my team or investors?
Use the analogy: “Imagine a basketball team. Last-click attribution gives all the credit to the player who scored, ignoring the passes that set up the play. That’s why we track the team’s total points (blended ROAS) alongside individual player stats (platform ROAS).”
Is the attribution problem getting better or worse?
Worse in some ways (iOS privacy, cookie deprecation), better in others (server-side tracking, incrementality testing tools, media mix modeling platforms becoming more accessible). The overall trend: first-party data becomes more valuable, third-party tracking becomes less reliable, and brands need to invest in their own measurement capabilities rather than relying on platform-reported metrics.
Next Steps
Want your marketing measurement evaluated? Our free audit includes an assessment of your current attribution setup and recommendations for improving measurement accuracy. Get your free audit →
Keep reading:
- Amazon ACoS vs TACoS: Which Metric Matters? →
- E-Commerce Unit Economics: The Metrics That Matter →
- Facebook Ads for E-Commerce →
Last Updated: March 2026