Analytics·Dec 2025·7 min read

The Only Email Metrics That Actually Matter

Spoiler: open rates aren't one of them. Here's what to track and why, based on $3M+ in email-attributed revenue.

I've wasted too much time in meetings where the first slide is a chart of open rates going up, and everyone nods like a bunch of bobbleheads. Open rates are a joke. They don't predict revenue, they don't measure impact, and since Apple's Mail Privacy Protection, they're not even accurate. After generating over $3M in email-attributed lifetime value across four companies, here's what actually matters. And what you should stop obsessing over.

The Metric Hierarchy

Not all email metrics are created equal — hell, some are downright useless. I organize them in a hierarchy from most to least important, and I report them to stakeholders in that order. The higher the metric, the closer it is to actual business impact.

From most to least important:

  • Revenue per send: the only metric your CFO actually cares about
  • Conversion rate by flow: where is your program actually working?
  • Click-through rate: real engagement, not passive opens
  • Revenue attribution: which emails drive which dollars?
  • List growth vs. churn: are you building or bleeding?
  • Deliverability health: bounces, complaints, inbox placement

Revenue Per Send

This is the single most important metric in lifecycle marketing. It puts money in the bank. Revenue per send tells you how much cash each email generates on average, taking into account list size, engagement quality, and offer relevance.

Calculate it by dividing total email-attributed revenue by total emails sent in a period. Track it weekly, segment it by email type, and trend it over time. At Zendrop, we tracked revenue per send across every automation tier, and it drove every optimization decision. We found that upsell automations had 5x the revenue per send of newsletters. That told us exactly where to invest our time.
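The calculation above can be sketched in a few lines. All campaign numbers here are made up for illustration; the function names are my own, not from any ESP API.

```python
from collections import defaultdict

# Hypothetical campaign log: (email_type, sends, attributed_revenue)
campaigns = [
    ("newsletter",   50_000, 4_000.0),
    ("upsell",        8_000, 3_200.0),
    ("cart_abandon", 12_000, 9_000.0),
]

def revenue_per_send(rows):
    """Return (overall revenue per send, {email_type: revenue per send})."""
    totals = defaultdict(lambda: [0, 0.0])  # email_type -> [sends, revenue]
    for email_type, sends, revenue in rows:
        totals[email_type][0] += sends
        totals[email_type][1] += revenue
    by_type = {t: rev / s for t, (s, rev) in totals.items()}
    overall = sum(r for _, _, r in rows) / sum(s for _, s, _ in rows)
    return overall, by_type

overall, by_type = revenue_per_send(campaigns)
# With these made-up numbers, upsell earns $0.40 per send vs. the
# newsletter's $0.08: a 5x gap, and a clear signal about where to invest.
```

Segmenting by email type is the whole point: the blended number hides exactly the comparison that drives decisions.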

Conversion Rate by Flow

Aggregate conversion rates are damn near meaningless. You need to know the conversion rate for each individual flow: welcome series, cart abandonment, upsell, winback. Each flow serves a different purpose and should be measured against its own benchmark.

At Scratch Checkout, we A/B tested the approval-to-loan-esign journey and moved conversion from 55% to 57%. That 2-percentage-point lift directly increased signed loan agreements. In a high-value transaction flow, small conversion improvements can have a huge revenue impact. You only find these opportunities when you measure flow by flow.

A flow with a low conversion rate isn't necessarily broken. It might be serving a nurture purpose. Measure each flow against its specific goal: welcome series measures product adoption, not purchases. Upsell flows measure upgrade rate, not clicks. Define the conversion event before you judge the conversion rate.
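A minimal sketch of per-flow measurement, where each flow carries its own conversion event. Flow names, goals, and counts are hypothetical.

```python
# Each flow is judged against its own goal event, never a shared one.
flows = {
    "welcome":      {"entered": 4_000, "converted": 1_200, "goal": "activated account"},
    "cart_abandon": {"entered": 2_500, "converted":   300, "goal": "completed purchase"},
    "upsell":       {"entered": 1_000, "converted":    80, "goal": "upgraded plan"},
}

def flow_conversion_rates(flows):
    """Conversion rate per flow, each against its own defined goal."""
    return {name: f["converted"] / f["entered"] for name, f in flows.items()}

for name, rate in flow_conversion_rates(flows).items():
    print(f"{name}: {rate:.1%} of entrants hit '{flows[name]['goal']}'")
```

The 8% upsell flow isn't "worse" than the 30% welcome flow; they're measuring different events against different benchmarks.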

Click-Through Rate (Not Click-to-Open Rate)

CTR (total clicks divided by total sends) is the real engagement metric. Click-to-open rate is a derivative that rewards you for low open rates, which is misleading as hell. A 50% CTOR on a 2% open rate works out to a 1% CTR: almost nobody is clicking your emails. A 7% CTR means 7% of everyone you sent to took action. That's the number that matters.
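Here's the arithmetic behind the 50% CTOR trap, as a quick sketch:

```python
def ctr(clicks, sends):
    """Click-through rate: clicks over everyone you sent to."""
    return clicks / sends

def ctor(clicks, opens):
    """Click-to-open rate: clicks over opens only."""
    return clicks / opens

# 100k sends, a 2% open rate, 1k clicks.
sends, opens, clicks = 100_000, 2_000, 1_000

assert ctor(clicks, opens) == 0.50  # looks impressive on a slide
assert ctr(clicks, sends) == 0.01   # only 1% of recipients acted
```

Same campaign, two wildly different stories. Report the one with sends in the denominator.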

At Pickleball.com, we maintain a 7% average CTR across our email program. That drives tens of thousands of monthly clicks to the website, which directly ties to tournament registrations and revenue. The CTR is high because every send is segmented. Tournament players get tournament content, casual players get tips and gear. Relevance drives clicks. Clicks drive revenue.

Revenue Attribution: Solving the Hard Problem

Email attribution is genuinely hard. Most ESPs use last-touch attribution with a 5-day window, which means email gets credit for any purchase within 5 days of a click. This inflates email's contribution and creates false confidence.

How I approach attribution:

  • Last-touch attribution as the baseline. It's what your ESP reports and what stakeholders are used to seeing.
  • Holdout testing for true incrementality. Randomly suppress 10% of a segment from an automation and compare conversion rates. This tells you what email actually caused vs. what would have happened anyway.
  • Multi-touch awareness. Understand that email often assists conversions that get attributed to other channels. A customer who reads 5 nurture emails and then Googles your brand gives that conversion to organic search, but email did the work.
  • Window sensitivity. Test different attribution windows to understand how inflated your numbers are.
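The holdout step above reduces to a simple difference in conversion rates. The segment sizes and counts here are hypothetical.

```python
def incremental_lift(treated_conv, treated_n, holdout_conv, holdout_n):
    """What email actually caused: treatment conversion rate minus
    the suppressed holdout's conversion rate."""
    return treated_conv / treated_n - holdout_conv / holdout_n

# 90% of the segment receives the automation; 10% is randomly suppressed.
lift = incremental_lift(treated_conv=540, treated_n=9_000,
                        holdout_conv=40, holdout_n=1_000)
# 6.0% converted with the email vs. 4.0% without, so the automation
# is responsible for roughly 2 points of conversion, not all 6.
```

Last-touch would have credited email with the full 6%. The holdout shows a third of that would have happened anyway, which is exactly the gap between reported and real.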

The $3M+ in email-attributed LTV across my career uses a mix of last-touch and holdout-validated attribution. I'm transparent about methodology because inflated numbers help nobody, especially not the next marketer who inherits the program and can't replicate the reported results.

List Growth vs. Churn Rate

Your list is a depreciating asset. Contacts go stale, people change email addresses, engagement decays. If you're not growing faster than you're churning, your program is shrinking, even if total list size looks flat.

Track these monthly:

  • Net list growth: new subscribers minus unsubscribes, bounces, and sunset suppressions
  • Engaged list percentage: contacts who clicked in the last 90 days divided by total list
  • Acquisition source quality: which signup sources produce contacts that actually engage?
  • Churn by segment: are you losing free users or paying customers? Very different problems.
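The first two of those metrics can be computed directly. All numbers below are hypothetical.

```python
def net_list_growth(new_subs, unsubs, bounces, sunsets):
    """New subscribers minus all removals for the period."""
    return new_subs - (unsubs + bounces + sunsets)

def engaged_pct(clicked_90d, total_list):
    """Share of the list that clicked in the last 90 days."""
    return clicked_90d / total_list

net = net_list_growth(new_subs=1_800, unsubs=600, bounces=250, sunsets=400)
engaged = engaged_pct(clicked_90d=22_000, total_list=80_000)
# net = 550: the list grew, but 1,250 removals ate most of acquisition.
# engaged = 0.275: 27.5% of the list is actually alive.
```

A flat total list with a shrinking engaged percentage is a program in decline, which is why both numbers belong on the same dashboard.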

What to Stop Tracking

Open rates are unreliable since Apple's Mail Privacy Protection inflated them for iOS users. They were already a weak signal. An open doesn't mean someone read your email, just that their client loaded images. Stop putting open rates on slide decks. If a stakeholder asks about them, redirect to CTR and revenue.

Unsubscribe rate in isolation is also misleading. A healthy email program should have some unsubscribes. It means you're reaching people often enough to trigger a decision. A 0% unsubscribe rate usually means you're not sending enough, not that everyone loves your emails. Track it for deliverability health, not program health.

Build Dashboards, Not Reports

Your metrics should live in a real-time dashboard, not a monthly PDF. I build dashboards that show revenue per send, flow conversion rates, CTR trends, and list health in one view. When something dips, I can diagnose it in minutes instead of waiting for a monthly review.

The best email programs are measured by what they make, not by who opened them. Track revenue, track conversions, track clicks. Let the vanity metrics go.

Want to see the results?

Check out the case studies behind these strategies.