Retention Metrics That Matter: What to Track and What to Ignore

Andrew Luxem

Most retention dashboards are cluttered with vanity metrics that feel good but don't predict behavior or revenue.

The dashboard problem

Every CRM team I've worked with has a retention dashboard. Most of them are useless. Not because the data is wrong, but because the metrics on display don't connect to decisions anyone is actually making.

Open rates sit at the top. Click rates below that. Maybe a "retained customers" number that nobody can define consistently. The dashboard exists, it gets reviewed in the Monday meeting, and then nothing changes because the metrics don't tell you what to do next.

At Amazon, the metrics culture was obsessive but specific. Every metric had an owner, a target, and a documented relationship to a customer outcome. That discipline is rare. Most retention dashboards are built bottom-up from what the platform reports by default, not top-down from what the business needs to know.

Here's what actually matters, what doesn't, and how to build a dashboard that drives action.

Repeat purchase rate: the one metric that earns its place

If you track nothing else, track repeat purchase rate. Specifically: the percentage of first-time buyers who make a second purchase within a defined window (30, 60, or 90 days depending on your product cycle).

This metric tells you whether your post-purchase experience is working. It's upstream of LTV. It's predictive. And it's directly influenced by the work CRM teams do every day.

The window matters. A 30-day repeat rate for a consumable product should be higher than for a durable good. Pick a window that reflects your actual replenishment or consideration cycle, not an arbitrary calendar quarter. At Ancestry, the "repeat" behavior was subscription renewal, which meant the retention window was annual. The metric was the same concept, just calibrated to the business model.

Calculate it simply: customers who made purchase 2 in the window, divided by total first-time buyers in the cohort. Track it monthly. Compare cohorts over time.
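The calculation is a single pass over purchase history. A minimal Python sketch, assuming purchases are available as (customer_id, purchase_date) pairs (a hypothetical shape, not any specific platform's export):

```python
from datetime import date, timedelta

def repeat_purchase_rate(purchases, cohort_start, cohort_end, window_days=60):
    """Share of first-time buyers acquired in [cohort_start, cohort_end)
    who purchase again within window_days of their first purchase.

    purchases: iterable of (customer_id, purchase_date) tuples.
    """
    # First purchase date per customer, across all history.
    first = {}
    for cust, d in sorted(purchases, key=lambda p: p[1]):
        first.setdefault(cust, d)

    # The cohort: customers whose FIRST purchase fell in the window.
    cohort = {c: d for c, d in first.items() if cohort_start <= d < cohort_end}
    if not cohort:
        return 0.0

    # Count cohort members with a second purchase inside the repeat window.
    repeated = set()
    for cust, d in purchases:
        f = cohort.get(cust)
        if f and f < d <= f + timedelta(days=window_days):
            repeated.add(cust)

    return len(repeated) / len(cohort)

purchases = [
    ("a", date(2024, 1, 5)), ("a", date(2024, 2, 1)),   # repeats in window
    ("b", date(2024, 1, 10)),                            # never repeats
    ("c", date(2024, 1, 20)), ("c", date(2024, 6, 1)),  # repeats too late
]
rate = repeat_purchase_rate(purchases, date(2024, 1, 1), date(2024, 2, 1))
print(rate)  # 1 of the 3 January first-time buyers repeated within 60 days
```

Swapping `window_days` is how you calibrate to your replenishment or consideration cycle, as described above.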

Cohort retention curves: the shape tells the story

Aggregate retention numbers hide everything interesting. A 40% retention rate means nothing without knowing when the drop-off happens.

Cohort retention curves show you the shape of attrition. Plot percent retained on the Y axis against months since acquisition on the X axis, one line per monthly cohort of new customers. What you're looking for isn't a single number. It's the curve's shape.

A steep early drop that flattens is normal and healthy. It means you lose casual buyers quickly but retain a core group. A gradual, steady decline is worse: it means even your best customers are slowly leaving. A curve that accelerates downward over time signals a product or experience problem that compounds.
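The curve itself is just cohort membership crossed with monthly activity. A sketch, under the assumption that activity is tracked as month offsets since acquisition (a hypothetical data shape):

```python
def retention_curve(cohort_customers, activity, n_months):
    """Percent of the cohort still active in each month since acquisition.

    cohort_customers: set of customer ids acquired in month 0.
    activity: dict of customer_id -> set of month offsets with a purchase.
    """
    size = len(cohort_customers)
    curve = []
    for m in range(n_months):
        active = sum(1 for c in cohort_customers if m in activity.get(c, set()))
        curve.append(round(100 * active / size, 1))
    return curve

cohort = {"a", "b", "c", "d"}
activity = {
    "a": {0, 1, 2, 3},   # loyal
    "b": {0, 1},         # drops after month 1
    "c": {0},            # one-and-done
    "d": {0, 2, 3},      # skips a month, comes back
}
curve = retention_curve(cohort, activity, 4)
print(curve)  # [100.0, 50.0, 50.0, 50.0]
```

This toy cohort shows the healthy shape from the text: a steep early drop that flattens into a retained core.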

At Stanley Black & Decker, cohort curves on the direct-to-consumer business revealed that customers acquired during promotional periods had fundamentally different retention shapes than those acquired at full price. The aggregate number looked fine. The cohort view showed two completely different customer populations.

This is the kind of insight that changes acquisition strategy, not just retention tactics.

Customer health scores: useful if honest, dangerous if not

Health scores combine multiple behavioral signals into a single indicator of churn risk. In theory, they're powerful. In practice, most implementations are misleading.

The common failure mode: weighting the score toward easy-to-measure actions (email opens, site visits) rather than actions that actually predict retention (product usage depth, support ticket patterns, purchase recency relative to expected cycle).

A health score that's mostly built on email engagement will tell you who reads your emails. It won't tell you who's about to leave.

Build health scores from behavioral signals that have a documented statistical relationship to churn. Test the model against actual churn data before deploying it. Recalibrate quarterly, because customer behavior patterns shift and a model trained on last year's data degrades over time.
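A minimal sketch of that validation step. The signal names and weights (`usage_depth`, `purchase_recency`, `email_clicks`) are illustrative assumptions, not a prescribed model:

```python
def health_score(signals, weights):
    """Weighted sum of behavioral signals, each pre-scaled to 0-1."""
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def validates_against_churn(scored, min_gap=0.1):
    """Sanity check before deploying: retained customers should score
    meaningfully higher, on average, than customers who churned.

    scored: list of (score, churned_bool) pairs from a holdout period.
    """
    churned = [s for s, c in scored if c]
    retained = [s for s, c in scored if not c]
    gap = sum(retained) / len(retained) - sum(churned) / len(churned)
    return gap >= min_gap

# Weights biased toward signals with a documented link to churn,
# not just easy-to-measure email engagement.
weights = {"usage_depth": 0.5, "purchase_recency": 0.3, "email_clicks": 0.2}

scored = [
    (health_score({"usage_depth": 0.9, "purchase_recency": 0.8, "email_clicks": 0.1}, weights), False),
    (health_score({"usage_depth": 0.7, "purchase_recency": 0.9, "email_clicks": 0.0}, weights), False),
    # High email engagement, low product usage: churned anyway.
    (health_score({"usage_depth": 0.1, "purchase_recency": 0.2, "email_clicks": 0.9}, weights), True),
]
ok = validates_against_churn(scored)
print(ok)  # True: retained customers clearly outscore churned ones
```

A real implementation would use a proper statistical test or AUC on a larger holdout set; the point is that the check runs before deployment, and again at each quarterly recalibration.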

At Overstock, we learned this the hard way. Early health score models weighted browse frequency heavily. Turns out, high browse frequency with no purchase was actually a negative signal: those customers were comparison shopping and eventually bought elsewhere. The metric was telling us the opposite of what we assumed.

Revenue per retained customer: the metric finance cares about

Your CFO doesn't care about retention rate in isolation. What matters to the business is revenue per retained customer over time, broken down by cohort.

This metric answers the question: are the customers we're keeping actually worth keeping? A 70% retention rate where retained customers spend less each cycle is a worse outcome than a 50% retention rate where retained customers increase their spend.

Track average revenue per customer by retention period. Month 1-3, month 4-6, month 7-12, year 2+. If revenue per customer grows over time, your retention program is working. If it's flat or declining, you're retaining customers but not deepening the relationship.
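The bucketing above is simple to compute. A sketch, assuming order records carry months since acquisition (field names and period labels are illustrative):

```python
from collections import defaultdict

# Period buckets in months since acquisition, as in the text.
PERIODS = [("m1-3", 1, 3), ("m4-6", 4, 6), ("m7-12", 7, 12)]

def revenue_per_customer_by_period(orders):
    """Average revenue per active customer in each retention period.

    orders: iterable of (customer_id, months_since_acquisition, revenue).
    """
    totals = defaultdict(float)
    customers = defaultdict(set)
    for cust, month, rev in orders:
        for label, lo, hi in PERIODS:
            if lo <= month <= hi:
                totals[label] += rev
                customers[label].add(cust)
    return {label: round(totals[label] / len(customers[label]), 2)
            for label, _, _ in PERIODS if customers[label]}

orders = [
    ("a", 2, 40.0), ("a", 5, 55.0), ("a", 9, 70.0),  # spend deepens over time
    ("b", 1, 30.0), ("b", 4, 35.0),
]
by_period = revenue_per_customer_by_period(orders)
print(by_period)  # {'m1-3': 35.0, 'm4-6': 45.0, 'm7-12': 70.0}
```

Rising averages across periods are the "retention program is working" signal; flat or declining averages mean you're keeping customers without deepening the relationship.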

Vanity metrics to stop tracking

Overall list size. A large list with low engagement is a liability, not an asset. It inflates your platform costs and degrades deliverability. Stop celebrating list growth and start tracking list quality.

Open rates in isolation. After Apple's Mail Privacy Protection, open rates are unreliable for a significant portion of your audience. They were always a weak proxy for engagement. Use click rates or conversion rates tied to specific actions instead.

"Active customers" without a definition. If your team can't agree on what "active" means (Purchased in the last 90 days? Logged in? Opened an email?), the metric is meaningless. Define it once, document it, and enforce the definition across every report.
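One way to enforce a single definition is to put it in code that every report imports. A sketch, taking a 90-day purchase window as the assumed definition:

```python
from datetime import date

# Documented once; every report imports this instead of redefining it.
ACTIVE_WINDOW_DAYS = 90

def is_active(last_purchase, as_of):
    """Canonical definition: a customer is active if they purchased within
    the last 90 days. Logins and email opens deliberately do not count."""
    return (last_purchase is not None
            and (as_of - last_purchase).days <= ACTIVE_WINDOW_DAYS)

print(is_active(date(2024, 3, 1), as_of=date(2024, 5, 1)))  # 61 days ago -> True
print(is_active(date(2024, 1, 1), as_of=date(2024, 5, 1)))  # 121 days ago -> False
```

The specific window is your call; what matters is that the predicate lives in exactly one place.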

Unsubscribe rate per campaign. A 0.3% unsub rate tells you almost nothing. What matters is unsubscribe velocity over time and whether it correlates with specific content types, send frequency changes, or audience segments. A single-send unsub rate is noise.
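Unsubscribe velocity is just a trailing-window rate. A sketch, assuming weekly (unsubs, recipients) totals (the numbers are illustrative):

```python
def unsub_velocity(weekly, window=4):
    """Rolling unsubscribe rate over a trailing window of weeks.
    The trend is the signal; any single send's rate is noise.

    weekly: list of (unsubs, recipients) per week, oldest first.
    """
    rates = []
    for i in range(window - 1, len(weekly)):
        chunk = weekly[i - window + 1 : i + 1]
        unsubs = sum(u for u, _ in chunk)
        sent = sum(r for _, r in chunk)
        rates.append(unsubs / sent)
    return rates

# A send-frequency increase in later weeks: velocity climbs steadily
# even though each individual week's rate still looks "fine".
weekly = [(30, 10000), (28, 10000), (32, 10000), (30, 10000),
          (45, 10000), (60, 10000), (80, 10000)]
rates = unsub_velocity(weekly)
print(rates)  # monotonically rising trailing rate
```

Correlating the rising windows with what changed (content type, frequency, segment) is the analysis that matters.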

Building a retention dashboard that works

Start with five metrics. Not fifteen.

Repeat purchase rate by cohort. Your primary leading indicator.

Cohort retention curves. Updated monthly, segmented by acquisition channel and first-purchase category.

Revenue per retained customer by period. The metric that connects retention to financial outcomes.

Customer health score distribution. Not the average score (averages hide everything), but the distribution: what percentage of your base is green, yellow, red?

Churn rate by segment. Not overall churn, but churn broken down by the segments that matter to your business: high-value vs. low-value, subscription vs. transactional, acquisition channel.

Every metric should have an owner, a target, and a documented action plan for when it moves in the wrong direction. If a metric appears on the dashboard but nobody knows what they'd do differently if it changed, remove it.
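The last of the five metrics is a straightforward grouped rate. A sketch, assuming each customer record carries a segment label and a churned flag (a hypothetical shape):

```python
from collections import defaultdict

def churn_by_segment(customers):
    """Churn rate within each segment, not one blended number.

    customers: iterable of (segment, churned_bool) pairs.
    """
    totals = defaultdict(int)
    churned = defaultdict(int)
    for seg, did_churn in customers:
        totals[seg] += 1
        if did_churn:
            churned[seg] += 1
    return {seg: round(100 * churned[seg] / totals[seg], 1) for seg in totals}

customers = (
    [("high-value", False)] * 3 + [("high-value", True)] +
    [("low-value", True)] * 3 + [("low-value", False)]
)
rates = churn_by_segment(customers)
print(rates)  # blended churn is 50%, but the segments tell opposite stories
```

The same split works for subscription vs. transactional or acquisition channel; the blended number hides exactly the contrast you need to act on.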

The takeaway

Retention measurement isn't a data problem. It's a decision-making problem. The metrics that matter are the ones that change what your team does on Monday morning. Everything else is decoration. Build your dashboard around the five metrics above, assign ownership, and ruthlessly cut anything that doesn't connect to a specific action. The goal isn't to know more. It's to know the right things clearly enough to act on them.


Keep Reading

Glossary: Churn Rate | Customer Lifetime Value (CLV)