Most campus debates about “the number” sound familiar: a retention rate appears in a slide, someone cites a different figure from a dashboard, and the meeting drifts into detective work. The core issue is not the tool; it is that our key metrics are not specified in a way that anyone can reproduce, so teams argue the math instead of discussing the decision. This part of the Field Guide is about fixing that. We will make a handful of high-value KPIs unambiguous and repeatable, so your conversations move from “what does this mean” to “what will we do next.” That is the promise of data literacy put to work.
Our approach is practical and grounded in work you may already know. EDUCAUSE frames data literacy as the ability to read, write, and communicate with data in context, which includes understanding sources, methods, and use cases. That framing underscores why a written KPI specification matters: it gives context that travels with the number. The U.S. Department of Education’s Forum Guide to Data Literacy takes a similar stance for education agencies and emphasizes decision-support, quality, and reproducibility, which we mirror here with a short “trust check” you can run before publishing.
The goal is not a bigger dashboard. It is a tighter link between the measure and the move you will make next week. If Part 2 helped you frame your questions into something your data could answer, Part 3 ensures the numbers in your answer are something everyone can trust. Here’s what this post covers and how to use it in your next meeting.
What you will learn
Over the course of this post, we’ll cover how to:
- Write a one-page KPI specification that anyone on your team can reproduce.
- Label percentage points and percent change so your headlines read correctly.
- Pair leading and lagging indicators on purpose.
- Spot confounding effects before they derail a conversation.
The KPI Spec
If Part 2 taught you how to turn a prompt into an answerable question, Part 3 gives you the contract that keeps the answer consistent. A KPI spec is your single source of truth for one metric. It writes down the purpose, population, grain, window, and the math in plain language so anyone can reproduce the number. When the spec travels with your chart, the meeting stays on the decision instead of the formula. Use the Question-to-Query Path to answer today’s question. Use the KPI spec to make sure the same question tomorrow returns the same number.
KPI Spec Fields
- Name: Plain name plus short label you will put on charts.
- Purpose: The decision this KPI supports.
- Population: Inclusions and exclusions in one sentence.
- Grain: Unit per row (student-term, applicant-term, student-course).
- Timeframe: Window (e.g., Fall 2025, 2024 Fiscal Year).
- Formula (show your math): Numerator and denominator in words and symbols.
- Baseline/Target: Comparison point (last term, three-year average, goal).
- Slices: Dimensions you actually plan to compare (program, modality, Pell).
- Source & Freshness: System/view and refresh window.
- Caveats: Data collection realities that move the number (late postings, merges).
- Privacy: Small-n rule (e.g., n < 5) and masking notes.
- Owner & Review Cadence: Who maintains it and how often you revisit it.
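To make the contract concrete, here is a minimal sketch of a spec captured as a small data structure so it can travel with a report or script. Every field value below, including the metric, source view, and owner, is hypothetical; swap in your own definitions.

```python
# A hypothetical KPI spec for fall-to-fall retention, written as a plain dict.
kpi_spec = {
    "name": "Fall-to-Fall Retention",
    "label": "FF Retention %",
    "purpose": "Decide where to target first-year support outreach.",
    "population": "First-time, full-time, degree-seeking fall entrants; excludes transfers.",
    "grain": "student-term",
    "timeframe": "Fall 2024 cohort tracked to Fall 2025",
    "formula": "retained_students / cohort_students * 100",
    "baseline": "Three-year average",
    "slices": ["program", "modality", "pell"],
    "source": "warehouse.retention_cohort_v",  # hypothetical view name
    "freshness": "refreshed nightly",
    "caveats": "Late enrollment postings can shift the number early in the term.",
    "privacy": "Mask any slice with n < 5.",
    "owner": "Institutional Research",
    "review_cadence": "each term",
}

# Because the spec is data, the chart footer can be generated from it,
# so the context travels with the number.
footer = f"Source: {kpi_spec['source']} | {kpi_spec['freshness']} | Owner: {kpi_spec['owner']}"
print(footer)
```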
Cross-reference: When you answer a prompt with the Question-to-Query Path (Part 2), point to any relevant KPI specs instead of re-stating the math. When you change a metric definition, update the spec and add a note to recent Paths that referenced it.
Keep this simple and repeatable.
- Link your Path to the relevant KPI spec.
- Publish the number with a clear footer (source, last refresh, etc.).
- Run a quick trust check before you share (math shown, baseline labeled).
- Then ask a colleague to reproduce the result from the spec alone. If they cannot, refine the spec and try again.
Percentage Points vs Percent Change
Teams get tripped up because these two ideas sound alike but mean different things.
- Percentage points (pp): The absolute difference between two percentages. This is just simple arithmetic and represents the amount of change. Think about doing the math between the two numbers without the % symbol.
- Example: 40% to 50% is +10 percentage points (50% − 40% = +10 pp), an increase. 50% to 40% is a decrease of 10 percentage points (40% − 50% = −10 pp).
- Percent change (%): The relative change compared to the starting value and represents the rate of change. Think about how much something grew or shrunk from its original size. This is often what people want to know when they compare percentages.
- Example: 40% to 50% is a 25% increase because:
- step 1: 0.50 − 0.40 = 0.10
- step 2: 0.10 ÷ 0.40 = 0.25
- step 3: 0.25 × 100 = 25%
- The direction matters: 50% to 40% is a 20% decrease (0.40 − 0.50 = −0.10; −0.10 ÷ 0.50 = −0.20), because the starting value is now 50%.
- Formula: ((new amount − original amount) ÷ original amount) × 100
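If you script these comparisons, a couple of tiny helpers keep the two ideas from blurring together. This is a minimal sketch; the function names are ours, and the inputs are rates already expressed as percentages.

```python
def percentage_point_change(old_pct: float, new_pct: float) -> float:
    """Absolute difference between two percentages, in percentage points."""
    return new_pct - old_pct


def percent_change(old_pct: float, new_pct: float) -> float:
    """Relative change from the starting value, expressed as a percent."""
    return (new_pct - old_pct) / old_pct * 100


print(percentage_point_change(40, 50))  # 10.0  -> +10 pp
print(percent_change(40, 50))           # 25.0  -> a 25% increase
print(percent_change(50, 40))           # -20.0 -> a 20% decrease, not 25, because the base changed
```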
We often use percentage points when we want a clear, direct comparison between two numbers. Percent change works well when you want to show relative growth (or decline), or when you need to compare changes across different scales. It is also fine to report both, especially when the stakes are high and confusion is likely. It simply depends on what data story you want to tell (but wait…I’m getting ahead of myself). Let’s look at a quick example:
Scenario: First-Year Intervention Program
- Initial fall-to-fall student retention rate: 62%
- Fall-to-fall retention rate after implementing your amazing program: 77%
Percentage Point Analysis:
- 15 percentage point increase
- Directly shows improvement in student retention
Percent Change Analysis:
- 24.2% improvement in retention
- Highlights the relative growth of the program’s effectiveness
Together, you can present this to your leadership: “By strategically implementing first-year student support initiatives, we’ve elevated our student retention rate by 15 percentage points to 77%, representing a transformative 24.2% growth in student success. It is testament to our commitment to every student’s journey.” And that is a powerful story drawn from the data.
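If you want to double-check the headline before it goes in the deck, the arithmetic is two lines (a quick sketch using the illustrative rates above):

```python
before, after = 62.0, 77.0  # fall-to-fall retention rates, in percent

pp_increase = after - before                       # 15.0 percentage points
relative_growth = (after - before) / before * 100  # ~24.2 percent change

print(f"+{pp_increase:.0f} pp, {relative_growth:.1f}% relative growth")  # +15 pp, 24.2% relative growth
```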
Pair Leading and Lagging Indicators
Lagging indicators tell you how the story ended. Retention, graduation, net tuition, and course pass rates are all outcomes that arrive after decisions are made. They are important for accountability and board reporting, but they are late. Leading indicators move earlier in the term or cycle. They give you a chance to act while there is still time to change the outcome. That is why the pairing matters in higher education. Most campus work runs on terms, deadlines, and milestones. If you only watch lagging measures, you learn what happened. If you watch a small set of well-chosen leading signals, you can help shape what happens.
How to pick a good leading indicator
Use three tests.
- It moves earlier than the outcome you care about.
- It is actionable by a specific team within a finite time period (e.g., two days, a week).
- It is local enough that owners can see their work change the metric.
If it fails any of these three tests, it might be a vanity metric or a proxy you cannot influence, and you should select a different indicator.
Bonus checks (nice to have): timely refresh (daily or weekly), low cost to measure, fair across student groups. These strengthen a leading indicator but aren’t deal breakers if it doesn’t meet them.
Let’s look at some quick pairings for a couple of common campus goals.
- Goal: Fall-to-fall retention rates (lagging).
- Leading signals: credit momentum (registered for 15 credits) before the semester begins, early alert closure within 72 hours, tutoring attendance within two weeks of referral, LMS inactivity over seven days (a quick sketch of this flag follows the pairings below).
- Goal: Course success in a gateway math course (lagging).
- Leading signals: attendance in Weeks 1–3, LMS logins and assignment submissions, early quiz scores, tutoring scheduled by Week 2.
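To show how lightweight a leading signal can be, here is a minimal sketch that flags students with no LMS activity in the past seven days. The file name and column name (`last_lms_activity`) are assumptions; your LMS export will look different.

```python
import pandas as pd

# Hypothetical export: one row per student with their most recent LMS activity timestamp.
activity = pd.read_csv("lms_activity.csv", parse_dates=["last_lms_activity"])

cutoff = pd.Timestamp.today().normalize() - pd.Timedelta(days=7)
inactive = activity[activity["last_lms_activity"] < cutoff]

# Hand this list to the owning team while there is still time to act.
print(f"{len(inactive)} students with no LMS activity in the last 7 days")
```

Because a flag like this can refresh from a simple daily export, it stays ahead of the outcome, which is what makes it leading rather than lagging.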
As you consider your pairings, do not confuse volume with impact. More emails sent is not a leading indicator unless there is evidence that it moves yield or retention. Avoid indicators that arrive as late as the outcome. If your “leading” signal updates monthly and the term ends in five weeks, it is not leading for your purpose. In Part 4, we’ll look at how to use lagging and leading indicators on your dashboards to tell compelling data stories and help people understand the true picture of what’s happening on your campus.
Watch for confounding effects and Simpson’s paradox
Aggregates can hide different stories. A campus-wide rate can look steady while specific programs or student groups fall behind. That is a confounding effect. Simpson’s paradox is the classic case where a trend reverses when you control for a key dimension. The fix is not exotic statistics; it is simple discipline.
- Test a few slices that matter before you summarize so an aggregate “win” does not hide subgroup declines. Program, Pell, residency, modality, first-gen, and credit momentum band are common.
- Keep the denominator consistent across slices. Do not change who “counts” as you compare groups.
- Surface flips in the story. If a subgroup trend runs against the total, show both views and say why.
For a grounding in the concept, many introductory analytics texts and visualization guides walk through Simpson’s paradox with clear examples, and a quick numeric sketch follows below. Or just wander over to the math department and see your favorite math faculty member for a deeper discussion!
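Here is a minimal, made-up illustration of the flip: each program’s retention rate improves year over year, yet the campus-wide rate falls because enrollment shifts toward the program with the lower rate. All counts are hypothetical.

```python
# (retained, enrolled) counts by program and year -- illustrative numbers only.
data = {
    "Program A": {"year_1": (80, 100), "year_2": (85, 100)},
    "Program B": {"year_1": (20, 50),  "year_2": (90, 200)},
}

for year in ("year_1", "year_2"):
    parts = []
    total_retained = total_enrolled = 0
    for program, years in data.items():
        retained, enrolled = years[year]
        parts.append(f"{program}: {retained / enrolled:.0%}")
        total_retained += retained
        total_enrolled += enrolled
    print(f"{year}: {', '.join(parts)} | campus-wide: {total_retained / total_enrolled:.1%}")

# year_1: Program A: 80%, Program B: 40% | campus-wide: 66.7%
# year_2: Program A: 85%, Program B: 45% | campus-wide: 58.3%
# Each program improved, but the total dropped: report both views.
```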
How To Get Started
Download our KPI Spec Card and add it as a discussion point for your next data team meeting.
Consider starting by:
- Giving the KPI Spec Card a try in at least two different meetings.
- Finding at least one collaborator who can help you gain traction at your institution.
You don’t have to do everything at once. Start small and remember that perfect is the enemy of good. Give some of your ideas a try and refine as you go. Building small habits now can offer big returns as the academic year continues.
If this still feels overwhelming or you’d like a little extra support, don’t hesitate to reach out to your Strategic Solutions Manager. Our Client Experience Team (CET) is always here to partner with you and help bring your reporting and data analytics goals to life.
Where the Field Guide Goes Next
This is the third post in our six-part series on Data Literacy. Defining metrics that matter, consistently, is one step on the path to building trust in your data so that leaders feel equipped to use it regularly in their decisions. While each installment builds on the last, you can use Part 3 on its own and see immediate results.
Join us in two weeks for Part 4: Telling Data Stories!
