When the Metric Lies: Diagnosing Teacher Activation at Prodigy
Company: Prodigy Education
Role: Senior Product Manager, Teacher Experience
Scope: Teacher activation workstream, end-to-end ownership
Timeline: 2022 – 2023
Context
Prodigy is a game-based math and English learning platform used by millions of students across North America. The teacher product sits at the center of the business: teachers who activate — set up a class, add students, and engage with features like assessments and reports — drive student usage, which drives premium membership conversions.
Weekly Active Teachers (WAT) is the north star metric for the teacher experience. I owned the workstream responsible for growing it.
By mid-2023, WAT was in decline. Heading into back-to-school season, the highest-stakes period of the year, we were trending below prior years in every segment: new teachers, returning teachers, and resurrecting teachers (those coming back after a period of inactivity).
The pressure was significant. MAT was 20% behind target. MRAS was 12% behind. New bookings were lagging. The business needed answers, and it needed them fast.
Problem
The instinct on the team was to look at what we’d changed in the product. We had shipped Simplified Classroom Management (Manage Classes) as an experiment earlier in the year — a significant redesign of the class setup and rostering experience. It was the most visible recent change, and it became the default suspect.
But I wasn’t convinced the problem was that simple.
Teacher activation is structurally noisy. Registration volume, audience composition, seasonal patterns, email performance, SSO errors, and product changes all interact in ways that make it easy to misattribute cause. An upward trend can mask a worsening audience. A product change can look like the culprit when the real issue is upstream.
Before drawing conclusions, I wanted to understand what was actually happening across the funnel — and whether what we thought we knew was true.
What I Did
Structured the investigation around hypotheses
Rather than reacting to the loudest voice in the room, I led the team through a structured diagnostic. We identified six core hypotheses and assigned owners to investigate each in parallel:
- Are we bringing in lower-quality teachers at the top of the funnel?
- Are we resurrecting fewer returning teachers, and why?
- Are rostering and SSO errors causing activation failures?
- Is email performance contributing to the WAT gap?
- Did the Manage Classes experiment negatively impact engagement?
- Are there external or geo-based market shifts we can’t control?
Each hypothesis got a data owner, a due date, and a clear definition of what “confirmed” or “invalidated” looked like.
Ran and read out experiments across teacher segments
I had been running experiments on both new and returning teacher segments through back-to-school. I owned the readouts:
- Manage Classes — New Teachers: +1.99% activation (not statistically significant) and +5.98% new-user retention, but a 15% decline in assessment creation and report views.
- Manage Classes — Returning Teachers: Neutral on WAT; student activity was the same or slightly better.
The engagement drop on assessments and reports was the critical finding. Manage Classes was helping with setup but inadvertently pulling teachers away from the core engagement loop — the features most correlated with long-term retention.
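For readers outside the team, "not statistically significant" here means the activation lift could plausibly be noise. Below is a minimal sketch of the kind of two-proportion test behind a readout like this; the counts are hypothetical placeholders, not the actual experiment data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test for a control-vs-variant activation lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical counts for illustration only -- not the real sample sizes.
lift, p = two_proportion_z(conv_a=4_000, n_a=10_000, conv_b=4_080, n_b=10_000)
print(f"absolute lift = {lift:+.2%}, p = {p:.2f}")  # small lifts on modest samples often miss p < 0.05
```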
Traced the real causes
As the investigation deepened, the picture became clearer — and more uncomfortable. The WAT gap wasn’t primarily a product problem. It was structural:
Audience quality was declining. Incoming teachers showed pre-registration signals of lower intent: the share registering via mobile had nearly doubled, and our product wasn't optimized for mobile. Email-domain quality indicators pointed to the same trend.
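To make "pre-registration signals" concrete: these were simple observable proxies compared across year-over-year cohorts. A rough sketch of the idea, with hypothetical field names and domain heuristics rather than our actual model:

```python
# Hypothetical sketch of pre-registration intent signals. Field names and
# domain heuristics are illustrative, not Prodigy's actual model.
SCHOOL_DOMAIN_HINTS = (".edu", ".k12.", "school")

def intent_signals(reg: dict) -> dict:
    domain = reg["email"].split("@")[-1].lower()
    return {
        "mobile": reg["device"] == "mobile",  # product not mobile-optimized
        "school_email": any(h in domain for h in SCHOOL_DOMAIN_HINTS),
    }

def share(cohort: list, key: str) -> float:
    return sum(intent_signals(r)[key] for r in cohort) / len(cohort)

# Comparing cohorts: if the mobile share has roughly doubled while the
# school-email share has fallen, the incoming audience mix has shifted
# before any product change could matter.
```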
The resurrection pool was shrinking. The resurrection rate was consistent year-over-year. But the pool of teachers eligible to resurrect was smaller than the prior year, because fewer had been retained in the first place. We were losing compounding ground.
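The mechanics are worth spelling out, because a flat rate is exactly what hid the problem. A toy sketch with made-up numbers:

```python
# Toy numbers, mechanics only. The resurrection *rate* was flat year over
# year; the eligible *pool* shrank because fewer teachers had been retained.
rate = 0.10
pool_last_year = 500_000  # hypothetical
pool_this_year = 420_000  # hypothetical: a prior-year retention shortfall

print(pool_last_year * rate)  # 50,000 resurrected teachers
print(pool_this_year * rate)  # 42,000 -- a gap with no rate change at all
# And it compounds: fewer active teachers this year means a smaller
# eligible pool again next year.
```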
Rostering and SSO errors were creating silent failures. The "add students" step, already our worst-performing activation funnel step, was undermined by a ~5% error rate across all rostering methods, and a single bad student record failed the entire classroom import. Teachers hit a wall they couldn't diagnose and didn't come back.
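To make the failure mode concrete, here's a minimal sketch of the all-or-nothing behavior and the partial-success alternative. The code is hypothetical, using a stand-in for the real rostering call:

```python
# Hypothetical sketch of the failure mode, not the actual rostering code.
def import_student(record: dict) -> str:
    """Stand-in for the real rostering/SSO call; raises on a bad record."""
    if not record.get("student_id"):
        raise ValueError("missing student_id")
    return record["student_id"]

# Before: one bad record (a ~5% event) aborts the whole classroom import,
# and the teacher sees an undiagnosable wall.
def import_classroom_all_or_nothing(students: list) -> list:
    return [import_student(s) for s in students]  # any exception fails everything

# After: partial success, with per-student failures surfaced to the teacher.
def import_classroom_partial(students: list):
    imported, failed = [], []
    for s in students:
        try:
            imported.append(import_student(s))
        except ValueError as err:
            failed.append((s, str(err)))
    return imported, failed
```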
Email infrastructure had gaps. A configuration problem in our placement test email campaign meant only 230k of a targeted 900k emails were sent in a critical window. Teachers who should have been re-engaged weren’t hearing from us at all.
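The fix itself was straightforward; the lesson was that nothing was watching the gap. A sketch of the kind of guardrail that would have caught it, with illustrative names throughout:

```python
# Hypothetical guardrail sketch: compare targeted vs. actually-sent counts
# per campaign and alert on a large shortfall. Names are illustrative.
SHORTFALL_ALERT = 0.05  # flag if more than 5% of targeted emails never go out

def alert(msg: str) -> None:
    print("ALERT:", msg)  # stand-in for paging / Slack

def check_campaign_delivery(campaign: str, targeted: int, sent: int) -> None:
    shortfall = 1 - sent / targeted
    if shortfall > SHORTFALL_ALERT:
        alert(f"{campaign}: {sent:,} of {targeted:,} sent ({shortfall:.0%} shortfall)")

# The placement-test numbers: 230k of 900k is a 74% shortfall that no
# dashboard was watching.
check_campaign_delivery("placement_test", targeted=900_000, sent=230_000)
```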
Redirected strategy
The original activation strategy was largely offensive — pitch teachers on product value, improve the onboarding flow, show them what Prodigy can do. The investigation suggested this was the wrong frame.
The right frame was defensive: improve the quality of teachers entering the funnel, fix the infrastructure failures creating silent drop-off, and close the engagement gap for teachers who had activated but weren’t returning.
I documented these findings, aligned stakeholders across engineering, data, and design, and used them to shape the Q4 roadmap — shifting focus toward the engagement loop (reports, assessments, goals-assignment link) rather than further onboarding iteration.
Closed the engagement gap on Manage Classes
Rather than abandoning the Manage Classes work, I identified specific changes to close the engagement gap the experiment had exposed:
- Globalized the Assign button so it was accessible from the Manage Classes page
- Opened the side navigation to surface Dashboard, Assessments, and Reports as persistent destinations
- Made the Dashboard the root landing page for activated teachers with a class
- Repositioned Manage Classes in the nav hierarchy to reduce its prominence for teachers who had already set up
These changes kept the rostering improvements while restoring the engagement pathways that the original design had inadvertently buried.
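The landing-page change reduces to a small routing rule. A sketch, assuming a hypothetical teacher model:

```python
# Hypothetical sketch of the landing rule; the teacher model is illustrative.
from dataclasses import dataclass

@dataclass
class Teacher:
    activated: bool
    has_class: bool

def landing_route(teacher: Teacher) -> str:
    # Teachers who have finished setup land on the engagement surface
    # (Dashboard), not the setup surface they have already completed.
    if teacher.activated and teacher.has_class:
        return "/dashboard"
    return "/manage-classes"
```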
What Happened
The investigation gave the team a shared, evidence-based understanding of a problem that had previously felt diffuse and hard to own. The structural causes (audience quality, the shrinking resurrection pool, rostering and SSO errors, email gaps) each came out of it with an owner and a remediation plan.
The Manage Classes work, with its engagement fixes applied, held. My designer later confirmed it stayed in production and continued to be built on for years afterward. For a feature that had been paused mid-investigation as a suspect, that's a meaningful outcome.
The deeper lesson is in the metric itself. WAT looked like a product engagement problem. It was partly an audience quality problem, partly an infrastructure problem, and only partly a product problem. Treating it as purely a product problem would have produced more onboarding experiments (the team had already run several, all inconclusive) without addressing the root causes.
What I Learned
Activation metrics are downstream of things you don’t fully control. Audience composition, email deliverability, SSO infrastructure — these aren’t product features, but they move your numbers. A PM who only looks at the product layer will keep optimizing the wrong thing.
An inconclusive experiment is still information. Six months of onboarding experiments that didn’t move the needle wasn’t failure — it was evidence that the lever we were pulling wasn’t the right one. The investigation reframed that whole body of work.
The hardest diagnostic conclusions are the ones that implicate things no single team owns. Audience quality, resurrection pool, email infrastructure — none of these had a clean owner. Surfacing them as root causes required being willing to say “this isn’t a feature problem” in a room that wanted a feature solution.
Defense is strategy. The instinct in growth work is always to pitch harder, ship more, expand the funnel. Sometimes the right answer is to fix the sieve.