What Three Failed Experiments Taught Us About Teacher Activation

Company: Prodigy Education
Role: Senior Product Manager, Teacher Experience
Scope: Teacher onboarding experimentation, activation workstream
Timeline: 2022 – 2023


Context

Teacher activation at Prodigy is defined as a teacher who sets up a class and adds two or more students. It’s the threshold between someone who signed up and someone who’s actually using the product — and it’s the leading indicator for everything downstream: student engagement, premium conversions, and Weekly Active Teachers.
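
To make that definition concrete, here is a minimal Python sketch of how an activation rate could be computed from signup and roster data. Every name and number in it is hypothetical (illustrative teacher, class, and student records rather than Prodigy's actual data model); it only shows the shape of the calculation.

```python
from collections import defaultdict

# Hypothetical signup and roster records; identifiers and values are illustrative.
signups = ["t1", "t2", "t3", "t4"]                    # teachers who completed registration
classes = {"c1": "t1", "c2": "t2", "c3": "t4"}        # class_id -> owning teacher_id
rosters = {"c1": ["s1", "s2", "s3"], "c2": ["s4"], "c3": ["s5", "s6"]}  # class_id -> student_ids

def activated_teachers(signups, classes, rosters, min_students=2):
    """Teachers who set up a class and added at least `min_students` students."""
    students_per_teacher = defaultdict(int)
    for class_id, teacher_id in classes.items():
        students_per_teacher[teacher_id] += len(rosters.get(class_id, []))
    return {t for t in signups if students_per_teacher[t] >= min_students}

activated = activated_teachers(signups, classes, rosters)
print(f"activated: {sorted(activated)}  rate: {len(activated) / len(signups):.0%}")
# -> activated: ['t1', 't4']  rate: 50%
```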

Getting teachers across that threshold was one of the hardest problems the team faced. Registration numbers were healthy. But too many teachers were signing up, seeing the product, and not completing setup. The question was why — and what we could do about it.

The instinct was that teachers needed more confidence. If they understood what Prodigy looked like for their students before committing to setting up a class, they’d be more likely to follow through. That hypothesis shaped the next six months of experimentation.


Problem

New teachers arriving at Prodigy faced an unfamiliar product with a setup flow that asked them to make decisions — grade level, curriculum, class name — before they’d seen anything of what their students would actually experience. The theory was that this uncertainty was the friction. Teachers who didn’t know what they were signing their class up for would hesitate. Teachers who did would activate.

The solution seemed obvious: show them. During the signup flow, surface images, motion, or guided steps that made the student experience visible before teachers had to commit.

Three experiments. Same underlying hypothesis, progressively refined. None of them produced the win we were looking for — and each one taught us something the previous one couldn’t.


What I Did

Experiment 1: Contextual Onboarding — Static Images

Hypothesis: If new teachers see images of how the game works for their students during signup, they’ll feel confident about what to expect and be more likely to activate.

The design surfaced contextual images of the student game experience at key moments in the onboarding flow — showing teachers what their students would see when they logged in and started playing. The images were chosen to communicate engagement: kids in a game world, answering curriculum-aligned questions, earning rewards.

Result: Activation moved, but not meaningfully. The change wasn't enough to call a win, and the signal wasn't strong enough to build on. Teachers were seeing the images; they just weren't activating at a higher rate.
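
For context on what "not enough to call a win" means in practice, results like this are typically judged with a two-proportion significance test on activation rates. The sketch below is a minimal, standard-library-only illustration with invented sample sizes and conversion counts; the figures and the `two_proportion_z_test` helper are hypothetical, not the actual experiment data or tooling.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two activation rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function, so no external dependencies.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Invented numbers: control vs. the contextual-image variant.
lift, z, p = two_proportion_z_test(conv_a=900, n_a=6000, conv_b=945, n_b=6000)
print(f"lift: {lift:+.2%}  z: {z:.2f}  p: {p:.3f}")
# -> lift: +0.75%  z: 1.14  p: 0.255 (movement, but not a win)
```

With samples of this size, a sub-percentage-point lift lands well short of a conventional p < 0.05 bar, which is the kind of "moved, but not meaningfully" outcome described above.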

What we learned: Seeing static images of the student experience didn’t resolve whatever hesitation teachers had. Either the images weren’t convincing enough, or confidence about the student experience wasn’t the actual barrier.


Experiment 2: Contextual Onboarding — Animated GIFs

Hypothesis: Static images show what the game looks like, but motion shows it actually working. If teachers see the student experience in motion — the game running, questions appearing, rewards being earned — the product will feel more real and more compelling.

We replaced the static images with short animated GIFs of the student experience. Same placement in the flow, same intent — but now with motion that showed the game in action rather than a frozen frame.

Result: Again, activation moved marginally but not significantly. The GIFs performed similarly to the static images. The hypothesis about visual confidence was showing its limits.

What we learned: The format of the content — static vs. animated — wasn’t the variable that mattered. Teachers weren’t failing to activate because the product didn’t look appealing. Something else was blocking them.


Experiment 3: Guided Onboarding

Hypothesis: Showing teachers the product isn’t enough — they need to be walked through the setup steps explicitly. A guided experience that takes teachers through each action, one at a time, will reduce cognitive load and increase completion.

The guided onboarding experience restructured the flow around explicit step-by-step instruction. Rather than presenting information and expecting teachers to know what to do next, it held their hand through each decision point — class creation, adding students, setting curriculum — with clear progress indicators and contextual prompts.

Result: Activation improved slightly more than the previous two experiments, but still fell short of a meaningful win. The guided experience helped at the margins. It didn’t move the metric in the way we needed.

What we learned: The problem wasn’t information. It wasn’t visual confidence. It wasn’t cognitive load in the setup flow. Three experiments had now eliminated the most intuitive explanations for why teachers weren’t activating. That meant the answer was somewhere else.


Reframing the problem

Six months of experimentation had produced a valuable negative result: the onboarding experience was not the primary lever for teacher activation.

This was an uncomfortable conclusion. Onboarding is the obvious place to intervene when new users aren’t converting. It’s visible, controllable, and easy to A/B test. But the data was consistent across three iterations — teachers who went through a better onboarding experience weren’t activating at meaningfully higher rates than those who didn’t.

The reframe was this: if teachers weren’t activating, the problem probably wasn’t what happened during signup. It was what they encountered after — the information architecture of the product itself. If the core experience was confusing, no amount of onboarding preparation would compensate.

That insight fed directly into the IA work that shaped Simplified Classroom Management — a ground-up redesign of how teachers navigated class setup, student rostering, and the core engagement features. The onboarding experiments hadn’t produced a win, but they had produced something more durable: a clear signal about where the real problem lived.


What Happened

None of the three experiments produced a statistically significant activation win. That's the honest result.

What they produced instead was the evidence base that redirected the team’s focus from onboarding optimization to information architecture. The Simplified Classroom Management work that followed was built on the foundation of what these experiments ruled out — and it shipped.

The learning also contributed to a broader reframe of the activation problem: that engagement, not information, was the pathway to retention. Teachers didn’t need to know more about Prodigy before they activated. They needed to experience it working for their class. That reframe shaped the homepage redesign work that followed, which produced a 210% lift in CTA engagement — the clearest win of the activation workstream.


What I Learned

A well-run experiment that fails is still progress. Three inconclusive results in a row isn’t a sign that the team is spinning its wheels — it’s a sign that the hypothesis space is being systematically eliminated. The goal of experimentation isn’t to win every test. It’s to know more after than before.

The most valuable insight is often a negative one. “It’s not this” is underrated. Knowing that onboarding content — in any format — wasn’t the activation lever freed the team to look in the right place. Without that evidence, we’d have kept optimizing the wrong thing.

Confidence and clarity are different problems. The original hypothesis was about teacher confidence — show them what they’re signing up for and they’ll commit. The real problem was clarity — once teachers were in the product, could they find their way to value? Those sound similar. They require completely different interventions.

Experimentation is research. These three experiments didn’t ship a feature that moved the needle. They produced the insight that shaped the IA work, which shaped the homepage redesign, which did. The return on experimentation isn’t always visible in the experiment itself.