# Synthesis — The Information Bridge

**Narrative axis**: *Information Bridge* — the Value Hub as a translator across information silos. Progress data for a Novakid child exists in at least five disconnected places. The design problem is not "create progress data" (it exists) but "bridge silos so the parent sees what is already known." This framing comes directly from open coding (Theme A of `open-coding-analysis.md`: *Evidence Lives Outside the Dashboard*).

**Method**: this synthesis weaves three evidence layers:
1. **Primary empirical** — the quote corpus (128 verified real parent quotes across 7 regional markets and 15 platforms)
2. **Theoretical** — Self-Determination Theory, Expectancy-Value Theory, Jobs-To-Be-Done, Parental Educational Anxiety Scale, British Council plateau research, behavioural economics, Hofstede cultural dimensions
3. **Generative (clearly labelled)** — 8 LLM-simulated persona interviews. The stimulus excludes Novakid entirely and probes mental models of progress abstractly.

Personas do not count as empirical evidence. They *generate* hypotheses that the corpus and theory validate or discipline. Every load-bearing design claim carries an inline validation recommendation.

---

## 1. Executive summary (one page)

### The problem space

Parents cannot see their child's progress in a Novakid subscription. They know it is happening; they can point to real-world moments as proof; they build their own DIY tracking systems (Notion databases, notebooks, phone recordings) to fill the gap. The product collects vastly more progress data than any parent ever sees. Across 128 verified parent quotes in the expanded corpus, the dashboard is mentioned ~5% of the time — and when mentioned, is most often criticised as broken (the progress button requests audio permission before showing data) or praised only for recorded-lesson playback, which parents use as a DIY verification tool.

The problem is not an absence of progress. It is an *architecture* of information — progress exists in five silos that do not talk to each other:

- **What Novakid's AI knows** — approximately 1,500 tracked micro-skills per child per course, updated lesson by lesson
- **What the teacher knows** — lesson-by-lesson observations, real-time judgement
- **What the child experiences** — the subjective experience of learning, including moments of aha and moments of boredom
- **What the world sees** — real-world capability (ordering food, reading signs, speaking with cousins)
- **What the school measures** — external academic benchmarks, grades, teacher comments

The parent sees fragments of each, filtered through a product surface that captures almost none of it.

### The thesis

*The Value Hub is not creating progress data. It is building bridges between silos that already hold the data.*

This framing survives from the original research and is reinforced by the rework. The design problem is translation, connection, and narrative — not measurement. Novakid already measures. The product has not been designed to *make the measurements legible to the person paying for them*.

### The six jobs

Parents hire the Value Hub to perform six distinct jobs (fully specified in `archetypes.md`). These are not people. One parent occupies different jobs at different moments — a parent is a Navigator at onboarding, a Reassurance-Seeker during plateau, an Investor at renewal, an Outsourcer in steady-state.

1. **Reassurance-Seeker** — *"Tell me I'm doing the right thing as a parent"*
2. **Auditor** — *"Show me the evidence so I can judge for myself"*
3. **Investor** — *"Defend the money to me"*
4. **Navigator** — *"Show me the path ahead so I can prepare"*
5. **Co-Pilot** — *"Give me a role I can play without becoming the homework manager"*
6. **Outsourcer** — *"Just work — don't make me manage this"*

### The single biggest design opportunity

The plateau window (months 2–8) is the highest-ROI intervention point in the customer journey. Every interview and cluster in the corpus points to it: parents hit a plateau where measurable production stalls, consider cancellation, and receive zero proactive communication about plateaus being a documented phase of language acquisition (British Council SLA research). The product has a retention intervention available at near-zero cost — a pre-registered plateau-explanation message delivered proactively — that no competitor has shipped.

### The five bridges

The Value Hub architecture is organised as five bridges between the silos. The remainder of this document specifies each.

- **Bridge 1 (AI → Parent)** — translating what the AI tracks into observable parent-visible behaviours
- **Bridge 2 (Teacher → Parent)** — making teacher judgement specific, bidirectional, and reliable
- **Bridge 3 (Child → Parent)** — capturing the child's own signal, which parents already identify as their strongest retention indicator
- **Bridge 4 (Real World → Parent)** — converting real-world moments into structured progress evidence
- **Bridge 5 (School → Parent)** — aligning platform progress with external academic benchmarks that parents actually use

Each bridge maps to specific design surfaces, specific jobs it primarily serves, and specific kill-list rejections (what *not* to do).

---

## 2. The five silos in detail

### Silo 1 — What Novakid's AI knows

Novakid tracks approximately 1,500 micro-skills per child per course, per internal documentation and multiple public statements. This includes lesson-level engagement, per-slide interaction quality, speech recognition outputs, homework completion, vocabulary acquisition events, grammar pattern usage, and pronunciation accuracy. The volume of AI-observed data is large.

**What parents see of this**: essentially nothing. Across 128 verified quotes, only one (Gabriel Bulkin, UK) describes the parent dashboard as an effective monitoring tool — and his review is vague about what he actually sees. The progress-button-requires-audio-permission bug (Oya Canli, Google Play) actively gatekeeps whatever is there behind an irrelevant permission prompt.

**The translation challenge**: AI micro-skills are not parent-legible. "Past-tense irregular-verb accuracy: 67%" is not a sentence a parent can act on. Raw AI output must be translated into observable behavioural statements before it enters the parent layer. The Dinolingo CEFR-for-parents framing (observable behaviours per CEFR sub-level) is a partial template; the Novakid-specific version requires a dedicated design surface — the **AI Progress Narrative** — that compresses weeks of micro-skill data into a three-sentence, child-specific, falsifiable observation.
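
A minimal sketch of what this translation layer could look like, assuming a hypothetical `SkillEvent` record and an `OBSERVABLE` lookup from opaque skill IDs to parent-observable behaviours (the names and data shape are illustrative, not Novakid's actual schema): pick the skill with the clearest recent gain and phrase it as something the parent can try at home this week.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SkillEvent:
    """One AI-observed micro-skill data point from a lesson (hypothetical shape)."""
    skill_id: str      # e.g. "past_tense_irregular"
    accuracy: float    # 0.0-1.0 for this lesson
    lesson_date: date

# Hypothetical lookup from opaque skill IDs to parent-observable behaviours.
OBSERVABLE = {
    "past_tense_irregular": "tell you what happened last weekend in English",
    "food_vocabulary": "order a simple meal in English",
}

def narrative_sentence(child: str, events: list[SkillEvent]) -> str:
    """Compress recent micro-skill events into one falsifiable, parent-legible sentence.

    Heuristic: pick the skill with the largest accuracy gain over the window and
    phrase it as a behaviour the parent can observe at home this week.
    """
    by_skill: dict[str, list[SkillEvent]] = {}
    for e in events:
        by_skill.setdefault(e.skill_id, []).append(e)

    def gain(evts: list[SkillEvent]) -> float:
        ordered = sorted(evts, key=lambda ev: ev.lesson_date)
        return ordered[-1].accuracy - ordered[0].accuracy

    top = max(by_skill, key=lambda s: gain(by_skill[s]))
    behaviour = OBSERVABLE.get(top, "use this week's new language at home")
    return f"This week {child} should be able to {behaviour}: try it and see."
```

The point of the sketch is the contract, not the heuristic: the output is always one falsifiable sentence, and every sentence traces back to specific skill events (see the trust stack in §7).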

**Persona signal**: six of eight personas in the v2 interviews spontaneously described wanting a "sentence, not a dashboard" as the hero surface. Elena (Warsaw): *"A paragraph. Written like a friend who teaches languages would write it."* Umut (Istanbul): *"A message from his teacher, or an AI that sounds like his teacher... two, three sentences."* The claim does not rest on the personas — the *evidence* is the corpus pattern in which specificity beats warmth — but the personas articulate the translation requirement in product-design terms.

**Primary jobs served by Bridge 1**: Reassurance-Seeker (the specific observation is their proof), Auditor (the depth layer underneath the narrative is their verification), Navigator (the roadmap is AI-derived projection).

**Inline validation**: the AI Progress Narrative as hero should be A/B tested against a structured-card layout in a 20-parent pilot over 14 days. Primary metric: 7-day return rate to the hub. Secondary: teacher-side load (is the AI-assisted narrative workflow sustainable for teachers?).

### Silo 2 — What the teacher knows

Teacher observations are Novakid's most underutilised asset. Teachers see the child for 25 minutes twice a week. They catch moments — a self-correction, a new tentative phrase, a confidence shift — that no AI can capture because those moments require interpretation of the child as a person, not just as a language-producing system.

**What parents see of this**: inconsistent. Across the corpus, the same product produces two very different teacher-feedback experiences:

- *"His teacher writes a review right after each lesson so that I know how to support him."* — Pınar (UK, 5★)
- *"In addition to sending daily notes, pictures and videos after each lesson."* — Saleh (5★)

vs.

- *"Notifications to parents are always positive. Even if the lesson went badly, everything is rose-coloured. They avoid giving realistic feedback... As a parent, it's impossible to have a dialogue with the teacher."* — realjaew (TR, Ekşi Sözlük)
- *"They everyday give me feedback, but I can't. I would like to write to the teachers to give them my feedback and I can't do it."* — Carmentxu (ES, 4★)

The inconsistency is structural, not stylistic. The teacher-feedback UX does not force specificity, and the parent cannot reply. Both properties need to change.

**The translation challenge**: teacher feedback must be (a) calibrated-honest (strengths *and* areas to reinforce, never uniformly positive), (b) specific (falsifiable claims, not summary adjectives), (c) bidirectional (the parent can reply with bounded context), (d) consistent (templated where it must be, AI-augmented without replacing the teacher's voice, with the teacher's name always on the artefact).
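
A sketch of what the post-lesson note could look like as a data contract that enforces (a)–(d) structurally rather than by teacher goodwill; the field names and validation rules are assumptions, not the current product schema.

```python
from dataclasses import dataclass

@dataclass
class PostLessonNote:
    """Sketch of a calibrated-honest post-lesson note; field names are assumptions.

    Encodes the four properties structurally: one strength AND one area to
    reinforce (never uniformly positive), specific falsifiable text rather than
    summary adjectives, the teacher's name always on the artefact, and a
    bounded parent reply channel.
    """
    teacher_name: str
    strength: str             # one specific, falsifiable observation
    area_to_reinforce: str    # one specific area, affirmatively phrased
    next_week_focus: str      # the 1-sentence forward thread
    parent_reply: str | None = None   # bounded: roughly two sentences of context

    def validate(self) -> None:
        if not (self.teacher_name and self.strength and self.area_to_reinforce):
            raise ValueError("A note must carry the teacher's name, a strength, and an area to reinforce.")
        if self.parent_reply and len(self.parent_reply.split(".")) > 3:
            raise ValueError("The parent reply is bounded to roughly two sentences.")
```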

**Persona signal**: Pilar (Madrid) articulates a register concern specific to the Co-Pilot job: *"a warm, light thread where I can send a message like 'she's nervous about tomorrow's lesson because she had a hard day at school' and the teacher can acknowledge and adjust."* The bidirectional channel's purpose isn't chat-style dialogue — it's *context transfer*, letting the parent inject one sentence of context that sharpens the next lesson.

**Primary jobs served by Bridge 2**: Reassurance-Seeker (teacher-as-trusted-voice), Co-Pilot (the reply channel), Auditor (specificity enables verification).

**Inline validation**: Bridge 2 prototypes should be tested for teacher load before parent-side pilots. If adding "one strength + one area to reinforce + 1-sentence next-week focus" to every post-lesson note pushes teacher per-lesson admin time past ~3 minutes, the mechanism needs AI-augmentation (see Bridge 1 interaction). Teacher NPS must not drop more than 5 points in a 30-day pilot (kill condition).

### Silo 3 — What the child experiences

The child's own experience of learning is Novakid's strongest under-exploited signal. Every persona in the v2 interviews, and a large portion of the real-quote corpus, identifies the *child's own desire to continue* as the parent's #1 retention signal.

**From the corpus**:
- *"Мой сын радуется и хочет заниматься дальше."* [My son is happy and wants to continue.] — Диана Микель, azbukakursov.ru
- *"A year later, studies with pleasure, almost independently."* — ZULYAen, kursfinder.ru
- *"Our children genuinely enjoy each class."* — Diana Cohen, Trustpilot
- *"My daughter's avatar collection keeps her begging for more lessons — it's genius!"* — unnamed mother, review aggregator

**What parents see of this**: almost nothing that the product captures explicitly. Parents infer the child's desire from behaviour at home (does the child ask for the next lesson? does the child resist?). The platform collects engagement telemetry (lesson attendance, Game World activity, star reactions) but does not surface the *child's own experience* to the parent in interpreted form.

**The translation challenge**: the child's experience must be surfaced without turning the child into a performance object. A "my child's favourite lesson this month" card, sourced from per-slide star reactions, is design-legitimate. A "my child's engagement score 82%" surface is not — it instrumentalises the child's experience into a metric parents cannot act on.
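
As a sketch of the design-legitimate version, here is one way the favourite-lesson card could be derived from per-slide star reactions without ever exposing a numeric engagement score to the parent; the input shape and field names are assumptions.

```python
from collections import defaultdict

def favourite_lesson_card(star_reactions: list[dict]) -> dict:
    """Build the 'favourite lesson this month' card from per-slide star reactions.

    `star_reactions` is assumed to be rows like
    {"lesson_id": "L-204", "topic": "Space animals", "stars": 3}.
    The card names the topic but deliberately exposes no numeric score.
    """
    totals: dict[str, int] = defaultdict(int)
    topics: dict[str, str] = {}
    for r in star_reactions:
        totals[r["lesson_id"]] += r["stars"]
        topics[r["lesson_id"]] = r["topic"]

    top = max(totals, key=totals.get)  # lesson with the most stars this month
    return {
        "headline": f"Favourite lesson this month: {topics[top]}",
        "note": "Based on the stars your child tapped during the lesson.",
        # No "engagement score" field: a number here instrumentalises the child's experience.
    }
```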

**Persona signal**: Umut (Istanbul): *"A smile after class is the best data I have."* Pilar (Madrid): *"If she's laughing, it means she's understanding at the speed of humour, which is faster than the speed of comprehension."* Parents detect the child's experience through proxies. The hub should *surface* those proxies — not replace them with abstract metrics.

**Primary jobs served by Bridge 3**: Reassurance-Seeker (child-enjoys is proof-of-working), Co-Pilot (share-the-moment), Outsourcer (silent child-satisfied signal).

**Inline validation**: before surfacing child-experience data, conduct a brief parent-ethics audit: which signals do parents consent to seeing surfaced about their child? Child-experience data is sensitive. Opt-in by default in markets where child privacy norms are strict (DE, FR).

### Silo 4 — What the real world sees

Real-world capability is where parents *actually* measure progress. Across the corpus and across every v2 persona interview, the strongest proof-type parents cite is a real-world moment: ordering food in English, speaking to foreign cousins, translating for family, being approached by an English child at a beach and holding a conversation.

**From the corpus**:
- *"When we travel abroad, they are able to communicate clearly and express their needs with ease."* — Ozge Gungor Ulug, TR, 5★
- *"He's already fluent and can communicate easily with American friends."* — Mariem, FR, 5★
- *"After six months — the child started speaking short phrases, understanding what I say in English on vacation."* — Avotna, kursfinder.ru, 5★

**From the v2 personas**:
- Elena's Greek menu moment
- Umut's Marmaris café moment
- Giulia's Lake Como frog moment
- Pilar's Christening compliment moment
- Ahmed's London Minecraft weekend
- Sophie's Brittany beach conditional-sentence moment
- Hans's Tirol ski hotel moment

This is the canonical evidence parents hold against their subscription cost. It is anecdotal, scarce, and unstructured — and because it's unstructured, it doesn't compound. A parent with three real-world moments across six months has more conviction than a parent with three hundred AI-tracked skill-points. The ratio of psychological weight is closer to 100:1 than 1:1.

**What the platform does with this**: nothing. The platform has no capture surface for real-world moments. Parents store them in their own phones, in their own notes, in their own family WhatsApp groups.

**The translation challenge**: provide a *lightweight capture affordance* that converts scarce anecdotal moments into structured progress evidence. Parent reports a moment in 10–30 seconds (via push notification, voice note, text, photo). The moment enters the progress record. The AI narrative threads the moment into the weekly message. Other moments accumulate into a year-in-review artefact.
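
A sketch of the capture-to-record pipeline under these assumptions (the `Moment` shape and function names are illustrative): a captured moment is a small structured record, and threading it into the weekly narrative is a one-line merge rather than a new reporting system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Moment:
    """A parent-captured real-world moment; the shape is illustrative, not the product schema."""
    child_id: str
    captured_on: date
    channel: str        # "voice_note" | "text" | "photo"
    description: str    # e.g. "Ordered her own ice cream in English on holiday"

def thread_into_weekly_narrative(narrative: str, moments: list[Moment]) -> str:
    """Thread the most recent parent-captured moment into the weekly AI narrative."""
    if not moments:
        return narrative
    latest = max(moments, key=lambda m: m.captured_on)
    return f'{narrative} You also told us: "{latest.description}". We have added it to the progress record.'
```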

**Persona signal**: every v2 persona was asked about DIY tracking. Every single one described some form of it — Elena's phone note, Umut's video archive, Giulia's spreadsheet, Anastasia's notebook, Hans's project log, Pilar's illustrated diary, Sophie's shared Notes document. *Every single one would stop or simplify their DIY system if the platform captured moments well.* This is one of the clearest product-demand signals in the entire research.

**Primary jobs served by Bridge 4**: Co-Pilot (participation channel), Reassurance-Seeker (moment-as-proof), every parent at holiday-season and family-gathering moments.

**Inline validation**: moment-capture affordance needs A/B on friction calibration. Too-fast capture (one tap) may feel insubstantial; too-long capture (form + photo + description) will be abandoned. Hypothesis: optimal is voice note + optional text refinement, ~30 seconds end-to-end. Test in a 4-week parent pilot.

### Silo 5 — What the school measures

For most markets in Novakid's footprint, school English is the parent's *external validator of choice*. A platform-teacher's positive note is weaker evidence to a parent than the school teacher's comment on the child's report card. Across the corpus:

- *"My daughter has been taking this course for over three years and I must say the improvements are visible, even acknowledged by her school teachers."* — Dorotea, Trustpilot Italy
- *"Ребёнок лучший в классе."* [The child is best in class.] — Наталья, progbasics.ru
- *"Zero problems with English at school, absolutely none. Just one lesson a week with a premium teacher — excellent result."* — Pantyusha5, kursfinder.ru
- *"English teacher at school wrote in her Zeugnis 'spricht sicher in einfachen Sätzen' — speaks confidently in simple sentences."* — Hans (v2 persona)

School validation is the proof-type that closes the renewal debate for a parent who is uncertain. It is also the proof-type the platform has the weakest grip on.

**What parents see of this**: the platform does not connect to school systems. Parents infer alignment themselves — "he reads A2 level and he's in grade 3, I think that's good." The inference is often wrong, and the platform offers no alignment guidance.

**The translation challenge**: map Novakid's curriculum position to known school-English benchmarks per market. For Russia, map to Russian school-grade expectations by year. For France, map to the French school English curriculum pace and to the Baccalauréat track. For Italy, map to Cambridge YLE / KET / PET — the widely-recognised external certificate pathway. For Poland, map to Polish school-English expectations plus CEFR ladder. Each market needs a local alignment artefact.
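
One way the per-market alignment artefact could be represented is a plain curated lookup, sketched below. Every value shown is a placeholder to illustrate the structure, not a real curriculum mapping; real entries would come from local pedagogy specialists, per the MVP below.

```python
# Placeholder structure for a per-market alignment artefact. Every value below is
# an illustrative placeholder, NOT a real curriculum mapping.
SCHOOL_ALIGNMENT = {
    "IT": {  # Italy: Cambridge YLE / KET / PET pathway
        "novakid_level_2": "around the Cambridge YLE Movers range (placeholder)",
        "novakid_level_3": "around the Cambridge YLE Flyers range (placeholder)",
    },
    "PL": {  # Poland: school-English expectations plus the CEFR ladder
        "novakid_level_2": "around A1+ on the CEFR ladder (placeholder)",
        "novakid_level_3": "around A2 on the CEFR ladder (placeholder)",
    },
}

def alignment_line(market: str, novakid_level: str) -> str:
    """Return the parent-facing alignment sentence for a market, with a safe fallback."""
    benchmark = SCHOOL_ALIGNMENT.get(market, {}).get(novakid_level)
    if benchmark is None:
        return "We are still building the school alignment for your region."
    return f"At this level, your child is working {benchmark}."
```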

**Persona signal**: Anastasia (Moscow) articulated this most sharply: *"I want the school benchmark, not the European benchmark. The platform comes from Europe, the framework is European, but my son is going to be tested in a Russian school context."* The CEFR framework is the platform's lingua franca; the school system is the parent's. The Value Hub must speak both.

**Primary jobs served by Bridge 5**: Navigator (external alignment), Investor (external validation as ROI proof), Reassurance-Seeker (school-teacher endorsement as emotional proof).

**Inline validation**: Bridge 5 is a multi-year alignment-data project, not a six-week feature. MVP: manually-curated school-curriculum mappings for top 5 markets (RU, IT, TR, PL, ES) maintained by local pedagogy specialists. Test with 20 parents per market before scaling.

---

## 3. The six jobs across the five bridges

The matrix below shows which bridges primarily serve which jobs. This is a design prioritisation tool — it answers "for this job, which bridge matters most?" and therefore "for this persona's active moment, which surface must work?"

| | Bridge 1 — AI | Bridge 2 — Teacher | Bridge 3 — Child | Bridge 4 — Real World | Bridge 5 — School |
|---|---|---|---|---|---|
| **Reassurance-Seeker** | **Primary** (hero sentence) | **Primary** (teacher voice) | **Primary** (child's enjoyment) | Supporting (moments as proof) | **Primary** (external validation) |
| **Auditor** | **Primary** (depth layer, falsifiability) | **Primary** (specificity, bidirectional) | Supporting | Supporting | **Primary** (external verification) |
| **Investor** | Supporting (cost-per-outcome math) | Supporting | Supporting | Supporting (moments as ROI proof) | **Primary** (external academic ROI) |
| **Navigator** | **Primary** (curriculum roadmap) | Supporting | Supporting | Supporting | **Primary** (school-track alignment) |
| **Co-Pilot** | Supporting | **Primary** (bidirectional channel) | **Primary** (share-the-moment) | **Primary** (capture affordance) | Supporting |
| **Outsourcer** | Supporting (anomaly alerts only) | Supporting (minimal digest) | **Primary** (silent-satisfaction signal) | Supporting | Supporting |

**Readings of the matrix**:

- Every bridge serves at least two jobs primarily. None is niche.
- The Co-Pilot and Reassurance-Seeker jobs are the most bridge-dependent — each relies on three or four bridges as primary surfaces. Designs that serve these jobs must weave multiple bridges into a coherent experience.
- The Outsourcer is the least bridge-demanding — primarily served by Bridge 3 (child's silent satisfaction) with anomaly alerts from Bridge 1. Over-engineering other bridges for this job actively harms it (over-engagement = churn signal).
- Bridge 5 (school alignment) disproportionately serves the decision-maker jobs — Navigator, Investor, Auditor. These are the parents at renewal. Bridge 5 is therefore a retention bridge, not a daily-engagement bridge.

---

## 4. Five tensions the Value Hub must navigate

The bridges alone are not enough. Each bridge must navigate tensions that emerge from serving six jobs whose needs sometimes conflict.

### Tension 1 — Honesty vs. Anxiety

Auditors demand honesty including specific weaknesses; Reassurance-Seekers can be destabilised by the same honesty delivered without care.

**Resolution**: *specific, bounded, warmly-phrased honesty.* Name one strength and one area to reinforce per week — not a laundry list. Use affirmative language even when pointing to weakness ("Mia is working through past-tense irregular verbs, which is the hardest step at her age") rather than critical language ("Mia is struggling with past tense"). Evidence: the corpus shows uniformly positive feedback (realjaew) *also* causes distrust — so softening to uniform positivity is *not* the resolution. The resolution is calibrated specificity.

### Tension 2 — Control vs. Convenience

Co-Pilots want a role; Outsourcers want to be left alone. Some parents want agency in the child's learning; others have deliberately delegated.

**Resolution**: *opt-in participation surfaces.* Participation affordances (moment capture, teacher reply channel, weekly prompts) exist but are never required. No gamification of parent engagement. No "you haven't visited in 14 days" nudges. The Outsourcer receives anomaly alerts only; the Co-Pilot receives optional weekly prompts. Each parent's pattern is respected.

### Tension 3 — Data vs. Story

Auditors and Investors want numbers; Reassurance-Seekers and Co-Pilots want sentences. Most real parents want both but in different proportions.

**Resolution**: *progressive disclosure with sentence-first hierarchy.* Layer 1 is always a sentence. Layer 2 adds structured evidence. Layer 3 is the full depth audit trail. The sentence is written so that if a parent stops there, they have an actionable conclusion. The chart under the sentence is written so that if a parent drills in, they find receipts for the sentence.
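
A sketch of the sentence-first payload this implies — three layers, with layer 1 always renderable on its own; the layer names and field shapes are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class HubView:
    """Sentence-first progressive disclosure; layer names are assumptions.

    Layer 1 is always one standalone, actionable sentence; layer 2 holds the
    structured evidence behind it; layer 3 is the full audit trail. A client can
    render layer 1 alone and the parent still leaves with a conclusion.
    """
    layer1_sentence: str                                          # the hero sentence
    layer2_evidence: list[dict] = field(default_factory=list)     # structured receipts
    layer3_audit_trail: list[dict] = field(default_factory=list)  # full depth, on demand

view = HubView(
    layer1_sentence="Mia can now tell a short story about her weekend in English. Ask her tonight.",
    layer2_evidence=[{"source": "teacher_note", "claim": "used three past-tense verbs unprompted"}],
)
```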

### Tension 4 — Platform vs. Real World

The platform's internal measurements (CEFR progress, skill rings) feel "not real" to many parents; real-world moments feel real but don't accumulate into a structured record.

**Resolution**: *the Value Hub connects the two.* Every AI-generated claim in the narrative links to a real-world-observable behaviour ("Mia can now tell a story about last weekend in English"). Every real-world moment captured by the parent gets threaded into the platform's record. The platform becomes the connection layer between internal measurement and external verification.

### Tension 5 — System vs. Child

No dashboard metric is as strong a retention signal as the child's own desire to continue. Yet most dashboards are designed as if the parent is the primary user.

**Resolution**: *the child's signal is a first-class surface.* The hub surfaces "the child enjoys this" as prominently as any adult-readable metric. Gamification exists for the child (avatar, rewards, streaks) *on the child side*; the hub surfaces its results for the parent without pressuring the child into performance-for-parent.

---

## 5. Cultural overlay

Full treatment in `cultural-analysis.md`. Summary: *jobs are universal, proof-types are culturally specific*. The Value Hub is one architecture with a culturally-tunable proof layer.

**Market-by-market highest-resonance proof-type**:

| Market | Hero proof-type | Dominant job at renewal |
|---|---|---|
| Russia | School ranking + opt-in competitive track | Reassurance-Seeker + Investor |
| Italy | Explicit value-per-euro math | Investor |
| Turkey | Real-world moment capture (travel, confidence) | Reassurance-Seeker |
| Poland | CEFR ladder + sub-level granularity | Navigator |
| Germany | Social-interaction milestones | Navigator + Co-Pilot |
| France | School-gap-complement + cultural content | Navigator |
| Spain | Shyness-overcome + Cambridge pathway | Co-Pilot + Reassurance-Seeker |

**What to block per market**:
- Peer ranking: default off in DE/FR/JP; opt-in only in RU/KR/ID/TR
- Cost-per-word: default visible in IT/TR; default hidden in FR/JP/DE
- Public sharing: private by default in JP/FR

**Universal regardless of culture**:
- Plateau pre-warning (SLA research is not culturally specific)
- Trust stack hygiene (transparent billing, teacher continuity)
- Progressive disclosure
- Bidirectional teacher channel
- Real-world moment capture (though with culturally-specific framing of what counts as "moment")

---

## 6. The plateau problem (in depth)

The single clearest design imperative from the combined corpus + theory + persona evidence is proactive plateau management. This section treats it separately because it is the product's most expensive silence, and the most asymmetrically cheap intervention available.

### What the evidence says

**From corpus**:
- *"Between month four and month eight it just felt flat. I actually considered canceling."* — Elena synth, corroborated by real quotes
- *"Month ten, roughly. Chiara had been learning for nearly a year and nothing about her interactions with English outside lessons had changed much. I did the spreadsheet thing. 45 lessons. 900 euros."* — Giulia (v2 persona), real-pattern confirmed in corpus
- Multiple 2–3★ reviews reference flat periods as cancellation triggers (Emil, Andrey, several TR complaints)

**From theory**:
- British Council SLA plateau research: intermediate plateau is a documented phase of language acquisition where comprehension outpaces production, creating the sensation of "not progressing" even when internal skills are developing
- Parental Educational Anxiety Scale (PEAS): the plateau is the moment most strongly correlated with parent anxiety being transmitted to the child, which is the mechanism that drives short-term churn

**From personas**:
- Every v2 persona described a plateau-like moment in their narrative (months 4–10 typically)
- Every one would have benefited from proactive plateau explanation
- Several (Elena, Giulia, Hans) explicitly said they almost cancelled during plateau and were rescued by *external* sources (sister with linguistics degree, school teacher) — not by the platform

### The intervention

A *proactive plateau protocol* — the moment the data detects (or the lesson calendar predicts) an approaching plateau, the parent receives a pre-registered message that:

1. Names the plateau explicitly ("we're entering a phase where outward progress will feel slower for 2–4 weeks — that's expected")
2. Explains the underlying mechanism ("comprehension develops before production; your child's brain is absorbing faster than it's outputting")
3. Credits the British Council research for authority (parents trust external citations in plateau moments — they're in audit mode)
4. Offers a bounded action ("one thing to try: record a 1-minute video of your child speaking, put it aside, compare in 4 weeks — you'll hear a difference even when daily speech hasn't changed")
5. Commits to a post-plateau check-in ("we'll message you in 4 weeks when your child typically starts producing the absorbed material")

### Why this works

It re-contextualises the parent's active anxiety into an expected-and-understood phase. It provides a diagnostic tool (the video comparison) that verifies progress is happening invisibly. It shifts the parent from "considering cancellation" to "watching for the breakthrough."

### Cost and feasibility

Cost: ~zero. One pre-written message, one trigger rule, one calendar reminder. Maybe a teacher-side flag for unusually severe plateaus that warrant a teacher call (higher-touch but rare).
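
For concreteness, the "one trigger rule" could be as small as the sketch below: a hypothetical check combining flat recent production scores with the typical plateau window. The thresholds and the four-lesson lookback are assumptions to be tuned, not validated values.

```python
def plateau_trigger(production_scores: list[float],
                    months_since_start: int,
                    flat_threshold: float = 0.02,
                    window_months: tuple[int, int] = (2, 8)) -> bool:
    """One possible trigger rule for the plateau protocol; thresholds are assumptions.

    Fires when (a) measured production has been roughly flat over the last few
    lessons and (b) the subscription age falls inside the typical plateau window.
    """
    in_window = window_months[0] <= months_since_start <= window_months[1]
    if len(production_scores) < 4 or not in_window:
        return False
    recent = production_scores[-4:]
    return (max(recent) - min(recent)) <= flat_threshold
```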

### Inline validation

Plateau protocol should be validated as a standalone A/B. Arm A: current behaviour (no plateau messaging). Arm B: plateau protocol activated for parents who hit the trigger criteria (flat production data + typical-pattern calendar match). Primary outcome: 30-day churn rate. Secondary: parent NPS at 30 days. Directional hypothesis: churn-rate delta ≥ 5pp. This is the highest-expected-ROI single test in the entire Value Hub backlog.
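
As a quick sanity check on what the pilot can and cannot establish, a standard two-proportion sample-size calculation is sketched below; the 20% baseline churn rate in the example is an assumption for illustration, not a Novakid figure. It also shows why the hypothesis is framed as directional: a conventional one-sided significance read on a 5pp delta needs far more than 100 parents per arm.

```python
from math import sqrt
from statistics import NormalDist

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Two-proportion sample size per arm (one-sided), for the plateau churn test.

    p_control / p_treatment are 30-day churn rates. The 20% baseline used in the
    example below is an assumption, not a Novakid figure.
    """
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided, matching the directional hypothesis
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treatment * (1 - p_treatment))) ** 2
    return round(numerator / (p_control - p_treatment) ** 2)

# A 20% -> 15% churn drop (the >=5pp hypothesis) needs roughly 700 parents per arm
# for a significance read; 100 per arm (Section 10) supports a directional read only.
print(n_per_arm(0.20, 0.15))  # ~713
```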

---

## 7. The trust stack

The Value Hub cannot succeed as a data visualisation while the underlying layers leak trust. There are five stack layers; failure at any one destroys the layers above it.

| Layer | What breaks trust | Evidence | Fix |
|---|---|---|---|
| **5. Narrative** | AI-generated summaries disconnected from source data | Giulia (v2): *"If there's no 'based on what,' I don't trust the sentence"* | Every narrative claim traceable to source evidence (teacher note, lesson recording, AI-observed skill event) |
| **4. Data** | Metrics without interpretation, or uniform positivity | realjaew TR: *"notifications are always positive"* | Calibrated honesty, interpretation accompanying every metric, depth layer available |
| **3. Teacher continuity** | Teacher changes without parent input | Ozge (TR): *"trauma in our lives"*; Ksenia: *"told to choose others"* | Teacher-change workflow with parent consultation; "your teachers" history visible |
| **2. Platform reliability** | Unreachable platform, broken progress features | Filiz (TR): *"inaccessible for days"*; Oya: *"progress button requests audio"* | Reliability SLA visible; broken features block all product advancement |
| **1. Billing** | Auto-renewal surprises, pause-doesn't-pause, refund hostility | Rafal, Rafi Camhi, Jeanne Alexandre, multiple 1★ across all markets | Transparent billing, clear pause/cancel paths, no hostility |

**The stack point**: the Value Hub is Layer 4–5. Layers 1–3 are the foundation. Any Value Hub design that improves Layers 4–5 while Layers 1–3 remain broken is shipping a dashboard on top of a trust sinkhole. Reviews bear this out — parents with billing complaints never credit the platform with educational value, regardless of how well the dashboard performs.

**Design implication**: the Value Hub project, scoped strictly as a Layer 4–5 effort, will under-deliver on retention. The project should be scoped to include at minimum auditable fixes at Layer 1 (billing transparency) and Layer 3 (teacher continuity policy) as prerequisites to Layer 4–5 success.

**Inline validation**: before Value Hub pilot, measure trust-layer health via a 5-question "trust audit" survey sent to churned parents: "which of these was a factor in your cancellation?" If the bottom three layers dominate the response, the Value Hub by itself cannot fix retention.

---

## 8. The design implications summary

Consolidating the analysis into actionable design implications. Each maps back to the bridges, jobs, and tensions above.

| # | Design imperative | Bridges | Jobs primarily served | Source evidence |
|---|---|---|---|---|
| 1 | **AI Progress Narrative as hero** — one specific, falsifiable sentence at the top, layered with depth | 1 | All 6 (via progressive disclosure) | Corpus: DIY-tracking pattern; Personas: universal sentence-first preference; Theory: SDT competence |
| 2 | **Calibrated-honest teacher feedback** — one strength + one area to reinforce + forward thread, teacher name always present | 2 | Reassurance-Seeker, Auditor, Co-Pilot | Corpus: realjaew trust-break; Personas: Elena, Giulia, Hans on specificity |
| 3 | **Bidirectional teacher channel, bounded** — parent sends up to 2 sentences of context per week | 2 | Co-Pilot | Corpus: Carmentxu; Personas: Pilar explicitly |
| 4 | **Real-world moment capture** — voice note, photo, or short text → structured progress record | 4 | Co-Pilot, Reassurance-Seeker | Corpus: Greek menu / Marmaris moments; Personas: universal DIY-tracking |
| 5 | **Curriculum roadmap visible** — where you are, what's next, difficulty progression | 1, 5 | Navigator, Investor | Corpus: Nejat, Lyudmila; Personas: Elena, Sophie |
| 6 | **Proactive plateau protocol** — named, explained, bounded action | 1 | Reassurance-Seeker (primary), Investor | Section 6 above |
| 7 | **School-curriculum alignment per market** | 5 | Navigator, Investor, Reassurance-Seeker | Section 2.5 + corpus cultural patterns |
| 8 | **Child's-enjoyment signal surfaced** — per-lesson enjoyment, favourite topics, teacher-flagged moments | 3 | All 6 | Corpus: universal "child wants more" pattern |
| 9 | **Year-in-review artefact** — annual synthesis, culturally calibrated, shareable | 1, 3, 4, 5 | Reassurance-Seeker, Investor | Corpus: parents describe "what we tell grandma"; Personas: universal proud-moment narrative |
| 10 | **Trust stack hygiene prerequisites** — billing transparency, cancellation path, teacher-continuity policy | 2, foundational | All 6 | Section 7 |
| 11 | **Multi-child support** — side-by-side child panels by default, no child-switcher as primary UX | 1 | Sophie (v2) edge case, but generalisable | Personas: Sophie specifically; corpus has less signal here |
| 12 | **Non-English-speaker parent support** — weekly digest in parent's language, not child's | 2, 5 | Hans (v2) edge case | Personas: Hans specifically; theoretically grounded |

---

## 9. What was killed and why

The Value Hub design rejects several popular directions. Each rejection is evidence-backed.

### Killed — story-first dashboard (narrative without data receipts)

*Rationale*: Outsourcers and busy parents read sentences; Auditors and Investors need data underneath those sentences. A story-only hero without traceable evidence fails Auditor trust-tests ("if there's no 'based on what,' I don't trust the sentence" — Giulia v2). The corrected approach: narrative + linked evidence trail, not narrative alone.

### Killed — peer benchmarking as default

*Rationale*: culturally context-dependent. Valued in RU, KR, ID, TR under specific opt-in framings; actively harmful as default in JP, FR, DE, and for Pre-K parents across all markets. The earlier cultural-variation analysis (preserved from the old research, corroborated by open coding, re-anchored in Hofstede individualism dimension) is the basis. Opt-in, market-gated, never default.

### Killed — cost-per-word as universal metric

*Rationale*: embraced by Investors in IT, TR, ID, KR; rejected as "strange and dystopian" by FR, JP, DE, and by Reassurance-Seeker / Co-Pilot jobs across markets. Same reasoning as peer benchmarking — culturally and job-contextually specific. Opt-in, market-defaulted, never the hero surface.

### Killed — gamification of parent engagement

*Rationale*: directly harms the Outsourcer job (30% of our persona set, heavily represented in long-term-loyal corpus users). Streaks, badges, "you haven't visited" nudges are anti-patterns for the legitimate delegation strategy. No engagement metrics for parents.

### Killed — homework-manager workflows pushed to parent

*Rationale*: fails the Co-Pilot job by violating the "bounded participation" principle. Parents who want to participate will opt into 5-minute weekly activities; forcing 30-minute homework management turns Co-Pilots into angry Outsourcers or ex-customers (Margarita IT). Bounded, optional, high-quality prompts only.

---

## 10. Validation roadmap (inline throughout, consolidated here)

Every load-bearing design claim in this document carries an inline validation recommendation. Consolidated list for the Post-Launch Validation Roadmap section of the research document:

| Claim | Validation method | Sample | Cost | Timeline |
|---|---|---|---|---|
| AI Progress Narrative beats structured-card layout as hero | A/B test | 20 parents | Internal | 14 days |
| Plateau protocol reduces 30-day churn ≥ 5pp | A/B test, parents matching plateau-trigger criteria | 200 parents (100/arm) | Internal | 60 days |
| Calibrated honesty (one strength + one area) outperforms uniform positivity | A/B test, post-lesson note variants | 30 teachers / 300 parents | Internal + teacher training | 30 days |
| Bidirectional teacher channel reduces friction without overloading teachers | Pilot | 20 teacher-parent pairs | Internal | 30 days |
| Real-world moment capture is actually used at ≥50% active rate | Pilot | 50 parents | Internal | 28 days |
| Curriculum roadmap reduces Navigator-job-related complaints | Cohort comparison, complaints volume | N/A (service-ticket data) | Internal | 90 days |
| Six-jobs framework reflects real parent behaviour | Card-sort with real parents, 3 markets | 12 parents × 3 markets | ~$800 via UserInterviews.com | 14 days |
| Cultural proof-type hypotheses per market | Per-market A/B of hero configurations | 100 parents/market × 5 markets | Internal | 6 weeks |
| Non-English-speaker parent digest (Hans edge case) improves DE/PL retention | A/B by language | 200 parents | Internal | 60 days |
| Trust-stack diagnosis | 5-question "trust audit" to churned parents | 500 churned parents (all markets) | ~$0 — internal survey | 2 weeks |

**Budget-conservative minimum validation before production**: the first three tests (AI Narrative hero, plateau protocol, calibrated honesty) cover the highest-impact claims and can run in ~90 days on internal infrastructure with ~$0 external spend. The six-jobs card-sort (~$800) is the single most valuable external validation. Total external cost for a rigorous pre-production validation: under $3,000.

---

## 11. Post-launch validation recommendation (the test-assignment caveat)

This research is a design hypothesis grounded in 128 real quotes + 9 established frameworks + 8 generative persona interviews. It has not been validated with real Novakid parents. Before production commitment:

- **Minimum**: card-sort of the six jobs with 12 real parents across 3 markets ($800)
- **Recommended**: the top 3 validation tests in Section 10 (all can run on internal infrastructure, ~90 days)
- **Comprehensive**: full Section 10 table as 12-month pre/post-launch validation roadmap

The jobs, tensions, and bridges framework is robust to partial validation failure. If one job turns out to be unused, the architecture still serves the other five. If the plateau protocol A/B underperforms, the broader Value Hub still addresses the five silos. The framework is built to be falsifiable layer by layer, not all-or-nothing.

---

## Appendix — Source map

Each claim in this document is anchored in specific files. This is for auditability.

| Section | Primary sources |
|---|---|
| §0 Methodology reset | `open-coding-analysis.md`, `archetypes.md` §Appendix |
| §1 Executive summary | Combined evidence synthesis |
| §2 Five silos | Open coding Theme A, `additional-parent-quotes.md`, `theoretical-frameworks.md` |
| §3 Jobs × Bridges | `archetypes.md` (jobs), bridges from §2 |
| §4 Five tensions | Open coding Themes B, D, F + persona synthesis D-section responses |
| §5 Cultural overlay | `cultural-analysis.md` |
| §6 Plateau protocol | Theoretical frameworks §9 (British Council), corpus plateau patterns, persona universal signals |
| §7 Trust stack | Open coding Theme E, §§trust-break codes, corpus 1–2★ reviews |
| §8 Design implications | Integrates all prior sections |
| §9 Kill-list | Cultural analysis + corpus rejection signals + persona explicit rejections |
| §10 Validation roadmap | Per-claim inline recommendations throughout document |

---

## Appendix — Persona interview index

| # | File | Primary job | Secondary | Market | Edge case |
|---|---|---|---|---|---|
| 1 | `persona-interviews/01-elena-warsaw.md` | Navigator | Auditor | PL | — |
| 2 | `persona-interviews/02-umut-istanbul.md` | Reassurance-Seeker | Co-Pilot | TR | Father-primary |
| 3 | `persona-interviews/03-giulia-milan.md` | Investor | Auditor | IT | Price-sensitive |
| 4 | `persona-interviews/04-anastasia-moscow.md` | Reassurance-Seeker | Outsourcer | RU | — |
| 5 | `persona-interviews/05-hans-munich.md` | Co-Pilot | Navigator | DE | Non-English parent |
| 6 | `persona-interviews/06-pilar-madrid.md` | Co-Pilot | Reassurance-Seeker | ES | Shy child |
| 7 | `persona-interviews/07-ahmed-riyadh.md` | Outsourcer | Investor | MENA | Long-term loyal |
| 8 | `persona-interviews/08-sophie-lyon.md` | Navigator | Reassurance-Seeker | FR | Two children, different levels |
