METHODOLOGY
How Your Skill Score Works
Full transparency on how we measure, weight, and report your professional performance.
Composite Skill Score
Your Skill Score is a single weighted metric across five professional dimensions, computed weekly.
Skill Score = (Typing × 0.25) + (Writing × 0.25) + (Resume × 0.25) + (Reading × 0.15) + (Consistency × 0.10)
- Typing Performance (25%): WPM, accuracy, consistency, and error rate in professional contexts.
- Writing Quality (25%): Clarity, authority, brevity, structure, tone, and actionability of executive communications.
- Resume Impact (25%): Quantification, impact framing, executive presence, and ATS compatibility.
- Reading Performance (15%): Speed, comprehension, and information extraction from professional material.
- Consistency Index (10%): Cross-module engagement regularity: 100 × (1 − CV), where CV = standard deviation / mean over ≥4 weeks.
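The weighted sum and the Consistency Index can be sketched in Python as follows (dimension names, sample scores, and helper names are illustrative, not the production implementation):

```python
from statistics import mean, pstdev

# Weights from the Skill Score formula above.
WEIGHTS = {
    "typing": 0.25,
    "writing": 0.25,
    "resume": 0.25,
    "reading": 0.15,
    "consistency": 0.10,
}

def consistency_index(weekly_scores):
    """Consistency Index: 100 × (1 − CV), CV = std dev / mean, over ≥4 weeks."""
    if len(weekly_scores) < 4:
        raise ValueError("Consistency Index requires at least 4 weeks of data")
    cv = pstdev(weekly_scores) / mean(weekly_scores)
    return max(0.0, 100 * (1 - cv))

def skill_score(scores):
    """Weighted sum of the five 0-100 dimension scores."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

scores = {"typing": 80, "writing": 70, "resume": 75, "reading": 60, "consistency": 90}
# 80×0.25 + 70×0.25 + 75×0.25 + 60×0.15 + 90×0.10 = 74.25
print(skill_score(scores))
```

Note that a perfectly steady user (zero week-to-week variance) earns a Consistency Index of 100.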
Coverage Factor
If you haven't used all five modules in a given week, your score is adjusted with a coverage factor:
effective_score = raw_weighted_sum × (0.6 + 0.4 × (modules_active / 5))
This prevents inflated scores from a single high-scoring module while still rewarding partial engagement.
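The adjustment is a single scaling step; a minimal sketch (the function name is ours, for illustration):

```python
def effective_score(raw_weighted_sum, modules_active):
    """Scale the raw weighted sum by the coverage factor.

    The factor ranges from 0.68 (one active module) to 1.0 (all five).
    """
    coverage = 0.6 + 0.4 * (modules_active / 5)
    return raw_weighted_sum * coverage

# A raw score of 80 with only 2 of 5 modules active:
# coverage = 0.6 + 0.4 × 0.4 = 0.76, so the effective score drops to 60.8
print(effective_score(80, 2))
```

With all five modules active the factor is exactly 1.0, so fully engaged users are never penalized.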
Weekly Snapshots
- ✓ Snapshots are computed once per week (Sunday batch job), not on demand.
- ✓ Free tier: Current-week snapshot only. History is not retained.
- ✓ Pro and above: Full history stored permanently. Lifetime average and trend lines available.
- ✓ Each snapshot covers only the current week's assessments; it is not a multi-week rolling window.
Weighted Performance Index (WPI)
For users with 6+ weeks of history, we compute the WPI for benchmarking:
engagement_factor = min(1.0, weeks_active / 12)
consistency_multiplier = 0.7 + 0.3 × engagement_factor
WPI = lifetime_skill_average × consistency_multiplier
- Percentile = (users_below + 0.5 × users_equal) / total × 100
- Benchmarks activate once a cohort reaches 500 paying users.
- Percentiles are computed separately for the global cohort and the paying-user cohort.
- The 12-week engagement cap prevents gaming the index through duration alone.
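The WPI and the midrank percentile above can be sketched together (the cohort list is illustrative and the function names are ours):

```python
def wpi(lifetime_skill_average, weeks_active):
    """Weighted Performance Index, with engagement capped at 12 weeks."""
    engagement_factor = min(1.0, weeks_active / 12)
    consistency_multiplier = 0.7 + 0.3 * engagement_factor
    return lifetime_skill_average * consistency_multiplier

def percentile(user_wpi, cohort_wpis):
    """Midrank percentile: ties contribute half their count."""
    below = sum(1 for w in cohort_wpis if w < user_wpi)
    equal = sum(1 for w in cohort_wpis if w == user_wpi)
    return (below + 0.5 * equal) / len(cohort_wpis) * 100

# 6 weeks of history: engagement 0.5, multiplier 0.85, so a lifetime
# average of 80 yields a WPI of 68.
print(wpi(80, 6))
print(percentile(68.0, [50.0, 60.0, 68.0, 70.0, 90.0]))  # prints 50.0
```

Because the engagement factor is capped at 1.0, weeks 13 and beyond add nothing: only the lifetime skill average can move the WPI after that point.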
Known Limitations
- ⚠ Scores are relative to our assessment rubrics, not externally validated certifications.
- ⚠ Typing tests measure isolated typing, not in-context workflow speed.
- ⚠ Writing analysis uses structured heuristics, not human expert evaluation.
- ⚠ Benchmarking requires a minimum cohort size; early data may shift as the user base grows.