How the success of a new game is assessed
A game's "success" is not a single revenue chart. It is an agreed set of goals and metrics: we understand what value we deliver, how much delivering it costs, how often the player returns, and how stable the technology and compliance are. Below is a practical framework used by producers, analysts, and marketers in the first 90-180 days of a project's life.
1) Start with goals and North Star
1. Product Vision → North Star Metric (NSM). One metric that reflects value to the player (for example: "the number of completed target sessions per user per week" or "hours of meaningful gameplay per MAU").
2. Metrics ladder: the NSM is supported by the layers below:
- Product/behavior: onboarding, activation, retention.
- Finance: ARPDAU/ARPPU, LTV, margin.
- Marketing: CPI, ROAS, payback.
- Tech: crash/ANR, latency, uptime.
- Quality/reputation: rating, NPS/CSAT, complaints.
- Compliance/RG: coverage of limits, timeliness of interventions.
2) Main product indicators
Activation (FTUE): the share of new users who complete the tutorial or the first target action (FTB/purchase/match).
Retention: D1 / D7 / D30; sticky factor = DAU/MAU.
Frequency and depth: sessions per day, median session length, adoption of key features.
HEART framework: Happiness (CSAT/NPS), Engagement, Adoption, Retention, Task Success.
Onboarding quality signals: time-to-value ≤ 90 seconds, ≥ 80% of users see the core mechanics, first-screen bounce at or below the target threshold.
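To make these signals operational, here is a minimal Python funnel sketch; the event names and the (user_id, event) layout are hypothetical placeholders for whatever your analytics export provides:

```python
# Minimal activation-funnel sketch: share of new users reaching each step.
# Event names below are hypothetical; map them to your own schema.
from collections import defaultdict

FUNNEL = ["install", "tutorial_done", "first_target_action", "first_purchase"]

def funnel_conversion(events):
    """events: iterable of (user_id, event_name) tuples."""
    seen = defaultdict(set)                      # event name -> unique users
    for user_id, event in events:
        seen[event].add(user_id)
    base = len(seen[FUNNEL[0]]) or 1             # cohort size = installs
    return {step: round(len(seen[step]) / base, 2) for step in FUNNEL}

events = [
    (1, "install"), (1, "tutorial_done"), (1, "first_target_action"),
    (2, "install"), (2, "tutorial_done"),
    (3, "install"),
]
print(funnel_conversion(events))
# {'install': 1.0, 'tutorial_done': 0.67, 'first_target_action': 0.33, 'first_purchase': 0.0}
```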
3) Revenue and monetization
ARPDAU/ARPPU - daily revenue per active user / per paying user.
Conversion to payer, purchase frequency, and AOV (average order value).
LTV (lifetime value) by cohort. In practice: predict pLTV at D3/D7 (gamma/Weibull, BG/NBD, ML regression), then reconcile against actuals at D30/D60/D90 (a simplified extrapolation sketch follows this list).
Structure: revenue share by source (IAP, advertising, subscriptions), by segment, and by region.
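Where a full gamma/Weibull or BG/NBD model is not yet in place, a deliberately simplified first pass is to fit a decay curve to early cohort revenue and extrapolate. The sketch below does this in Python; all figures are illustrative:

```python
# Simplified pLTV sketch: fit an exponential decay to early daily revenue
# per install, then extrapolate and sum to D90. A stand-in for the
# gamma/Weibull or BG/NBD models named above, not a replacement for them.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b):
    return a * np.exp(-b * t)

days = np.arange(1, 8)                        # observed: D1..D7
rev_per_install = np.array([0.30, 0.22, 0.18, 0.15, 0.13, 0.12, 0.11])

(a, b), _ = curve_fit(decay, days, rev_per_install, p0=(0.3, 0.2))
horizon = np.arange(1, 91)                    # extrapolate to D90
pltv_d90 = decay(horizon, a, b).sum()
print(f"pLTV(D90) ~ ${pltv_d90:.2f}; reconcile against actuals at D30/D60/D90")
```

This is the code form of the cheat-sheet formula \(pLTV \approx \sum_{t=1}^{x} ARPDAU_t\) from section 15, with the decay adjustment made explicit.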
4) Marketing and payback
CPI (cost per install), CTR/IR (click-through and install rates of creatives), share of organic installs.
ROAS Dx (return on ad spend by day x) and payback (days to recoup acquisition cost).
CAC/LTV: scale the project only if LTV ≥ k·CAC (k ≥ 1.5-3 depending on risk and horizon); see the gate sketch after this list.
Attribution and incrementality: geo-splits, holdouts, and MMM (media mix modeling) as a fallback under tracking restrictions.
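A sketch of the scaling gate described above, combining the LTV ≥ k·CAC rule with a payback-day check; the thresholds and the ROAS series are illustrative assumptions:

```python
# Channel scaling gate sketch: scale spend only where pLTV >= k * CAC and
# payback lands inside the strategic horizon. All figures are illustrative.
def payback_day(cumulative_roas):
    """cumulative_roas: list where index t holds ROAS by day t+1, as a fraction."""
    for day, roas in enumerate(cumulative_roas, start=1):
        if roas >= 1.0:                        # ROAS Dx >= 100%
            return day
    return None                                # not paid back within the window

def should_scale(pltv, cac, k=2.0, payback=None, horizon_days=180):
    return pltv >= k * cac and payback is not None and payback <= horizon_days

roas_by_day = [0.15, 0.28, 0.39, 0.55, 0.70, 0.84, 0.97, 1.08]  # D1..D8
day = payback_day(roas_by_day)
print(day, should_scale(pltv=6.4, cac=2.9, k=2.0, payback=day))
# 8 True
```

Run the same gate per channel against the per-channel unit economics (pLTV_i, CPI_i) rather than blended averages.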
5) Technical health
Crash rate/ANR (Android), fps, p95/p99 latency of key APIs.
Server uptime, payment error rate, matchmaking and loading times.
Release stability: regression defects, fix turnaround time, share of safe rollbacks.
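A short sketch of how these health numbers can be computed from raw samples; the latency and crash figures below are synthetic:

```python
# Tech-health sketch: p95/p99 latency and crash rate from raw telemetry.
# Replace the synthetic data with your own export.
import numpy as np

latencies_ms = np.random.lognormal(mean=4.0, sigma=0.5, size=10_000)
p95, p99 = np.percentile(latencies_ms, [95, 99])

sessions, crashed_sessions = 120_000, 540      # illustrative counts
crash_rate = crashed_sessions / sessions

print(f"p95={p95:.0f} ms, p99={p99:.0f} ms, crash rate={crash_rate:.2%}")
```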
6) Reputation and quality of experience
Store rating, share of 1-star reviews, time to respond to reviews (<24 hours).
CSAT/NPS, support topics, average ticket resolution time.
Social signals: mentions, sentiment, reach.
7) Compliance and Responsible Gaming (RG)
Coverage of RG tools: time/expense limits, reality checks, self-exclusion.
Timeliness of interventions and reduction of risk patterns after intervention.
Privacy/age/geo policies: share of correctly enforced access blocks, absence of "dark patterns" in the UX.
8) Cohort analysis and curve reading
Build cohorts by installation date, channel, region, platform.
Ideal retention curve: a rapid drop that flattens into a stable plateau after D7/D14.
If D1 rises without corresponding D7/D30 growth, "sugar-rush" activation (short-lived incentives rather than durable value) is likely.
LTV curves: compare the area under the revenue and retention curves; avoid averaging - look at segments (see the cohort pivot sketch below).
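A cohort-retention pivot sketch with pandas, assuming a hypothetical event export with install and activity dates; adapt the column names to your schema:

```python
# Cohort retention sketch: rows = install-date cohorts, columns = days since
# install, values = share of the cohort still active (R_t from section 15).
import pandas as pd

events = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3, 3, 3],
    "install_date": pd.to_datetime(["2024-05-01"] * 5 + ["2024-05-02"] * 3),
    "active_date":  pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-08",
                                    "2024-05-01", "2024-05-02",
                                    "2024-05-02", "2024-05-03", "2024-05-09"]),
})
events["day"] = (events["active_date"] - events["install_date"]).dt.days

cohort_sizes = events.groupby("install_date")["user_id"].nunique()
actives = events.pivot_table(index="install_date", columns="day",
                             values="user_id", aggfunc="nunique")
retention = actives.div(cohort_sizes, axis=0)   # R_t per cohort
print(retention.round(2))
```

Slicing the same pivot by channel, region, or platform is what "look at segments" means in practice.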
9) Benchmarks by stage (guideposts, not dogma)
Soft launch (weeks 1-4): retention trending up, crash rate trending down.
Public launch (weeks 5-8): stable D7/D30, predictable CPI, ROAS D7 inside the planned corridor.
Stabilization (weeks 9-12): payback fits the strategic horizon, the share of technical issues in tickets falls, organic installs grow.
10) Experiments and statistics
A/B tests: fix the hypothesis, success metric, and stopping criteria in advance (a minimal significance-check sketch closes this article).
11) Financial model: the early months
P&L: revenue − payment/platform commissions − marketing − servers/content/support = operating margin.
Scenario analysis: baseline / optimistic / stress.
Unit economics by channel: pLTV_i, CPI_i, maximum sustainable acquisition bid.
12) First-order dashboards
1. Basics: DAU/WAU/MAU, D1/D7/D30, ARPDAU/ARPPU, conversion to payer, LTV cohorts.
2. Funnels: onboarding → activation → key features → monetization.
3. Marketing: CPI, ROAS Dx, payback, organics.
4. Tech: crash/ANR, latency, uptime, payment errors.
5. Quality: rating, NPS/CSAT, ticket topics, response time.
6. RG: coverage of limits, time-to-intervention, reduction of risk patterns.
13) Qualitative research (the "why" behind the metrics)
Usability sessions on key scenarios (5-8 respondents): where players get stuck.
JTBD interviews: what job the player is "hiring" the game to do.
In-product surveys: CES/CSAT after critical paths.
Review analysis: cluster the causes of 1-star reviews, ship quick fixes, and communicate them.
14) Decisions from outcomes: what to do if...
Low D1: speed up time-to-value, cut steps, improve the tutorial and empty states.
Good D1 but weak D7/D30: build reasons to return - events, seasons, social mechanics, "continue where you left off."
High CPI: rework creatives and targeting, invest in ASO, test new channels/geos.
ROAS falls short: concentrate spend on profitable channels, raise conversion to payer, work on ARPPU and purchase frequency.
Crashes/performance: prioritize targeted optimizations and release stability; use canary rollouts.
15) Mini formulas and cheat sheet
Sticky factor: DAU/MAU.
ARPDAU: daily revenue / DAU.
Payback (days): the minimum x at which ROAS Dx ≥ 100%.
pLTV estimate at Dx: \(pLTV \approx \sum_{t=1}^{x} ARPDAU_t\), adjusted for decay and seasonality.
Cohort retention: \(R_t = \frac{\text{actives on day } t}{\text{installs on day 0}}\).
16) The big success checklist
Goals and metrics
Instrumentation and data
Quality and tech
Research and UX
Experiments
Compliance/RG

Assessing the success of a new game is a cyclical process: formulate the value, measure the path to it accurately, improve through experiments, and maintain quality and ethics. Teams that look at their game through cohort metrics, pLTV, and technical stability make sound product decisions and reach payback without burning the budget.
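As referenced in section 10, here is a minimal significance-check sketch for an A/B test on a proportion metric such as D7 retention; the counts are illustrative, and the alpha threshold must be fixed before the test starts:

```python
# A/B sketch: two-proportion z-test on a binary success metric
# (e.g., retained at D7: yes/no). Counts below are illustrative.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided
    return z, p_value

z, p = two_proportion_z(success_a=410, n_a=2000, success_b=468, n_b=2000)
print(f"z={z:.2f}, p={p:.3f}")  # ship only if p < the alpha fixed in advance
```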