Version: 0.6.0
Author: Devon Stanton
Date: January 2026
License: CC BY 4.0
The video game industry relies on informal labels (Indie, AA, AAA) to describe projects and their development contexts. These terms are culturally defined rather than structurally defined, leading to inconsistency, unfair competition, and a lack of transparency.
The Game Project Classification Standard (GPCS) proposes a formal, bond-style rating system that rates projects, not studios.
Terminology: This whitepaper defines the Game Project Classification Standard (GPCS). In shorthand, the resulting per-project label is referred to as a GPC rating (e.g., A/I1 — Verified — v0.6). Drawing inspiration from financial credit ratings, GPCS uses the familiar AAA/AA/A/BBB/BB/B/C scale to indicate production capacity and resource backing. The framework evaluates contributing sources independently (studios, publishers, funders) and combines them into a composite project rating.
Rather than a single fuzzy label, projects are rated through:
The framework is:
This white paper outlines the problem, presents a comprehensive framework proposal, and invites testing, feedback, critique, and experimental implementation by interested stakeholders. The author plans to pilot GPCS through an awards programme first, gathering real-world implementation data before broader industry engagement. Version 0.6 will continue to evolve based on testing experience and community feedback.
APA Format: Stanton, D. (2026). Game Project Classification Standard (GPCS): A Bond-Style Rating System for Game Projects (Version 0.6). https://koldfu5ion.github.io/gpcs/
Implementation Attribution: Organisations implementing GPCS should credit: “Classification system based on Devon Stanton’s Game Project Classification Standard (GPCS v0.6)”
License: This work is licensed under CC BY 4.0. You are free to adapt and implement this framework with attribution.
Feedback and Contributions: The author welcomes feedback, critique, and proposals for refinement. Contact information and contribution guidelines are available at https://koldfu5ion.github.io/gpcs/
For platform operators, awards bodies, grant programmes, and publishers evaluating GPCS adoption.
A voluntary, transparent rating system that classifies game projects (not studios) based on production capacity and resource backing. Projects receive a dual-code rating indicating:
Example GPC Rating: A/I1 — Verified — v0.6
Certificate Breakdown:
10-minute self-rating form with 12 questions using bracketed ranges (no exact figures required):
No sensitive financial disclosure required. All inputs use safe ranges (e.g., “$1M-$5M” not “$2.3M”).
Three tiers:
Evidence Disclosure: If evidence cannot be provided for specific claims, those fields are marked “Undisclosed” or “Withheld” on the certificate. Projects cannot claim Verified status whilst withholding key evidence. Partial verification is transparent.
Awards bodies can require Verified+; grants can require Audited. Appropriate verification level depends on use case stakes.
1. Awards Bodies: Use GPCS capacity tiers for fair competition categories while keeping public-facing names intuitive, with the tier criteria shown as a subtitle or judging note:
2. Grant Programmes: Use GPCS source ratings for eligibility criteria
3. Platforms: Integrate GPC ratings into discovery and developer programmes
GPCS operates as B2B infrastructure—providing verifiable classification when precision matters. Consumer-facing marketing remains entirely within publisher control: “AAA experience,” “indie darling,” or “AAAA” are all fair game.
The value for publishers: when every competitor claims “AAA quality,” the term loses signal. A verified GPC rating provides credible backing for marketing claims, protects premium positioning from inflation, and enables strategic portfolio differentiation across tiers.
See full whitepaper for detailed methodology, governance, use cases, and references.
For skim readers: GPC ratings describe production capacity and resource backing. They do NOT measure:
Examples:
GPCS provides context, not judgement. A C-rated masterpiece is still a masterpiece. A AAA-rated failure is still a failure. The rating helps stakeholders understand the production scale and resource environment, not the outcome.
Companion Documents:
The video game industry has grown from a niche hobby into a global entertainment sector generating over $180 billion annually (Newzoo Global Games Market Report, 2025). Yet despite this maturation, the industry lacks fundamental infrastructure that other creative sectors take for granted: a standardised way to classify and compare projects based on their production contexts.
Film has its budget tiers and production categories. Music has major labels, independent labels, and self-released artists with clear distinctions. Publishing distinguishes between the Big Five, mid-size houses, and small presses. The game industry, by contrast, relies on vague, culturally defined terms that emerged organically and have never been formalised.
The terms “Indie,” “AA,” and “AAA” originated in the early 1990s as the industry began distinguishing between different scales of production. Initially useful shorthand, these labels have become increasingly inadequate as the industry has diversified. A solo developer working from their bedroom, a 30-person studio with venture capital backing, and a 15-person team bootstrapping their first commercial release might all describe themselves as “indie,” despite having fundamentally different resources, constraints, and contexts.
This terminology gap creates practical problems across the industry: unfair competition in awards, unclear eligibility for grants, inconsistent media coverage, and difficulty for studios to position themselves accurately. The Game Project Classification Standard (GPCS) addresses this gap by proposing a formal, multi-dimensional taxonomy built on verifiable structural criteria rather than subjective cultural labels.
The game industry has no standard system for classifying projects by production scale and resource backing. The labels we use, Indie, AA, and AAA, emerged informally and mean different things to different people.
This absence of structure creates real problems:
The industry deserves better.
There is no universal agreement on what Indie, AA, or AAA mean. Different sources cite different thresholds (Humble Games, 2022; Pearce, cited in Wikipedia):
The same studio could be classified differently depending on which criteria you prioritise.
Current classifications muddle together distinct variables:
| Dimension | What It Measures |
|---|---|
| Budget | How much money is spent |
| Team size | Number of developers |
| Funding source | Self-funded vs publisher-backed |
| Creative independence | Who controls creative decisions |
| Production values | Visual fidelity, scope, polish |
| Distribution | Digital-only vs retail presence |
A single label cannot meaningfully capture all of these. A 40-person studio with publisher funding is fundamentally different from a solo developer, yet both might call themselves “indie.”
The current system creates absurd groupings:
Without clear classification:
Case Study: The Game Awards 2025 – Megabonk
A nominated title was withdrawn from the “Best Debut Indie Game” category in November 2025 when the developer (Vedinad) clarified they had previously shipped games under other studio names. The developer stated: “It’s an honour, but I don’t think it qualifies for debut indie game… I’ve made games in the past under different studio names, so Megabonk is not my debut game” (Kotaku, 2025). The Game Awards producer Geoff Keighley confirmed the removal, marking the first voluntary withdrawal in the ceremony’s history.
This incident highlights how even high-profile awards lack consistent, verifiable definitions for terms like “debut” and, more broadly, “indie.” The eligibility determination fell to the creator’s integrity rather than transparent structural criteria, forcing retroactive corrections after public nominations were announced. GPCS-style structural classification combined with a simple “debut” eligibility flag (first commercial release from this legal entity/studio name) would prevent category mismatches before nominations, eliminating both the burden on creators and the credibility damage to awards bodies.
Current labels are static. There is no mechanism to:
Before diving into the framework’s technical details, here is how GPCS directly addresses each problem identified above:
| Problem | GPCS Solution |
|---|---|
| Studios cannot position themselves accurately | Verifiable source ratings provide clear, comparable positioning based on structural criteria (team size, infrastructure, track record), not subjective labels |
| Awards pit mismatched studios against each other | Tier-based categories ensure projects compete against peers with similar resource contexts—a C-rated solo project competes with other C-rated projects, not A-rated 30-person studios |
| Publishers and investors lack meaningful segmentation | Standardised source ratings enable portfolio filtering (“show me BBB-rated projects seeking AA publisher support”) and punch-above-weight identification |
| Grant programmes struggle to define eligibility | Clear structural criteria replace vague terms — “Eligible: C/B/BB-rated studios only” is verifiable and defensible |
| Press and analysts report inconsistently | Common terminology with transparent definitions enables consistent coverage and meaningful cross-source comparison |
| Players lack production context | GPC ratings communicate resource backing without requiring budget disclosure, helping players calibrate expectations |
The framework that follows is designed to deliver these solutions through transparent, voluntary, and non-invasive classification.
The GPCS addresses these failures through a source-based project rating system.
Unlike traditional approaches that classify studios as “indie” or “AAA,” GPCS rates individual projects. This reflects the reality that studios commit different resources to different projects, and the same studio may simultaneously have projects operating at different scales with different goals.
Studios do not have a single, fixed GPC rating; projects do. Each project receives a rating based on the sources contributing to it: the studio’s capacity, publisher support, funding backing, and other material contributions. Studios are evaluated as sources, but they are never classified or ranked in aggregate.
GPCS adopts the bond rating nomenclature familiar from financial markets: AAA, AA, A, BBB, BB, B, and C (with + and - modifiers where applicable). This simplified 7-tier public-facing scale maintains clarity and avoids over-segmentation at lower tiers, where distinctions are less meaningful for industry use cases.
Why borrow from financial ratings?
In financial markets, bond ratings indicate creditworthiness and default risk. Standard & Poor’s, Moody’s, and Fitch use these scales to communicate the quality and risk profile of debt instruments. AAA represents the highest quality and lowest risk; ratings decline through investment grade (BBB and above) into speculative grade (BB and below).
For game projects, GPCS borrows only the scale, not the underlying concept of financial risk. The ratings indicate production capacity and resource backing, not creditworthiness, commercial viability, or default probability.
Rationale for simplified lower tiers: Financial rating systems extend to CCC, CC, and C for granular distinctions in distressed debt scenarios. GPCS uses a single C tier for solo developers and micro-teams because further sub-segmentation provides little practical value in game industry contexts. The difference between a solo developer and a 2-person partnership is less relevant to awards categories, grant eligibility, or platform curation than the difference between C-tier and B-tier projects.
Key distinctions:
| Bond Ratings (S&P, Moody’s, Fitch) | GPCS Capacity Ratings |
|---|---|
| Assess creditworthiness and default risk | Assess production capacity and resource backing |
| Predict likelihood of repaying debt | Describe studio infrastructure and funding context |
| Used by investors to manage financial risk | Used by stakeholders to understand production scale |
| Lower rating = higher risk of default | Lower rating = smaller scale/fewer resources (not higher risk) |
What GPC ratings mean:
Why this nomenclature?
The game industry already uses “AAA” and “AA” informally, but without clear definitions or a complete scale. GPCS formalises this familiar terminology into a structured framework, avoiding the adoption friction that entirely new terminology would face. The scale is recognisable and intuitive, even though the underlying measurement differs from financial credit ratings.
Throughout this document, we refer to GPCS outputs as capacity ratings to distinguish them clearly from financial credit ratings, while acknowledging the borrowed nomenclature.
Projects are rated through a three-layer process:
This separation enables precise, verifiable assessment. A publisher cannot “make” a project AAA through marketing alone; the studio must have the capacity to execute at that scale. Conversely, a highly capable studio self-publishing will reflect that independence in the rating.
GPCS provides two complementary outputs for each project:
Why separate capacity from independence?
The “indie” discourse conflates two distinct dimensions: scale (how big is the production?) and independence (who controls creative decisions and owns the IP?). A large, well-funded studio can be creatively independent. A solo developer can be financially dependent on publisher deals. Separating these dimensions provides clarity:
Output format:
Example:
The independence marker is detailed in Section 5.5 and Section 6.4.
Prospective Rating: Projects can be rated before release, during production, or even at concept stage. This enables practical use cases that retrospective systems cannot support: grant eligibility, award category placement, publisher scouting, and studio positioning.
Transparent and Verifiable: All source rating criteria are public. Any stakeholder can understand how a project would be rated before participating.
Context-Aware: The same studio can have multiple projects with different ratings. A large studio may incubate a small experimental project (rated B or BBB) alongside a flagship title (rated AAA).
Non-Invasive: Projects are rated through bracketed ranges and structural indicators, not exact financial disclosure. Studios never need to reveal confidential business information.
Rating must be voluntary, verifiable, and based on proxy metrics rather than confidential financial disclosure.
Projects should not need to reveal exact budgets or revenue. Rating relies on safe, non-sensitive bands and transparent structural variables.
Rating must be prospective, not just retrospective.
Unlike systems that classify games after release based on output metrics, GPCS rates projects at any stage: before development begins, during production, or after release. This enables grant eligibility determination, award category placement, publisher scouting, and project positioning from day one.
Projects are rated individually; studios are sources.
A studio’s capacity contributes to a project’s rating, but studios are not themselves rated in aggregate. This reflects the reality of variable resource commitment across projects.
To support searchability, registry lookups, and durable references across databases, each project should be assigned a stable project identifier that is separate from its GPC rating label.
Canonical ID (GPC-ID):
- `gpc_id` (recommended: UUIDv7).

Short ID (for humans):

- A shorter, registry-friendly form derived from the canonical ID, used on badges and verification URLs (e.g., `01JH8M3KQ4`).
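A minimal sketch of how these identifiers might be generated, assuming the short ID is a Crockford base32 prefix of the canonical UUIDv7; the actual derivation is not specified in this document:

```python
import os
import time
import uuid

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # Crockford base32: no I, L, O, U

def new_gpc_id() -> uuid.UUID:
    """Build a UUIDv7 (RFC 9562): 48-bit Unix-ms timestamp + random bits."""
    ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    value = (ts_ms << 80) | int.from_bytes(os.urandom(10), "big")
    value = (value & ~(0xF << 76)) | (0x7 << 76)   # version nibble = 7
    value = (value & ~(0x3 << 62)) | (0x2 << 62)   # variant bits = 10
    return uuid.UUID(int=value)

def short_id(gpc_id: uuid.UUID, length: int = 10) -> str:
    """Derive a human-friendly short ID from the UUID's leading bits
    (an assumed scheme; the registry may define its own)."""
    n = gpc_id.int
    # 125 leading bits -> 25 five-bit Crockford chars; keep a prefix.
    chars = [CROCKFORD[(n >> shift) & 0x1F] for shift in range(123, -1, -5)]
    return "".join(chars[:length])

gid = new_gpc_id()
print(gid, short_id(gid))
```

Because UUIDv7 is time-ordered, registry entries sort chronologically by default, which suits durable database references.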
Example badge line:
`GPC A/I1 • Verified ●●○ • Release • v0.6 • ID: 01JH8M3KQ4`

Projects are rated by evaluating the contributing sources independently, then combining them. This section outlines the criteria used to rate each source type.
Studios are rated based on their capacity to execute a project, independent of external support. Studio ratings reflect infrastructure, track record, team size, financial stability, and operational maturity.
Key Variables:
| Variable | Description | Indicators |
|---|---|---|
| Team Size | Core development headcount dedicated to projects | 1–5 / 6–20 / 21–80 / 80–200 / 200+ |
| Infrastructure | Production capabilities, departmental structure, tools/pipelines | Ad-hoc / Emerging / Established / Industry-leading |
| Track Record | Shipped commercial titles, proven execution at scale | No releases / First title / Multiple titles / Proven AAA portfolio |
| Financial Health | Stability, ownership structure, funding runway | Self-funded / Grant-backed / Investor-backed / Owned by platform/major publisher |
| Geographic Footprint | Office presence, distributed capabilities | Home-based / Single office / Multiple offices / International presence |
Team Size Definition:
Team size counts core full-time equivalent (FTE) staff dedicated to projects. This includes:
Peak vs. Average: Use the average headcount during active development, not peak headcount at crunch or ramp-up periods. If team size varies significantly across development phases, use the sustained team size for the majority of production.
Multi-project studios: For studios working on multiple projects simultaneously, count only the headcount dedicated to the specific project being rated, not total studio headcount. If teams are shared across projects, allocate proportionally based on time/resource commitment.
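For illustration, a small sketch of the duration-weighted averaging and proportional allocation these rules describe; the phase structure and field names are assumptions, not part of the standard:

```python
def sustained_fte(phases: list[tuple[int, float]]) -> float:
    """Duration-weighted average headcount across development phases."""
    total_months = sum(months for months, _ in phases)
    return sum(months * fte for months, fte in phases) / total_months

# 3 months ramp-up at 6 FTE, 18 months production at 14, 2 months crunch at 20:
print(round(sustained_fte([(3, 6), (18, 14), (2, 20)]), 1))  # 13.5 -> "6-20" band

# Multi-project studios: shared staff allocated proportionally to this project.
dedicated = 10
shared = {"tech_art": (4, 0.50), "audio": (2, 0.25)}  # (people, share of time)
print(dedicated + sum(n * share for n, share in shared.values()))  # 12.5 FTE
```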
Studios are rated from AAA (200+ staff, proven track record, platform ownership or major publisher backing) down to C (solo developers, part-time hobbyist projects). See Section 6.1 for detailed tier definitions, examples, and modifier guidance.
Publishers and funders are rated based on their capacity to support and elevate a project through resources, distribution, and market reach.
Key Variables:
| Variable | Description | Indicators |
|---|---|---|
| Financial Scale | Typical budget ranges supported | <$50K / $50K–$250K / $250K–$1M / $1M–$5M / $5M–$30M / $30M+ |
| Market Reach | Distribution capabilities, retail presence, marketing | Digital-only / Regional retail / Global distribution / Platform co-marketing |
| Support Services | Production support, QA, localisation, live ops, community management | Advisory only / Basic services / Full production support |
| Platform Relationships | Preferential access, co-marketing deals, featured placement | Standard terms / Good standing / Preferential treatment |
| Portfolio Scale | Concurrent projects, breadth of support | 1–2 projects / 3–10 projects / 10+ concurrent projects |
Publishers/funders are rated from AAA (Sony, Microsoft, EA, Take-Two scale: $50M+ budgets, global reach, full services) down to C (micro-grants, community support, non-financial backing). Detailed tier definitions are provided in Section 6.
Publisher Transparency: Named vs. Confidential Publishers
GPCS distinguishes between publicly disclosed publisher relationships and undisclosed financial backing:
Named Publisher: The publisher relationship is publicly disclosed through press releases, publisher website listings, store pages, or official announcements. Examples:
Confidential Publisher: The project receives significant financial backing or publishing services, but the relationship is not publicly disclosed. This includes:
Self-Published: No external publisher or major funder involved. Project is funded through studio revenue, personal savings, small grants, or community funding (Kickstarter, Patreon).
What this dimension captures:
The Named/Confidential distinction is descriptive, not evaluative. It does not affect the publisher’s capacity rating (a Confidential AAA publisher is still rated AAA). Instead, it provides transparency about:
Important: Confidential Publishers operate under non-disclosure agreements or choose not to publicly announce their involvement during development. This is a standard business practice and does not imply deception, but projects should acknowledge backing exists even if the specific entity cannot be disclosed.
Publisher Entity Tier vs. Deal Contribution Tier
GPCS distinguishes between:
For project rating purposes, the Deal Contribution Tier is used. A AAA publisher providing limited support (small budget, minimal services) contributes at a lower tier than their entity rating. This prevents logo-only deals from artificially inflating project ratings.
The questionnaire captures contribution detail through:
These inputs are combined into a support intensity classification (Light Touch / Standard / Full). Section 7.3 documents exactly how each classification affects the ceiling constraint. Summary:
Deal contribution is treated as a continuous variable; adding services/funding during development triggers re-rating to reflect the new contribution tier.
Certificate/Badge Detail: Full project certificates show:
Shorthand code: The industry shorthand remains capacity-focused: “A/I1” (A-rated capacity, I1 independence tier). The certificate provides full context.
Beyond studios and publishers, additional sources may contribute to project ratings:
Platform Holder Support: ID@Xbox dev kit grants, PlayStation Partners support, Epic MegaGrants. Typically rated BB to BBB depending on scale of support.
Regional/Government Grants: Film Victoria, Creative Europe, CMF (Canada), UK Games Fund. Typically rated BBB to A depending on grant size and support services provided.
Community Funding: Kickstarter, Patreon, Fig. Rated B to BBB based on amount raised and ongoing community support structure.
Technology Partners: Epic (Unreal), Unity, middleware providers offering licensing deals. Treated as modifier influence rather than direct source rating (reduces effective costs, elevates production value potential).
Co-development partners and outsourcing studios represent significant production capacity that should be captured in project ratings. These entities provide dedicated headcount and specialized expertise that materially affects a project’s execution capability.
Definition: Co-development refers to external studios contracted to perform substantial portions of production work (art, engineering, audio, VFX, etc.) on an ongoing basis, distinct from the primary studio executing the project.
When to rate as co-development source:
Rating co-development partners:
Co-development sources are rated similarly to studio sources, based on:
| Variable | Description | Indicators |
|---|---|---|
| Headcount | Dedicated staff assigned to this project | 1–5 / 6–20 / 21–80 / 80–200 / 200+ |
| Service Scope | Breadth of production contribution | Art only / Engineering only / Multi-discipline / Full production partnership |
| Track Record | Proven delivery at scale | First project / Multiple projects / Proven AAA co-dev portfolio |
| Contract Duration | Length of engagement | <6 months / 6–18 months / 18+ months / Multi-year partnership |
| Infrastructure | Production capabilities and specialization | Generalist / Specialized (art, audio, VFX) / Full-service co-dev |
Co-development tier examples:
How co-development affects project rating:
When a project involves significant co-development:
Examples:
Implementation Note: Co-development rating is an evolving area. Version 0.6 recognises these scenarios and provides initial guidance. Future versions will refine based on industry feedback and real-world project submissions.
Outcome metrics reflect performance after a project launches. These are optional and enable reclassification over time, creating a lifecycle view rather than a static rating.
Important: Ratings Are Descriptive, Not Predictive
GPC ratings describe the production capacity and resource context available during development. They do not predict or guarantee commercial success, critical reception, or player traction. A C-rated project can outsell an AAA-rated project; a B-rated game can win more awards than an A-rated title. Outcome tracking exists to highlight projects that “punch above their weight” and to study correlations, not to retroactively adjust capacity ratings. Higher revenue or acclaim does not mean the project should have been rated higher; it simply means the team delivered exceptional results within its resource band.
| Variable | Description | Example Bands |
|---|---|---|
| Revenue | Commercial performance | R0: <$100K / R1: $100K–$1M / R2: $1M–$10M / R3: $10M–$50M / R4: $50M+ |
| Player Base | Audience reached | CCU, DAU, or units sold brackets |
| Critical Reception | Quality recognition | Review score bands (Metacritic, OpenCritic) |
| Growth Signals | Trajectory indicators | Hiring expansion / follow-up funding / sequel greenlit / DLC/live ops launched |
Rule 1: Outcomes NEVER change the original prospective rating
The project’s initial capacity rating (AAA, AA, A, etc.) is fixed at the time it is assigned and reflects the resources and backing available during development. Post-launch outcomes do NOT retroactively change this rating.
Example:
Why: The prospective rating describes production context, not outcome. Changing ratings retroactively based on success would confuse the framework’s purpose and make historical comparisons meaningless.
Rule 2: Outcomes MAY influence future project ratings from the same sources
When outcome metrics indicate significant growth or change, subsequent projects from the same studio may be rated differently based on evolved capacity.
Example:
Rule 3: Outcomes are used for research, benchmarking, and lifecycle tracking
Outcome metrics enable valuable analysis:
Rule 4: Re-rating during development refers to milestone-based updates, NOT post-launch changes
The whitepaper references “re-rating at milestones.” This means:
Allowed:
Not allowed:
Rationale: Re-rating during development reflects real changes in resources and backing. Post-launch outcomes reflect market performance, not production capacity.
Live service / long-running projects: Evergreen titles (e.g., Minecraft, PUBG, Escape from Tarkov) that continue active development for years can treat major structural events—studio acquisition, massive team expansion, or a shift to platform-holder ownership—as new milestones. When the ongoing development enters a materially different capacity tier (e.g., solo creator acquired by Microsoft and supported by a 200+ person team), the project may receive a new rating reflecting the updated sources, while the original pre-acquisition rating remains in the historical record.
Hollow Knight (Team Cherry):
This demonstrates how outcome tracking works: original rating fixed, but studio evolution recognised in future projects.
The independence marker provides a secondary classification dimension that addresses the “indie” discourse without conflating independence with capacity.
What independence measures:
Control rights dimension: Independence accounts for both IP ownership and decision-making authority. Control rights include:
Projects indicate control rights in the questionnaire (see Q14). Independence tiers are assigned only when both IP and control signals align with the tier definition. If a publisher retains veto rights over creative fundamentals, the project cannot be classified as I1 even if IP technically remains with the studio.
Independence tiers (I0–I3):
Important clarifications:
How independence is determined:
Projects self-report independence status based on:
Verification tiers (Unverified/Verified/Audited) apply to independence claims just as they do to capacity ratings. For Audited verification, contracts and ownership documents are reviewed by a third-party auditor to confirm the independence classification.
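A minimal sketch of the assignment rule above, combining the IP-ownership boundary (see Section 6.4) with the Q14 veto-rights rule; the input names are illustrative, not the questionnaire’s actual fields:

```python
def independence_tier(parent_owned: bool,
                      publisher_involved: bool,
                      studio_owns_ip: bool,
                      publisher_creative_veto: bool) -> str:
    if parent_owned:
        return "I3"  # wholly owned by platform holder / major publisher
    if not publisher_involved:
        return "I0"  # full creative and financial autonomy
    if studio_owns_ip and not publisher_creative_veto:
        return "I1"  # backed, but studio keeps IP and creative control
    # Publisher owns the IP, or retains veto over creative fundamentals (Q14).
    return "I2"

print(independence_tier(False, True, True, False))  # I1
print(independence_tier(False, True, True, True))   # I2: veto rights block I1
```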
Examples:
This section provides the detailed tier definitions for each source type. Section 5 introduced what variables are measured; this section specifies how those variables combine into tier classifications, with real-world examples and modifier guidance.
Each tier below specifies the typical indicators, real-world examples, and modifiers (+/-) that adjust classification at tier boundaries.
Indicators:
Examples: Ubisoft Montreal, EA DICE, Sony Santa Monica, Rockstar North, Bethesda Game Studios
Modifier guidance:
- `+` Expanding (hiring wave, new studio openings, parent company investment increase)
- `-` Contracting (layoffs, studio closures, restructuring, reduced parent investment)

Indicators:
Examples: Larian Studios (pre-Baldur’s Gate 3 scale-up), IO Interactive, Paradox Development Studio, Obsidian Entertainment
Modifier guidance:
- `+` Growing (recent commercial hit, active hiring, external validation, new funding round)
- `-` Challenges (project cancellations, funding concerns, key talent departures, revenue difficulties)

Indicators:
Examples: Many “AA game” developers, boutique studios spun out from AAA, successful indie scale-ups entering mid-tier production
Modifier guidance:
- `+` Scaling up (active hiring, second project greenlit, series funding secured, publisher deal signed)
- `-` Plateau (difficulty scaling beyond current size, funding runway shortening, hiring challenges)

Indicators:
Examples: Studios post-successful Kickstarter campaign, government grant recipients with commercial ambitions, second-time founders with industry experience
Modifier guidance:
- `+` External validation (grant awarded, publisher interest expressed, festival selections, community traction)
- `-` Funding uncertainty (runway <12 months, key staff transitioning to part-time, revenue concerns)

Indicators:
Examples: First-time studios with experienced individuals, game jam winners scaling up, established hobbyist teams transitioning to commercial
Modifier guidance:
- `+` Growing (converting contractors to full-time, grant applications advancing, community funding secured)
- `-` Unstable (team members departing, funding challenges, scope reduction required)

Indicators:
Examples: Passion projects transitioning to commercial ambitions, experienced developers going indie, student/hobbyist teams with commercial aspirations
Modifier guidance:
- `+` Transitioning to full-time, community traction growing, crowdfunding campaign successful
- `-` Losing momentum (members leaving, side-project status solidifying, progress slowing)

Indicators:
Examples: Solo developers, small partnerships, most itch.io creators, student projects, micro-teams (2-4 people working part-time)
Modifier guidance:
- `+` Building momentum (Patreon or community support established, planning transition to full-time)
- `-` At risk of abandonment (activity slowing, life circumstances changing, interest waning)

Publishers and funders are rated based on their capacity to support and elevate projects through resources, distribution, and market reach.
Indicators:
Examples: Sony Interactive Entertainment, Microsoft (Xbox Game Studios), Electronic Arts, Take-Two Interactive, Tencent Games, Activision Blizzard
Modifier guidance:
- `+` Increased investment in category (new publishing initiative launched, acquisition spree, expanded signing)
- `-` Pullback (studio closures, reduced signing activity, strategic shift away from category)

Indicators:
Examples: Devolver Digital, Annapurna Interactive, Private Division, Paradox Interactive (publishing arm), Team17
Modifier guidance:
- `+` Growing (recent commercial hits, expanding catalogue, new funding round, increased market visibility)
- `-` Challenges (financial difficulties, key personnel departures, reduced signing pace, portfolio underperformance)

Indicators:
Examples: Raw Fury, Fellow Traveller, Humble Games (historically), indie-focused venture capital funds, regional publishers with international ambitions
Modifier guidance:
- `+` Expanding (new funding secured, successful releases building reputation, team growth)
- `-` Uncertainty (financial constraints, reduced signing pace, portfolio concentration risk)

Indicators:
Examples: Regional publishers, indie collectives with funding capacity, platform-specific fund programmes (e.g., ID@Xbox fund grants)
Modifier guidance:
- `+` Growing validation (successful releases from portfolio, expanding support capabilities)
- `-` Struggling to provide value (limited resources, portfolio challenges, market position unclear)

Indicators:
Examples: Small indie labels, regional grant programmes, angel investors with games industry focus
Indicators:
Examples: Micro-grants, mentorship programmes, small angel investors, friends-and-family funding
Indicators:
Examples: Community support, small crowdfunding campaigns, friends and family contributions, micro-grants
Projects are rated by combining source ratings. These examples illustrate how different source combinations produce project ratings.
Disclaimer: Examples are illustrative only and subject to change based on updated information. Ratings reflect studio/publisher status at the time of the project, not current status. Examples may be time-bound (e.g., Obsidian’s capacity pre-Microsoft acquisition differs from post-acquisition).
| Studio Rating | Publisher Rating | Other Sources | Likely Project Rating (Capacity) | Independence | Example Scenario |
|---|---|---|---|---|---|
| AAA | AAA | Platform co-marketing | AAA | I3 | First-party AAA title: Sony Santa Monica + SIE publishing God of War Why: 200+ staff, full AAA infrastructure, platform holder backing, global marketing Independence: Owned by Sony (I3) |
| AA | AA | Regional grants | AA | I1-I2 | Mid-tier publisher + established studio: Paradox publishing Obsidian’s Pillars of Eternity (pre-Microsoft acquisition) Why: 80-150 staff at time of development, proven track record, AA publisher with strong PC distribution, regional grant support Independence: Publisher-backed but studio retained IP (I1), though later sequels more publisher-driven (I2) |
| A | None (self-publishing) | Epic MegaGrant ($250K) | A-/BBB+ | I0 | Capable studio self-publishing: Supergiant Games’ Hades Why: 30-50 staff, strong infrastructure from prior releases, self-funded with grant supplement, proven track record (Bastion, Transistor, Pyre) Independence: Self-published, studio owns IP, no publisher control (I0) |
| BBB | A | Kickstarter ($500K) | A-/BBB+ | I1 | Small studio with strong publisher: Obsidian + Paradox on Pillars of Eternity 1 Why: 15-25 person team at project start, experienced leads from AAA backgrounds, A-tier publisher (Paradox) providing distribution/marketing, community funding validation Independence: Studio retained IP, publisher provided support but not creative control (I1) |
| BB | BBB | Government grant ($100K) | BBB | I0-I1 | First-time team with regional publisher and grant support Why: 10-15 person team, first commercial release, regional publisher provides distribution, government grant covers partial development costs Independence: Depends on publisher deal structure (I0 if distribution-only, I1 if publisher has limited approval rights) |
| B | None | Patreon ($2K/month) | B+/BB- | I0 | Part-time solo dev with community support: most successful itch.io developers Why: 5-8 person part-time team, modest community funding, minimal formal infrastructure, learning/hobbyist origins transitioning to commercial Independence: Self-funded through community, no publisher (I0) |
| C | None | None | C | I0 | Solo hobbyist passion project with no external support Why: 1-3 person team, part-time/hobby development, no commercial track record, self-funded from personal savings Independence: Complete autonomy, no external dependencies (I0) |
Notes on examples:
Detailed combination methodology is explained in Section 7.
Independence tiers (I0–I3) provide a secondary classification of creative and financial autonomy, distinct from capacity ratings.
Definition: Complete creative and financial autonomy with no external publisher or major funder control.
Characteristics:
Verification indicators:
Examples:
Definition: Publisher provides funding and/or distribution, but studio retains creative control and often IP ownership.
Characteristics:
Boundary Rule: If the studio retains IP ownership (or co-ownership), the project is I1, regardless of publisher involvement level. IP ownership is the primary differentiator between I1 and I2.
Verification indicators:
Examples:
Definition: Publisher exercises significant creative control, often owns IP, and drives key design decisions.
Characteristics:
Boundary Rule: If the publisher owns the IP (or has exclusive commercial rights effectively equivalent to ownership), the project is I2, regardless of studio creative autonomy. IP ownership is the primary differentiator between I1 and I2.
Verification indicators:
Examples:
Definition: Studio is wholly owned by a platform holder or major publisher and operates within parent company structure.
Characteristics:
Verification indicators:
Examples:
Note on Dual Output Format:
When independence marker is used, projects receive dual codes:
The independence marker is OPTIONAL. Projects focused solely on capacity can use capacity rating alone (e.g., “A project” without independence code). Independence classification is most relevant for contexts where “indie” identity matters (awards, grants, player perception).
Quick Start: Want to see what you’d actually fill out? Skip to Section 7.8 Project Rating Form — it takes under 2 minutes. The methodology below explains how your answers become a rating.
Projects are rated through a three-step process:
This process can occur at any stage: concept, pre-production, production, or post-release.
```mermaid
flowchart TB
subgraph Sources["1. Source Ratings"]
Studio["Studio Source<br/>(Team size, infrastructure,<br/>track record, financial health)"]
Publisher["Publisher/Funder Source<br/>(Financial scale, market reach,<br/>support services, portfolio)"]
Other["Other Sources<br/>(Grants, platform support,<br/>co-dev partners, community funding)"]
end
subgraph Combination["2. Weighted Combination"]
Studio -->|50-60% weight| Calc["Weighted Score Calculation<br/>(AAA=95, AA=85, A=75, etc.)"]
Publisher -->|30-40% weight| Calc
Other -->|10-20% weight| Calc
end
subgraph Constraints["3. Apply Constraints"]
Calc --> Floor["Floor Constraint<br/>(Max +2 tiers above studio)"]
Floor --> Ceiling["Ceiling Constraint<br/>(Min -1 tier below highest source)"]
end
subgraph Output["4. Project Rating Output"]
Ceiling --> CapacityRating["Capacity Rating<br/>(AAA/AA/A/BBB/BB/B/C)"]
Ceiling --> IndependenceMarker["Independence Marker<br/>(I0/I1/I2/I3)<br/>[OPTIONAL]"]
CapacityRating --> FullCode["Full Code: Capacity/Independence<br/>(e.g., A/I0 or AAA/I3)"]
IndependenceMarker --> FullCode
end
subgraph PostLaunch["5. Post-Launch (Optional)"]
FullCode -.->|After release| OutcomeTracking["Outcome Metrics<br/>(R0-R4 revenue, player base,<br/>critical reception, growth signals)"]
OutcomeTracking -.->|Does NOT change<br/>original rating| OriginalRating["Original Rating Preserved"]
OutcomeTracking -.->|MAY influence| FutureProjects["Future Project Ratings<br/>(if studio capacity evolved)"]
end
style CapacityRating fill:#4CAF50,stroke:#2E7D32,color:#fff
style IndependenceMarker fill:#2196F3,stroke:#1565C0,color:#fff
style FullCode fill:#FF9800,stroke:#E65100,color:#fff
style OutcomeTracking fill:#9C27B0,stroke:#6A1B9A,color:#fff
```
Diagram Legend:
Flow Description (for environments without Mermaid rendering):
The GPC rating process follows six steps:
Studios provide information about:
Publishers/Funders provide information about:
Other Sources (grants, platform support, community funding):
All inputs use bracketed ranges, not exact figures. No confidential financial disclosure is required.
Once sources are independently rated, they are combined to produce a project rating. GPCS uses a weighted floor-and-ceiling model that recognises both studio capacity constraints and external resource elevation.
Core principle: Studio capacity sets a practical floor; external resources can elevate but not infinitely.
Weighting:
Rule: Other Sources (grants, platform support, community funding, technology partnerships) are capped at 20% maximum weight in the combination formula, UNLESS they represent production capacity equivalent to >30% of the studio’s headcount.
Exception for Co-Development: If a co-development partner contributes significant production capacity (e.g., art outsourcing studio handling 40% of asset production, or full co-dev partner providing 20+ additional headcount), that co-dev source may be weighted higher than 20%. In these cases:
Rationale: Grants and platform support provide financial/resource elevation but do not directly execute production work. Co-development partners, by contrast, are production capacity and warrant higher influence on project rating.
Example:
Calculation:
The second approach more accurately reflects the project’s actual production capacity.
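A sketch of the cap-and-exception test described above. The 20% cap and the 30%-of-headcount threshold are the stated v0.6 rules; the proportional weighting bump and the 40% upper bound are illustrative assumptions, since the whitepaper does not prescribe exact handling:

```python
def other_source_weight(codev_headcount: int, studio_headcount: int,
                        base_weight: float = 0.20) -> float:
    """Cap non-studio sources at 20% unless a co-dev partner supplies
    production capacity above 30% of the studio's own headcount."""
    capped = min(base_weight, 0.20)
    if codev_headcount > 0.30 * studio_headcount:
        # Treat the co-dev partner as production capacity: weight it in
        # proportion to its headcount share (assumed), bounded at 40%.
        share = codev_headcount / (codev_headcount + studio_headcount)
        return min(max(capped, share), 0.40)
    return capped

print(other_source_weight(codev_headcount=12, studio_headcount=20))  # 0.375
```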
Publisher/funder contribution is determined by blending development funding commitment (Q7) with the service bundle and scope (Q10). These are facts studios actually know (and can share without breaking NDAs).
| Support Intensity | Funding Signal (Q7) | Service Signal (Q10) | Resulting Contribution Behaviour |
|---|---|---|---|
| Light Touch | No development funding (services/marketing only) | ≤1 service, all marked Basic scope | Treated as support-only. Contribution tier reflects limited services and no ceiling is applied. |
| Standard Support | Partial funding or milestone-based co-funding | 2–3 core services at least Standard scope (e.g., marketing + QA + localisation) | Contribution tier elevated. Ceiling enforced at Contribution Tier − 2 tiers. |
| Full Support | Full development funding committed | ≥4 services with at least one service marked Extensive scope (embedded QA, AAA-scale marketing, live ops, etc.) | Contribution tier reflects full publisher weight. Ceiling enforced at Contribution Tier − 1 tier. |
Service scope capture: For each service ticked in Q10, projects indicate whether the publisher provides Basic (advisory/minimal), Standard (typical support), or Extensive (embedded/enterprise-level) delivery. This produces enough structure to classify support intensity without requesting exact dollar figures.
Decision logic:
Projects re-submit the questionnaire whenever funding/services change so the contribution tier and support intensity stay accurate throughout development.
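A minimal sketch of the decision logic in the table above; boundary cases the table does not cover (for example, no funding but many extensive services) default to Light Touch here, which is an assumption:

```python
def support_intensity(dev_funding: str, services: dict[str, str]) -> str:
    """dev_funding: 'none' | 'partial' | 'full' (the Q7 bracket).
    services: Q10 ticks mapped to scope 'basic' | 'standard' | 'extensive'."""
    extensive = sum(1 for scope in services.values() if scope == "extensive")
    standard_up = sum(1 for scope in services.values()
                      if scope in ("standard", "extensive"))
    if dev_funding == "full" and len(services) >= 4 and extensive >= 1:
        return "Full Support"      # ceiling: contribution tier - 1
    if dev_funding == "partial" and standard_up >= 2:
        return "Standard Support"  # ceiling: contribution tier - 2
    return "Light Touch"           # support-only: no ceiling applied

print(support_intensity("partial",
      {"marketing": "standard", "qa": "standard", "localisation": "basic"}))
# -> Standard Support
```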
Calculation:
Example:
Floor constraint: Project rating cannot exceed studio rating by more than 2 full tiers, even with exceptional external support. A C-rated studio cannot produce a AAA-rated project; infrastructure constraints are real.
Ceiling constraint (support-weighted): The project rating cannot fall below the highest Committed Contribution Tier adjusted by the support-intensity rules above. The contribution tier reflects the support actually committed to this project (funding + services + scope), not the publisher’s theoretical maximum capacity. If contribution increases materially (new funding, expanded co-development, major additional services), the project may be re-rated at that milestone.
| Support Intensity | Example Deal Pattern | Ceiling Behaviour |
|---|---|---|
| Light Touch | Marketing/distribution only, advisory services, no dev funding | No ceiling (project relies on weighted score + studio floor) |
| Standard Support | Partial funding plus QA + marketing, or 2–3 services at Standard scope | Ceiling at Contribution Tier − 2 tiers (AA contribution → ≥BBB) |
| Full Support | Full funding + ≥4 services with at least one Extensive delivery (embedded QA, full live ops, AAA marketing) | Ceiling at Contribution Tier − 1 tier (AAA contribution → ≥AA-) |
Worked scenario (8-person BB studio + AAA publisher):
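A minimal sketch of how this scenario might resolve under the v0.6 defaults. Three assumptions for illustration: a 55/35/10 weight split within the Section 7.3 ranges, the publisher’s entity tier standing in for the Committed Contribution Tier, and floor-over-ceiling precedence when the two constraints conflict:

```python
TIER_SCORE = {"AAA": 95, "AA": 85, "A": 75, "BBB": 65, "BB": 55, "B": 45, "C": 30}
TIERS = ["C", "B", "BB", "BBB", "A", "AA", "AAA"]  # low -> high

def score_to_tier(score: float) -> str:
    bounds = [(40, "C"), (50, "B"), (60, "BB"), (70, "BBB"),
              (80, "A"), (90, "AA"), (101, "AAA")]
    return next(tier for upper, tier in bounds if score < upper)

def combine(studio: str, contribution: str, ceiling_offset: int | None,
            weights: tuple[float, float, float] = (0.55, 0.35, 0.10)) -> str:
    ws, wp, wo = weights
    # No third source in this scenario, so the studio fills the "other" slot.
    score = (ws + wo) * TIER_SCORE[studio] + wp * TIER_SCORE[contribution]
    idx = TIERS.index(score_to_tier(score))
    if ceiling_offset is not None:          # 1 = Full, 2 = Standard, None = Light
        idx = max(idx, TIERS.index(contribution) - ceiling_offset)
    # Floor applied last: at most 2 full tiers above studio capacity.
    # Giving the floor precedence over the ceiling is this sketch's assumption.
    idx = min(idx, TIERS.index(studio) + 2)
    return TIERS[idx]

# 8-person BB studio, AAA contribution at Full Support:
print(combine("BB", "AAA", ceiling_offset=1))
# weighted score 69 -> BBB; ceiling lifts to AA; floor (BB + 2) binds -> 'A'
```

Note the scenario illustrates why constraint precedence matters: the Full Support ceiling pulls the rating up toward AA, while the studio floor caps it at A.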
The numerical mappings and constraints specified above represent Version 0.6 defaults pending industry calibration and real-world testing (see Section 7.10). These values were selected based on:
Tier score assignments (AAA=95, AA=85, A=75, etc.):
Weight ratios (Studio 50-60%, Publisher 30-40%, Other 10-20%):
Constraint thresholds (floor +2 tiers, ceiling -1 tier):
Version 0.6 Status: These constants will be validated through the structured calibration plan outlined in Section 7.10, which includes:
The methodology is designed to be adjusted without invalidating existing ratings. Version changes will be documented and historical ratings preserved with their original methodology version noted.
The standard weighting model assumes the rated studio performs the majority of development internally. When significant external production capacity is involved, adjustments may be required:
Scenario 1: Small studio + large co-development partner
Scenario 2: Publisher-owned support studio handling most production
Scenario 3: Outsourcing-heavy production (art, audio, VFX specialists)
Implementation Note: Co-development and outsourcing adjustments will be formalised in future versions based on industry feedback. Version 0.6 recognises these cases exist but does not prescribe exact handling. Projects with significant co-dev relationships should disclose this context in their rating submission for appropriate classification.
Considered alternatives:
The floor-and-ceiling model balances these concerns, reflecting industry reality: studios need capacity to execute, but strong external support enables projects beyond solo studio capability.
Why these specific numbers? The constants chosen reflect a hypothesis about industry structure that requires validation. Alternative values (e.g., tighter floor constraint of +1 tier, different weight ratios) may prove more accurate as adoption data accumulates. The framework prioritises transparency about these assumptions rather than claiming false precision.
Self-publishing: When no publisher is involved, studio rating heavily influences project rating (70% weight). Other sources (grants, community funding, platform support) fill the remaining 30%.
Multiple funders: When multiple publishers or funders support a project, their ratings are averaged before applying the weighting model.
Modifiers (+/-): Applied when sources are at transition points (e.g., studio actively hiring, publisher experiencing growth). Modifiers adjust final rating by approximately half a tier.
Genre considerations: Rating reflects resources and capacity, not scope ambition. A narrative walking simulator and an open-world RPG can share a rating if resource backing is equivalent.
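The edge cases above can be sketched directly. The 70/30 self-publishing split, funder averaging, and roughly half-tier modifier follow the stated rules; the base weights and helper structure are illustrative:

```python
TIER_SCORE = {"AAA": 95, "AA": 85, "A": 75, "BBB": 65, "BB": 55, "B": 45, "C": 30}

def blended_score(studio: str, funders: list[str], others: list[str],
                  modifier: int = 0) -> float:
    other_avg = (sum(TIER_SCORE[o] for o in others) / len(others)
                 if others else TIER_SCORE[studio])
    if funders:
        # Multiple funders: average their ratings before weighting.
        funder_avg = sum(TIER_SCORE[f] for f in funders) / len(funders)
        score = 0.55 * TIER_SCORE[studio] + 0.35 * funder_avg + 0.10 * other_avg
    else:
        # Self-publishing: studio 70%, other sources fill the remaining 30%.
        score = 0.70 * TIER_SCORE[studio] + 0.30 * other_avg
    return score + 5 * modifier  # +/- modifier shifts ~half a 10-point tier

print(blended_score("A", [], ["BBB"]))         # 72.0 -> A- band (Hades-style)
print(blended_score("BBB", ["A", "BBB"], []))  # funders averaged -> 66.75
```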
| Required | Optional |
|---|---|
| Studio information (size, structure, track record) | Project website or Steam page |
| Publisher/funder information (if applicable) | Team member LinkedIn profiles |
| Other source information (grants, platform support) | Press kit |
| Intended platform(s) and release window | Development blog or social media |
GPCS uses a self-certification model. Studios and publishers select the brackets that best describe their project, and that selection becomes the basis for classification. We trust companies to represent themselves accurately.
This is: A self-declaration of your project’s production scale and backing, using broad brackets rather than exact figures.
This is not: An audit, due diligence, or financial disclosure. We don’t ask for contracts, budgets, or sensitive business information.
Think of it like declaring your weight class before a competition — you pick the category that fits, and you compete in that category.
| We Store | We Don’t Store |
|---|---|
| Bracket selection (e.g., “30-80 people, $5M-$30M”) | Exact figures (e.g., “$12.7M budget, 47 FTE”) |
| Source names if disclosed (e.g., “Publisher: Devolver”) | Deal terms, revenue splits, contract details |
| Independence level (e.g., “Mostly independent”) | Legal agreements or ownership documents |
| Verification tier selected | Confidential financial records |
Your exact budget, team composition, and deal terms remain private. The certificate displays your bracket classification, not your financials.
To reduce friction, GPCS maintains a registry of pre-rated sources. When a known publisher, studio, or funding body is involved, their rating is already established — you simply select them rather than re-answering detailed questions.
Pre-rated sources include:
For new or unlisted entities: A more detailed form captures the information needed to establish their registry entry. Once rated, they’re available for future projects.
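A sketch of how a registry lookup might work during form fill; the entries shown reuse tiers quoted elsewhere in this document and are illustrative, not real registry data:

```python
# Hypothetical pre-rated source registry (slugs and schema are assumptions).
REGISTRY = {
    "devolver-digital": {"type": "publisher", "tier": "AA"},
    "epic-megagrants":  {"type": "grant",     "tier": "BBB"},
}

def lookup(slug: str) -> dict | None:
    """Return the pre-rated entry, or None to trigger the new-entity form."""
    return REGISTRY.get(slug)

entry = lookup("devolver-digital")
print(entry["tier"] if entry else "unlisted: complete the detailed source form")
```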
The core form requires four questions. Most projects can be classified in under two minutes.
1. Project Scale
Which bracket best describes this project’s production scale?
| Bracket | Team Size | Typical Budget | Rating |
|---|---|---|---|
| Solo / Micro | 1–4 people | <$250K | C |
| Small | 5–15 people | $250K–$1M | B/BB |
| Growing | 15–30 people | $1M–$5M | BBB |
| Established | 30–80 people | $5M–$30M | A |
| Large | 80–200 people | $30M–$100M | AA |
| Major | 200+ people | $100M+ | AAA |
Select the bracket that best fits. If team size and budget suggest different brackets, select the higher of the two.
2. External Support
Did this project receive external funding or publishing support?
For known publishers/funders, select from the registry and their pre-rated values auto-populate. For unlisted sources, provide bracket-level details.
If publisher/funder selected:
What level of support did they provide for this project?
This is project-specific. The same publisher may offer different deals to different projects.
3. Creative Independence
How would you describe your creative independence on this project?
| Selection | Description | Independence Marker |
|---|---|---|
| Fully independent | We own the IP and make all creative and business decisions | I0 |
| Mostly independent | External funding/publishing, but we retain creative control and own or co-own IP | I1 |
| Collaborative | Shared decision-making with publisher/funder; they have approval rights on key decisions | I2 |
| Publisher-led | Publisher owns IP and/or drives major creative and release decisions | I3 |
4. Additional Sources (Optional)
List any other sources that contributed to this project:
Select from registry or add new. These are factored as supplementary sources in the rating calculation.
When a studio or publisher isn’t in the registry, additional questions establish their profile. This only needs to be completed once — subsequent projects can reference the existing entry.
Studio Profile Questions:
Publisher Profile Questions:
These questions use the same bracket-selection approach — no exact figures required.
Project: “Starlight Venture”
| Question | Response | Rating Contribution |
|---|---|---|
| Project Scale | Established (30-80 people, $5M-$30M) | A (75 points) |
| External Support | Yes — Publisher: Devolver Digital | AA (85 points, from registry) |
| Support Level | Partial funding | Weighted at 35% |
| Independence | Mostly independent | I1 |
| Additional Sources | Epic MegaGrant ($250K bracket) | BBB (65 points) |
Calculation:
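A minimal sketch of the arithmetic, assuming a 50/35/15 weight split (a point within the Section 7.3 ranges: studio 50-60%, publisher 30-40%, other 10-20%):

```python
# Midpoint scores from the registry/table above: A = 75, AA = 85, BBB = 65.
studio, publisher, grant = 75, 85, 65
score = 0.50 * studio + 0.35 * publisher + 0.15 * grant
print(score)  # 77.0 -> within the A band (72.5-77.5)
# Constraints: the floor (studio A + 2 tiers) and the ceiling (AA contribution
# at Standard Support -> minimum BBB) are both non-binding, so capacity = A.
```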
Final Output: A/I1 — Self-Certified — v0.6
Certificate displays:
For reference, the numerical score mappings used in calculations:
| Letter Grade | Score Range | Midpoint |
|---|---|---|
| AAA | 90–100 | 95 |
| AA+ | 87.5–90 | 88.75 |
| AA | 82.5–87.5 | 85 |
| AA- | 80–82.5 | 81.25 |
| A+ | 77.5–80 | 78.75 |
| A | 72.5–77.5 | 75 |
| A- | 70–72.5 | 71.25 |
| BBB+ | 67.5–70 | 68.75 |
| BBB | 62.5–67.5 | 65 |
| BBB- | 60–62.5 | 61.25 |
| BB+ | 57.5–60 | 58.75 |
| BB | 52.5–57.5 | 55 |
| BB- | 50–52.5 | 51.25 |
| B+ | 47.5–50 | 48.75 |
| B | 42.5–47.5 | 45 |
| B- | 40–42.5 | 41.25 |
| C | 0–40 | 30 |
Note: These are Version 0.6 defaults pending industry calibration and real-world testing (see Section 7.10 for detailed calibration plan). The methodology is designed to be adjusted based on practical validation and stakeholder feedback.
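For implementers, the bands above translate directly into a table-driven lookup. Treating each upper bound as exclusive is an assumption, since the table leaves boundary membership unspecified:

```python
BANDS = [  # (exclusive upper bound, grade); C absorbs everything below 40
    (40.0, "C"), (42.5, "B-"), (47.5, "B"), (50.0, "B+"),
    (52.5, "BB-"), (57.5, "BB"), (60.0, "BB+"),
    (62.5, "BBB-"), (67.5, "BBB"), (70.0, "BBB+"),
    (72.5, "A-"), (77.5, "A"), (80.0, "A+"),
    (82.5, "AA-"), (87.5, "AA"), (90.0, "AA+"), (101.0, "AAA"),
]

def letter_grade(score: float) -> str:
    return next(grade for upper, grade in BANDS if score < upper)

assert letter_grade(77.0) == "A"      # 72.5 <= 77.0 < 77.5
assert letter_grade(68.0) == "BBB+"
print(letter_grade(88.0))             # AA+
```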
GPC ratings can be verified at three distinct levels, each providing a different degree of confidence in the accuracy of self-reported information. Verification is optional but recommended for projects seeking maximum credibility.
Definition: Self-reported project information with no external validation.
Process:
Appropriate for:
Limitations:
Definition: Self-reported information supported by non-sensitive public evidence.
Process:
Appropriate for:
Evidence Strength (Verified ratings only):
Verified ratings include an Evidence Strength indicator (Strong / Moderate / Limited) that reflects the quality and recency of public evidence reviewed. This does not change the rating outcome; it communicates confidence in the verification basis.
Limitations:
Evidence Disclosure Policy:
If a studio or publisher chooses not to provide evidence for specific claims, those claims are handled in one of two ways:
This approach prevents “confidentiality as a credibility weapon” — projects cannot claim Verified status while withholding key evidence. Partial verification is transparent: publicly evidenced claims are marked Verified; undisclosed claims are marked accordingly.
What happens when claims are false:
Definition: Self-reported information validated by independent third-party auditor with access to confidential materials.
Process:
Appropriate for:
Business model opportunity:
What happens when claims are false:
To discourage “rating inflation” and showcase good-faith adoption patterns, stakeholders have proposed tying minimum verification levels to the resulting capacity rating:
Live-service renewal: Projects operating as SaaS/evergreen services (ongoing monetisation, continuous content drops) should reaffirm their verification status on a 12-month cadence:
These practices are not core requirements yet; they serve as implementation guidance for registries, awards bodies, and grant makers looking to prevent stale badges and detect structural shifts early. Feedback on the cadence, fee expectations, and enforcement model is welcome from stakeholders.
| Aspect | Unverified | Verified | Audited |
|---|---|---|---|
| Evidence required | None | Public/semi-public only | Confidential materials reviewed |
| Evidence Strength displayed | No (self-report) | Yes (●●●/●●○/●○○) | Implicitly Strong (or shown as “Strong (Audited)”) |
| Who reviews | No one | GPCS registry (spot-checking) | Independent third-party auditor |
| Financials disclosed | No | No | Yes, to auditor only (not public) |
| Credibility level | Low | Moderate | High |
| Cost | Free | Free | Audit fee (paid to auditor) |
| Time required | <10 minutes | Days to weeks | Weeks to months |
| False claim consequences | Rating removed | Rating removed + flag + ban | Rating removed + flag + ban + auditor liability |
The GPCS framework defines WHAT verification tiers are and WHY they matter. Operational details (how auditors are accredited, pricing structures, certification bodies) will be defined in a separate implementation guide as the framework matures.
Future considerations:
Key principle: Audited tier enables confidential disclosure to trusted third parties without public financial exposure, balancing transparency with business confidentiality.
Public Badge Display:
```
┌─────────────────────────────────┐
│ GPCS CAPACITY RATING │
│ │
│ A / I1 │
│ Verified ●●○ • Release │
│ v0.6 │
│ │
│ [QR Code: Verify at │
│ gpcs.org/verify/01JH8M3K] │
└─────────────────────────────────┘
```
Full Certificate Breakdown (Accessed via QR Code or Registry):
Implementation Note: Actual badge designs, QR code infrastructure, and registry interface are planned for post-v1.0 implementation guides.
The numeric weights, tier score ranges, and floor/ceiling constraints specified in Section 7.3 are Version 0.6 defaults representing initial hypotheses. These will be validated and refined through structured calibration before the v1.0 standard release.
Test Dataset:
Validation Approach:
Governance Rules:
Transparency Commitment:
Timeline:
This calibration plan represents our initial approach and is explicitly framed as a test bed subject to stakeholder scrutiny and adjustment. Before executing the calibration exercise, the methodology will be shared with potential adopters (platforms, awards bodies, grant programmes, publishers) to incorporate their requirements and ensure the validation approach meets real-world needs.
The goal is not perfection on first attempt, but a transparent, improvable process that builds credibility through demonstrated rigour and openness to refinement.
GPCS is designed primarily as professional infrastructure for B2B transactions requiring precise, verifiable classification. The framework serves stakeholders making operational decisions (grant allocation, awards adjudication, publisher due diligence, platform partner segmentation) where resource context matters materially. Consumer-facing applications are possible but secondary.
Problem: Government and institutional grant programmes struggle with unclear eligibility criteria. Terms like “emerging developer” or “independent studio” lack precise definitions, leading to mismatched applicants, difficult adjudication decisions, and grants reaching well-funded teams instead of intended recipients.
Solution: Define grant eligibility using GPCS source ratings with transparent thresholds:
Why this works: Structural criteria (team size, infrastructure, funding sources) are verifiable through public evidence or simple audits. Applicants complete a standardised questionnaire instead of lengthy financial disclosure. Adjudicators evaluate against clear thresholds instead of subjective assessments.
Implementation pathway: Grant bodies can pilot GPCS eligibility criteria for a single funding round, measuring whether the framework improves targeting precision, reduces application burden, and simplifies adjudication compared to traditional approaches.
Benefit: Clear, verifiable eligibility that directs public funding to intended recipients whilst protecting applicant confidentiality.
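As a sketch of what such thresholds look like operationally, the function below encodes the pilot criterion suggested later in this paper (eligible: C/B/BB studios; ineligible: projects with A-or-above publisher backing). The specific rule values are assumptions each grant body would set for itself:

```python
ELIGIBLE_STUDIO_TIERS = {"C", "B", "BB"}       # assumed programme threshold
BLOCKING_PUBLISHER_TIERS = {"A", "AA", "AAA"}  # assumed: A+ backing disqualifies

def grant_eligible(studio_tier: str, publisher_tier: str | None) -> bool:
    """Apply tier-based eligibility rules; True means the project may apply."""
    if studio_tier not in ELIGIBLE_STUDIO_TIERS:
        return False
    return publisher_tier not in BLOCKING_PUBLISHER_TIERS

grant_eligible("B", None)   # True: small studio, no publisher attached
grant_eligible("B", "AA")   # False: well-resourced publisher backing
```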
Problem: “Best Indie Game” categories pit solo developer passion projects against well-funded 40-person studio productions, creating unfair competition and community backlash. Awards bodies struggle to define meaningful categories without transparent criteria.
Solution: Create award categories by GPC project rating tiers:
Why this works: Projects compete against peers with similar resource contexts. A solo developer rated C competes against other C-rated projects, not against an A-rated 30-person studio with publisher backing. Categories reflect verifiable structural criteria, not subjective cultural identity.
Implementation pathway: Awards programmes can pilot tier-based categories for a single awards cycle, evaluating whether entrants, judges, and community perceive the categories as fair and meaningful.
Benefit: Fair competition within peer groups, reduced category controversy, and celebration of excellence across production scales.
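The tier-to-category mapping is mechanically simple, as the sketch below shows using the tier groupings suggested later in this paper (C/B, BB/BBB, A, AA/AAA); the public-facing category names are invented placeholders:

```python
# Tier groupings follow the pilot suggestion in this paper;
# category names are hypothetical examples only.
CATEGORY_BUCKETS = {
    ("C", "B"): "Solo & Small Team",
    ("BB", "BBB"): "Mid-Scale Production",
    ("A",): "Large Production",
    ("AA", "AAA"): "Flagship Production",
}

def award_category(project_tier: str) -> str:
    """Return the competition category for a GPC capacity tier."""
    for tiers, name in CATEGORY_BUCKETS.items():
        if project_tier in tiers:
            return name
    raise ValueError(f"Unknown tier: {project_tier}")
```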
Problem: Publishers and investors conduct extensive due diligence on potential partners and portfolio candidates. Each evaluation requires bespoke analysis of team capacity, financial backing, production infrastructure, and growth trajectory. This process is time-intensive and inconsistent across deals.
Solution: GPC ratings provide standardised due diligence infrastructure:
Why this works: Publishers and investors already perform capacity assessment as part of deal evaluation. GPCS standardises this analysis into a reusable framework, enabling faster initial filtering and comparative analysis across prospects.
Implementation pathway: Publishers or investors can use GPCS internally for portfolio analysis, scouting filters, or pre-screening before full due diligence.
Benefit: Structured discovery based on transparent criteria. Faster initial assessment, meaningful peer comparison, and reduced due diligence overhead for obvious mismatches.
Problem: Platform holders (console manufacturers, storefront operators, engine providers) support thousands of developer partners with differentiated programmes. Segmentation decisions are often ad-hoc or based on retroactive success metrics, missing opportunities to provide appropriate support during development.
Solution: Use GPC ratings for differentiated partner support:
Why this works: Partner segmentation already exists informally. GPCS formalises the criteria, making segmentation transparent and defensible. Partners understand where they fit and what support to expect.
Implementation pathway: Platforms can pilot GPCS segmentation internally for developer programme management, testing whether the tiers align with operational needs and partner expectations.
Benefit: Appropriate support allocation based on project scale, transparent segmentation criteria, and efficient resource distribution across partner base.
Problem: Industry analysts, researchers, and journalists use inconsistent definitions when segmenting the market, making trend analysis and cross-source comparison impossible.
Solution: Standardised GPC project ratings enable:
Benefit: “What percentage of C-rated projects achieve R2+ revenue outcomes?” becomes an answerable research question. Market reports can track how rating tiers correlate with commercial outcomes, development timelines, and platform strategies.
Caveat: This requires critical mass adoption. With 50 rated projects, the dataset is too small for meaningful analysis. With 5,000 rated projects, tier-based trend analysis becomes viable.
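Once a registry export exists, such questions reduce to one-line aggregations. A minimal sketch using pandas, with a hypothetical export format and assumed R0-R9 revenue-outcome bands:

```python
import pandas as pd

# Hypothetical registry export: one row per rated project, with its capacity
# tier and an (assumed) revenue-outcome band such as R0, R1, R2, R3.
projects = pd.DataFrame({
    "tier": ["C", "C", "C", "B", "A"],
    "revenue_band": ["R0", "R2", "R3", "R1", "R3"],
})

c_rated = projects[projects["tier"] == "C"]
# Lexicographic comparison works for single-digit bands (R0..R9).
share = (c_rated["revenue_band"] >= "R2").mean()
print(f"{share:.0%} of C-rated projects reached R2+")  # 67%
```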
Problem: Studios struggle to position their projects accurately for press, platforms, and partners. “Indie” is both overused and meaningless; “AA” is subjective.
Solution: GPC project ratings provide:
Benefit: A studio can say “This is an A-rated project” and stakeholders immediately understand the resource backing and production context, without needing detailed budget disclosure or subjective labelling debates.
Caveat: This depends on industry recognition. If GPCS is unknown, the rating communicates nothing. Positioning value grows with adoption.
Problem: The informal “AAA” label has become so overused that it risks losing meaning. When every major release is marketed as “AAA quality”—or even “AAAA” to signal something beyond AAA—the terminology inflates endlessly without conveying useful information. Publishers with genuinely high-tier productions cannot differentiate themselves from competitors making unsubstantiated claims.
Key insight: GPCS operates as B2B infrastructure, separate from consumer marketing. Publishers retain full control over how they market to audiences—“AAA experience,” “indie gem,” or any other terminology. GPCS provides the verifiable backing behind those claims when stakeholders need evidence.
What GPCS provides:
How this works in practice:
A publisher’s marketing team promotes their new release as a “groundbreaking AAA experience.” Separately, the project holds a verified AAA/I3 GPC rating. The marketing language speaks to consumers; the GPC rating speaks to awards bodies evaluating category placement, platform partners allocating support, and press seeking accurate context. Neither constrains the other.
Benefit: Publishers gain a credibility tool that supports rather than limits their marketing goals. Verified ratings provide substance behind aspirational claims, whilst informal marketing language remains unconstrained.
Problem: Digital storefronts struggle to curate and surface games appropriately. Algorithmic recommendations and editorial features lack context about production scale.
Potential solution: Platforms could integrate GPC ratings into:
Why this is speculative: Consumer-facing use requires:
Testing pathway: This use case should be tested only after concentrated adoption in professional contexts (grants, awards, platform partnerships). Consumer-facing integration may prove unnecessary or counterproductive.
Alternative framing: If consumer-facing use proves viable, platforms might translate tiers into simplified language (e.g., “Small Team Production”, “Mid-Size Studio”, “Major Studio”) whilst retaining precise tier classification in backend systems.
Summary: GPCS is optimised for professional stakeholders making operational decisions based on resource context. The framework may prove most valuable as back-office infrastructure for grants, awards, and partner management, rather than as consumer-facing labelling. Testing will determine which use cases provide genuine value.
This section examines existing approaches to studio and project classification within gaming and adjacent creative industries, providing context for the GPCS framework’s design decisions.
The game industry’s lack of standardised classification stands in contrast to other creative sectors:
Film Industry: Motion pictures are classified by budget tiers with formally defined thresholds through SAG-AFTRA agreements: Ultra Low Budget (under $300K), Moderate Low Budget ($300K–$700K), Low Budget ($700K–$2M), and Basic Theatrical (over $2M) (SAG-AFTRA, 2024). The Motion Picture Association provides content ratings, while production context is communicated through distributor relationships and marketing positioning.
Music Industry: The distinction between major labels, independent labels, and self-released artists is well-established. The Association of Independent Music (AIM) formally defines a “major” as a multinational with over 5% world market share, with independents defined as labels not majority-owned by Sony, Warner, or Universal (AIM/IMPALA).
Publishing Industry: Book publishing distinguishes between the “Big Five” publishers (Penguin Random House, HarperCollins, Simon & Schuster, Hachette, Macmillan), mid-size houses, small presses, and self-publishing. The Big Five control approximately 80% of the US trade market and generate 64% of industry revenue (WordsRated, 2022).
Several organisations have attempted partial solutions to classification:
Platform-Specific Programmes: Steam, PlayStation, Xbox, and Nintendo each maintain developer programmes, though these are largely unified rather than tiered. Steam Direct charges a flat $100 fee regardless of studio size; ID@Xbox provides two free dev kits to all approved developers; PlayStation Partners and Nintendo Developer Portal operate on case-by-case support allocation rather than formal classification tiers.
Trade Organisations: Bodies such as IGDA (International Game Developers Association) and regional equivalents provide community support but have not established formal classification standards. IGDA’s Developer Satisfaction Survey defines indie developers simply as “any entity that is independently owned, irrespective of external investments or industry ties” (IGDA DSS).
Awards Bodies: The Game Awards, BAFTA Games, and similar organisations have experimented with categories like “Best Independent Game” but apply inconsistent eligibility criteria. The Game Awards publishes no formal definition; BAFTA’s rulebook states only that “subsidiaries owned by established studios are not generally eligible… but may be eligible should they be found to be within the spirit of the award” (BAFTA, 2025).
Market Research Firms: Newzoo, Sensor Tower, and similar analysts use varying definitions when segmenting the market. Newzoo categorises by price point (indie: ≤$30, AA: $31–50, AAA: $51+), while Gamalytic uses lifetime Steam revenue (indie: $10K–$50M, AA: $50M–$500M, AAA: $500M+)—fundamentally different approaches yielding incompatible classifications.
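The incompatibility is easy to demonstrate: the two published rule sets, applied to the same title, disagree. A minimal sketch (thresholds as summarised above; Gamalytic's sub-$10K band is omitted for brevity):

```python
def newzoo_class(price_usd: float) -> str:
    """Newzoo's price-point segmentation, as summarised above."""
    if price_usd <= 30:
        return "indie"
    if price_usd <= 50:
        return "AA"
    return "AAA"

def gamalytic_class(lifetime_revenue_usd: float) -> str:
    """Gamalytic's lifetime-Steam-revenue segmentation, as summarised above."""
    if lifetime_revenue_usd < 50_000_000:
        return "indie"
    if lifetime_revenue_usd < 500_000_000:
        return "AA"
    return "AAA"

# The same $25 game with $120M lifetime revenue is "indie" to one
# analyst and "AA" to the other.
print(newzoo_class(25), gamalytic_class(120_000_000))  # indie AA
```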
Limited formal research exists on game project classification and studio segmentation:
The most significant recent attempt at game classification is the HushCrasher Classification System (HCS 1.0), developed by Antoine Mayerowitz and Julie Belzanne and published in 2024-2025 (Mayerowitz & Belzanne, 2025).
Methodology: HCS uses machine learning clustering on Steam data since 2006, enhanced with Mobygames credits data. The system classifies games based on two primary metrics:
Categories: HCS proposes four tiers: Kei (solo/tiny teams), Midi (small-medium studios), AA, and AAA.
Key Findings: HCS analysis revealed that small-scale games (“Kei”) now represent 75% of 2024 releases, a 16-fold increase since 2017. Despite this market flooding, these games maintain a stable ~25% revenue share. Median per-game revenue collapsed 97% between 2012 and 2018.
Limitations for Project Rating: While HCS provides valuable market analysis, it addresses a different problem than GPCS:
| Dimension | HCS | GPCS |
|---|---|---|
| Classifies | Games (products) | Projects (with source breakdown) |
| Timing | Retrospective (post-release only) | Prospective (any development stage) |
| Primary metrics | Credits + file size | Studio capacity, publisher/funder resources, support structure |
| Funding visibility | None | Source ratings and general scale |
| Publisher/funder role | Not considered | Core rating dimension |
| Self-classification | No (data-derived) | Yes (voluntary) |
| Rating nomenclature | New terminology (Kei, Midi) | Industry-familiar (AAA/AA/A/BBB/BB/B/C) |
| Bond-style ratings | No | Yes (borrows from financial credit rating structure) |
HCS cannot rate a project before release, cannot distinguish between a bootstrapped solo developer and one with $2M in venture funding plus AA publisher backing, and introduces new terminology that may face adoption resistance.
Complementary Approaches: GPCS and HCS solve different problems and could coexist. GPCS rates the project’s resource backing and source structure; HCS classifies the output’s production scope based on credits and file size. A BBB-rated project by GPCS might produce a Midi-scale game by HCS metrics, indicating the team punched above their weight in execution.
Current approaches share common limitations:
| Approach | Limitation |
|---|---|
| Platform programmes | Proprietary, platform-specific, not transferable |
| Trade organisations | Community-focused, not classification-focused |
| Awards bodies | Ad-hoc definitions, inconsistent year-to-year |
| Market research | Internal methodologies, not publicly standardised |
The GPCS addresses these gaps by providing an open, platform-agnostic, multi-dimensional framework with transparent criteria.
The GPCS is built on these non-negotiable principles:
Rating is opt-in. Projects choose to disclose their sources and receive a rating. No project is rated without consent from the primary stakeholders (studio and publisher/funder, if applicable).
Projects never need to reveal exact budgets, revenue figures, or confidential business agreements. All inputs use safe, bracketed ranges and structural indicators.
The rating methodology is fully public. Any project can understand exactly how it would be rated before participating. Source tier definitions and combination formulas are openly documented.
Ratings are descriptive, not hierarchical. A C-rated project is not inferior to an AAA-rated project; they represent different resource contexts. The framework avoids value judgements about artistic merit, commercial potential, or quality.
Ratings are not permanent. Projects can be re-rated at major milestones (funding rounds, publisher signings, release). Historical ratings are preserved to track evolution and growth over time.
Projects may receive multiple ratings throughout their development lifecycle (Concept, Production, Release, Live). The following rules govern which rating is canonical:
Ratings are snapshots: Each rating has a timestamp, version indicator, and development stage. All ratings are preserved in the registry.
Release Rating is canonical: The rating generated at or closest to the public release date serves as the canonical rating for public reference, award eligibility, and category placement. This reflects the project’s actual production circumstances at launch.
Post-release ratings: Projects may receive updated ratings post-release (e.g., after major expansions, funding rounds, or structural changes). These exist as “Current Rating” but do not overwrite the Release Rating. Both are displayed in the registry with clear stage labels.
Badge display: Ratings include a Stage indicator (Concept / Production / Release / Live) to clarify which development phase the rating represents. Example: A/I1 • Verified ●●○ • Release • v0.6 • ID: 01JH8M3KQ4
Practical implication: A project rated BBB during production that secures additional funding and is re-rated A at release will use the A (Release) rating for public purposes. The BBB (Production) rating remains in the historical record.
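In registry terms, these rules amount to a simple selection function over a project's rating history. A minimal sketch with assumed field names; “at or closest to the public release date” is approximated here as the most recent Release-stage snapshot:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RatingSnapshot:
    tier: str      # e.g. "A/I1"
    stage: str     # Concept | Production | Release | Live
    issued: date

def canonical_rating(history: list[RatingSnapshot]) -> RatingSnapshot:
    """Release-stage rating is canonical; otherwise fall back to the newest snapshot."""
    releases = [r for r in history if r.stage == "Release"]
    if releases:
        return max(releases, key=lambda r: r.issued)
    return max(history, key=lambda r: r.issued)

def current_rating(history: list[RatingSnapshot]) -> RatingSnapshot:
    """Most recent snapshot of any stage; displayed alongside, never overwriting."""
    return max(history, key=lambda r: r.issued)
```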
Projects may receive support from sources that exit before the 1.0 release. GPCS handles this through the Development History Model, which ensures transparency about how projects were actually made whilst accurately reflecting their release state.
Core principles:
Example:
A project receives $2M funding from Publisher A during 2022-2024 development. The publisher relationship ends. The studio self-publishes at 1.0 in 2025.
Certificate shows:
The Release rating (A/I0) is used for awards eligibility, public display, and category placement. The Development History provides full transparency about resources that enabled the project.
Why this matters:
Note on evolving ratings: The same studio can have different independence ratings across projects, or even across a single project’s lifecycle. A project that started with publisher involvement (I1) but released self-published (I0) has the canonical Release rating of I0 — the rating reflects the state at launch, not development history. Both states are documented; only the Release rating is canonical.
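A certificate record under the Development History Model might carry the exited source as structured data alongside the canonical Release rating. A hypothetical sketch of the Publisher A example above, using the bracketed scale ranges GPCS already requires:

```python
from dataclasses import dataclass, field

@dataclass
class SupportEntry:
    source: str   # e.g. "Publisher A"
    role: str     # e.g. "publisher", "funder"
    period: str   # e.g. "2022-2024"
    scale: str    # bracketed range, never an exact figure

@dataclass
class Certificate:
    release_rating: str  # canonical rating, e.g. "A/I0"
    development_history: list[SupportEntry] = field(default_factory=list)

cert = Certificate(
    release_rating="A/I0",  # self-published at 1.0
    development_history=[
        SupportEntry("Publisher A", "publisher", "2022-2024", "$1M-$5M"),
    ],
)
```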
Studios are rated as sources, not as entities with fixed classifications. The same studio can have multiple projects with different ratings simultaneously, reflecting variable resource commitment and strategic priorities.
The framework works across regions. Rating reflects structural capacity and resource backing, not geographic cost differentials.
Core principle: GPCS measures capacity (infrastructure, headcount, track record), not burn-rate equivalence or purchasing power parity.
What this means in practice:
A 30-person studio in Poland and a 30-person studio in California are both rated based on:
They are NOT differentiated based on:
Justification:
GPCS is a production capacity framework, not a financial equivalence model. The rating answers: “What resources and infrastructure does this project have access to?” not “What would this budget buy in San Francisco?”
Example: Studio A, a 30-person team based in Poland, and Studio B, a 30-person team based in San Francisco, operate with substantially different budgets in nominal terms.
Both studios are likely rated BBB or A based on team size and infrastructure, despite the budget difference. The higher cost of living in San Francisco does not reduce Studio B’s rating, nor does Poland’s lower cost elevate Studio A’s rating. Both have similar structural capacity to execute.
Why no cost-of-living adjustment?
Capacity is structural: A 30-person team can produce a similar scope of content regardless of geographic salary costs. The team’s coordination overhead, communication complexity, and production challenges are similar.
Avoids unverifiable claims: Cost-of-living adjustments require detailed financial disclosure (exact salaries, regional multipliers) that conflicts with GPCS’s principle of non-sensitive disclosure.
Simplicity and transparency: Geographic multipliers add complexity and introduce subjective regional classifications. Which tier is Thailand? Argentina? Portugal? The framework would become unmanageably complex.
Globalisation reality: Many studios are distributed across regions, employ remote contractors from multiple countries, and outsource to lower-cost partners. Geographic cost adjustment becomes meaningless in this context.
Limitation acknowledged:
This approach will face criticism. A $1M budget in Warsaw objectively produces more output than a $1M budget in San Francisco due to salary cost differentials. GPCS explicitly does not attempt to normalise for this. The framework describes structural capacity and resource backing, not output efficiency per dollar.
Stakeholders should interpret ratings accordingly:
Future consideration:
If the framework achieves widespread adoption and stakeholders request regional cost indexing, this could be revisited in a future version. Current priority is structural simplicity and global consistency.
This framework acknowledges four significant adoption barriers that will affect its viability. Honesty about these challenges is essential for realistic implementation planning.
The Problem: GPCS proposes seven capacity tiers (AAA, AA, A, BBB, BB, B, C) with optional modifiers, plus four independence markers (I0-I3), plus verification levels, plus lifecycle stages. This is substantially more complex than the existing informal three-tier system (Indie, AA, AAA).
Why it matters: Complexity creates adoption friction. Stakeholders must learn the system, explain it to others, and justify the overhead. Simple systems spread faster, even if they are less accurate.
Mitigation:
Realistic assessment: This framework will not replace casual use of “indie” in everyday discourse, nor should it. The goal is to provide infrastructure for professional contexts where precision matters, not to police informal language.
The Problem: Studios must invest time to complete the classification questionnaire and potentially disclose information about team size, funding sources, and publisher relationships. What incentive do they have to participate?
Why it matters: Voluntary standards fail if stakeholders see no immediate benefit. GPCS cannot succeed if projects simply ignore it.
Mitigation:
Realistic assessment: Initial adoption will be driven by specific programme requirements, not organic grassroots uptake. This is acceptable. Standards often begin in concentrated contexts before expanding.
The Problem: Even bracketed ranges (e.g., “30-80 staff”, “$1M-$5M budget”) reveal competitive intelligence. Publishers may not want competitors to know deal structures. Studios may not want to disclose funding situations.
Why it matters: If projects cannot participate without exposing sensitive information, they will opt out entirely.
Mitigation:
Realistic assessment: Some projects will still opt out due to confidentiality concerns. This is acceptable. The framework serves those willing to participate; it does not require universal participation to be valuable.
The Problem: Standards require critical mass to be useful. If only 50 projects have GPC ratings, the framework provides limited value. If 5,000 projects are rated, it becomes essential infrastructure.
Why it matters: Early adopters bear costs (learning the system, completing questionnaires) whilst receiving minimal benefit (few peers to compare against, limited industry recognition). This creates a chicken-and-egg problem.
Mitigation:
Realistic assessment: Achieving network effects will take years, not months. Version 0.6 is designed for concentrated pilot implementation, not industry-wide transformation. Success means 500-1,000 projects rated within specific programmes, not tens of thousands across the entire industry.
Honest possibility: The framework may prove more valuable as back-office infrastructure for professional transactions (grant eligibility, publisher due diligence, awards adjudication, platform partner segmentation) than as a consumer-facing classification system visible on store pages and marketing materials.
Why this matters: If GPCS becomes standard for grants and awards but never achieves mass public recognition, that is still a successful outcome. The value lies in providing precise, verifiable classification for stakeholders who need it, not in replacing casual discourse or consumer-facing labels.
Testing will determine optimal use cases. Version 0.6 is designed to discover where the framework provides most value, not to prescribe a single adoption path.
Real-world implementation is essential to validate the assumptions underlying GPCS. This section outlines the testing approach, expected pilot contexts, and commitment to iteration based on evidence.
Core principle: No classification framework is correct in theory alone. The methodology must be tested against real projects, real stakeholder needs, and real operational constraints.
What testing will validate:
What testing will surface:
Context: Implement GPCS as the classification system for an awards programme, requiring all entrants to classify their projects.
What this tests:
Expected outcomes:
Timeline: 6-12 months from launch to post-awards analysis
Context: Partner with a grant programme to use GPCS source ratings for eligibility criteria (e.g., “eligible: C/B/BB studios only, ineligible: projects with A+ publisher backing”).
What this tests:
Expected outcomes:
Timeline: 12-18 months, aligned with grant programme cycles
Context: Use GPCS ratings internally within a platform partner programme to segment developers for differentiated support (e.g., C/B projects receive onboarding support, A/AA projects receive advanced technical resources).
What this tests:
Expected outcomes:
Timeline: 18-24 months, contingent on partner interest
Quantitative data:
Qualitative feedback:
Publication commitment:
Version 0.6 is explicitly experimental. The framework will evolve based on implementation evidence.
Expected adjustments:
Transparency principle: All methodology changes will be documented publicly with rationale, supporting data, and impact analysis (how many existing ratings would change?).
Backward compatibility: Ratings issued under v0.5 will remain valid and grandfathered. Projects can optionally re-rate under future versions. See Section 12.3 for details.
Version 1.0 will be released when the following conditions are met:
Minimum testing threshold:
Methodology stability:
Stakeholder validation:
Governance readiness:
Timeline: Expected 18-36 months from v0.6 publication to v1.0 release.
The author welcomes:
Contact information: https://koldfu5ion.github.io/gpcs/
Community contribution: GPCS development will be conducted openly. Feedback, proposals, and pilot data will be shared publicly (with appropriate anonymisation) to enable community review and collaborative refinement.
Standards are not granted authority; they earn it through adoption. The ESRB (Entertainment Software Rating Board) did not begin with legal authority. It was created by the industry to self-regulate, and became authoritative because retailers and platform holders chose to require it. Similarly, PEGI (Pan European Game Information) became the de facto European standard not through legislation, but through voluntary adoption by industry stakeholders who found it useful.
GPCS follows this model. The framework becomes valuable when projects self-rate, when awards bodies use GPC ratings for categories, when grant programmes reference GPCS for eligibility, and when publishers and platforms filter by GPC ratings. Each adoption makes the next one easier, creating network effects that compound over time.
The GPCS is maintained by its creator as an open standard. The framework documentation is publicly available, and feedback is welcomed.
The framework will evolve as the industry changes. This section outlines the concrete mechanics for proposing, reviewing, and implementing changes.
Who can propose changes:
What can be proposed:
Proposal requirements:
Quarterly review cycles (Q1, Q2, Q3, Q4):
Annual major release (January each year):
Projects may dispute ratings or verification decisions within 30 days of issuance by submitting additional evidence to the GPCS registry. Each dispute is reviewed by an independent triad: one registry representative plus two external experts with no affiliation to the project or original auditor. The panel can uphold, adjust, or revoke the rating and may request additional verification or a new audit. Decisions are communicated within 30 days and are binding; frivolous or repetitive disputes without new evidence may result in temporary submission restrictions. Registry entries display a “Dispute Pending” flag while a case is under review.
Version 0.6 governance:
Transition criteria to multi-stakeholder governance:
When advisory board is established:
Board composition:
Decision thresholds:
Conflict of interest rules:
When methodology changes:
Option A: Grandfathered ratings (preferred for early versions)
Option B: Universal re-rating (used sparingly)
Version numbering:
Public documentation:
Change categories:
New content:
Clarifications:
No methodology changes: Tier definitions, scoring constants, and combination formulas remain unchanged from v0.5.
Contributors: Guy Cunis (publisher incentives feedback), Maxim Somoleynko (terminology inflation observations), Sami Hayfal (adoption challenge analysis)
Version 0.5.0 - July 2025
Major changes:
Minor changes:
Impact:
Rationale:
As adoption grows, governance may evolve to include:
The priority is adoption first, formalised governance second. Governance overhead should match adoption scale.
GPCS is designed as infrastructure for stakeholders who need precise, verifiable classification. The framework is not a finished standard demanding immediate adoption, but rather a comprehensive proposal inviting experimental implementation, testing, and feedback.
Interested stakeholders are invited to:
Awards and showcases: Consider using GPC rating tiers for competition categories in a single awards cycle. Test whether tier-based categories (C/B, BB/BBB, A, AA/AAA) produce fairer competition and reduce mismatched entries. Evaluate developer and community response.
Grant programmes and funding bodies: Consider using GPCS source ratings to define eligibility criteria for one funding round. Test whether the framework reduces application burden, improves targeting precision, and simplifies adjudication compared to traditional financial disclosure.
Platforms and publishers: Consider using GPC ratings internally for developer programme segmentation, portfolio benchmarking, or scouting filters. Test whether the classifications align with your operational needs and provide actionable insights.
The author welcomes critical engagement:
Honest critique improves the framework. Version 0.6 is explicitly a proposal, not a decree. If critical review reveals fundamental flaws, those findings are valuable.
Phase 1 (2026-2027): Concentrated pilot testing
Phase 2 (2028+): Expanded adoption (contingent on Phase 1 success)
Success is not guaranteed. The framework may prove most valuable in limited contexts (grants, awards) rather than industry-wide. Testing will determine optimal use cases.
Studios and developers: Self-rate projects voluntarily to establish transparent positioning. Use GPC ratings when applying for grants, awards, or platform programmes that recognise the framework. Track growth trajectory across multiple projects.
Press and analysts: Adopt GPC ratings terminology when covering projects that have classified themselves. Use tier-based segmentation for market analysis and trend reporting when sufficient data exists.
Research community: Use GPCS data (when available) for industry studies, production scale analysis, and outcome correlation research. Contribute findings back to framework refinement.
GPCS is infrastructure for stakeholders who need precision. It serves professional contexts—grants, awards, platform partnerships, publisher due diligence—where verifiable classification provides operational value.
GPCS complements informal language; it doesn't replace it. Developers will continue to use “indie” casually, and that's appropriate. The framework provides precision when precision matters; everyday discourse remains unchanged.
GPCS is designed to evolve. Version 0.6 is a comprehensive proposal, not a final standard. Real-world implementation will surface refinements, and those refinements will strengthen the framework.
Pilot partnerships: Interested in implementing GPCS in your awards programme, grant body, platform programme, or organisation? Contact the author to discuss pilot structure, data sharing, and collaborative refinement.
Feedback and proposals: Submit critiques, edge cases, methodology proposals, or alternative frameworks via https://koldfu5ion.github.io/gpcs/
Community development: GPCS development will be conducted openly. Pilot data, feedback summaries, and methodology adjustments will be shared publicly (with appropriate anonymisation) to enable community review.
The goal is not adoption for its own sake. The goal is to discover whether this framework solves real problems for real stakeholders. If testing reveals that it does not, that finding is equally valuable.
The author thanks early reviewers for critical feedback that shaped the adoption framing, challenge acknowledgment, and testing approach in version 0.6. Constructive critique identifying adoption barriers and methodology concerns has been essential to producing a realistic proposal rather than an overconfident prescription.
Guy Cunis — For identifying the omission around publisher marketing incentives and suggesting that GPCS should demonstrate how it supports rather than limits publisher goals. His feedback shaped the new section on publisher portfolio strategy and the clarification that GPCS is B2B infrastructure, not consumer marketing language.
Maxim Somoleynko — For extensive section-by-section review, raising concerns about complexity and consumer adoption, and providing specific technical suggestions including team size tier refinements and financial scale considerations. His observation about “AAAA” terminology inflation reinforced the case for a formal classification system that protects against endless label escalation.
Sami Hayfal — For rigorous critical feedback questioning whether the problem being solved is severe enough to warrant adoption, highlighting complexity as a barrier to practical implementation, and stress-testing the value proposition across different stakeholder groups. His honest assessment that adoption potential is “extremely low” without addressing complexity concerns informed the framework’s emphasis on concentrated pilot testing rather than broad rollout.
Purpose: Provide a stable, scalable identifier for each project record that supports large registries and fast search without requiring users to type the full identifier.
- gpc_id (recommended: UUIDv7; stored in full in the registry/certificate and used in APIs)
- short_id = first 10 characters of gpc_id (shown on badges by default)
- A short_id is a display convenience: consumers resolve it through the registry to the full gpc_id and MUST NOT be required to derive it.
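A minimal sketch of this scheme follows. Widely deployed Python versions do not ship a UUIDv7 generator in the standard uuid module, so a small RFC 9562-style generator is included; the Crockford base32 rendering (which the example IDs such as 01JH8M3KQ4 resemble) is an assumption, not a normative part of GPCS:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """RFC 9562 UUIDv7: 48-bit Unix-ms timestamp + version/variant bits + randomness."""
    raw = bytearray(int(time.time() * 1000).to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70  # version 7
    raw[8] = (raw[8] & 0x3F) | 0x80  # RFC variant
    return uuid.UUID(bytes=bytes(raw))

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def to_base32(u: uuid.UUID) -> str:
    """ULID-style Crockford base32 rendering (26 chars); an assumed encoding."""
    n, out = u.int, []
    for _ in range(26):
        out.append(CROCKFORD[n & 0x1F])
        n >>= 5
    return "".join(reversed(out))

gpc_id = uuid7()
short_id = to_base32(gpc_id)[:10]  # first 10 characters, as shown on badges
```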
This work is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
You are free to: share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material) for any purpose, even commercially.
Under the following terms: attribution — you must give appropriate credit, provide a link to the license, and indicate if changes were made.
For questions, feedback, or adoption enquiries, please open an issue on this repository or contact the author.
Devon Stanton
Creator, Game Project Classification Standard (GPCS)
Game Project Classification Standard (GPCS): Bringing clarity to an industry that deserves better.