
GPCS Frequently Asked Questions

Companion document to: GPCS White Paper
Version: 0.5.0
Last Updated: January 2026


This document addresses common questions raised during stakeholder review of the Game Project Classification Standard. For detailed methodology, refer to the cited sections in the main whitepaper.


General Framework Questions

Q: Why bond-style nomenclature (AAA/AA/A etc.)?

GPCS adopts familiar nomenclature precisely because AAA/AA are already industry convention. The framework formalises existing informal terminology, reducing adoption friction. We borrow only the scale from financial ratings, not the concept of creditworthiness. See Section 4.2 for detailed rationale.

Q: Is GPCS meant for consumers or professionals?

GPCS is designed as B2B infrastructure for professional stakeholders (grants, awards, publishers, platforms). Consumer-facing contexts may use colloquial aliases: C/B/BB → “Indie”, BBB/A → “Mid-Tier”, AA/AAA → “Major Production”. The underlying GPCS rating provides precision; aliases provide accessibility.
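The tier-to-alias mapping above can be sketched as a simple lookup. This is an illustrative sketch only; the function name and data structure are assumptions, not part of the GPCS specification:

```python
# Consumer-facing aliases for GPCS capacity tiers, per the mapping above.
# Structure and names are illustrative; GPCS defines only the mapping itself.
ALIASES = {
    "C": "Indie", "B": "Indie", "BB": "Indie",
    "BBB": "Mid-Tier", "A": "Mid-Tier",
    "AA": "Major Production", "AAA": "Major Production",
}

def consumer_alias(tier: str) -> str:
    """Return the colloquial alias for a GPCS capacity tier."""
    try:
        return ALIASES[tier]
    except KeyError:
        raise ValueError(f"Unknown GPCS tier: {tier!r}")
```

A backend would store the precise rating and apply this mapping only at the presentation layer, e.g. `consumer_alias("BB")` yields `"Indie"` on a store page while `"BB"` remains in the registry.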

Q: Can ratings be used as public-facing aliases?

Yes, with appropriate context. Professional contexts should use the precise rating (e.g., “A/I1 — Verified”). Consumer-facing contexts (store pages, marketing) can simplify to intuitive categories (“Mid-Size Production”) whilst retaining the precise classification in backend systems. See Implementation Brief for suggested category mappings.


Rating Methodology Questions

Q: How are veteran developers in new studios rated?

Track record indicators explicitly account for “strong pedigree from AAA backgrounds” (A-tier) and “experienced leads with unshipped project in development” (BBB-tier). A newly formed studio with veteran leadership rates higher than a first-time team with no industry experience. See Section 6.1 tier definitions.

Q: Can free-to-play games earn AAA ratings?

Yes. GPCS rates production capacity and resource backing, not business model. An F2P game with 200+ staff, substantial infrastructure, and major publisher backing rates AAA. Monetisation model affects Independence markers (who controls monetisation decisions) but not Capacity ratings.

Q: How does monetisation affect ratings?

Monetisation model (premium, F2P, subscription) does not affect the Capacity rating. It may influence Independence markers if external parties control monetisation decisions. A self-published F2P game where the studio controls monetisation is I0; a publisher-mandated monetisation model may indicate I2. The Capacity rating reflects production scale regardless.

Q: Why isn’t team size differentiated above 200+?

The current 200+ band is flagged for calibration. Future versions may introduce 200-500 and 500+ bands if testing reveals meaningful operational differences. For v0.5, 200+ captures “large-scale studio infrastructure” which is the key distinction from smaller tiers. See Section 7.10 (Calibration Plan) for the validation process.

Q: Why isn’t budget differentiated above $50M?

Similar to team size, budget bands are v0.5 defaults subject to calibration. Real-world AAA budgets now regularly exceed $100M, with some approaching $500M+. Future versions will refine upper bands based on industry data. Current thresholds establish the framework structure; precise boundaries require validation.


Project Lifecycle Questions

Q: How are remakes/remasters/ports handled?

GPCS v0.5 rates original releases (1.0). Remakes, remasters, and ports present classification challenges (different resource requirements, potentially different publishers, varying degrees of new content) and are deferred to future versions. The original project’s canonical rating is preserved; subsequent versions may be rated separately if substantially different in production scope.

Q: Are ratings locked after 1.0 release?

Yes. The Release rating is canonical and immutable. Post-launch studio changes (layoffs, acquisitions, closures) affect future project ratings, not the released project’s rating. See Section 5.4, Rule 1, and Section 10.5 (Lifecycle Awareness).

Q: What happens if a publisher/funder exits before 1.0 release?

Their contribution is recorded in the Development History, not erased. The Release rating reflects the current state at launch (e.g., self-published = I0), but the certificate shows historical contributions that enabled the project. A studio that received 2 years of publisher funding before self-publishing still has that context visible. See Section 10.5 (Development History Model).

Q: How do historical ratings compare to modern thresholds?

Ratings are timestamped with version (e.g., BBB/v0.5). Historical ratings stand as-is; thresholds evolve through the calibration process (Section 7.10). A 2015 BBB rating remains BBB in the registry. Future tools may provide “equivalent modern rating” calculations for research purposes, but canonical ratings are immutable.
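The two rules above (version-stamped, immutable after issue) can be sketched as a record type. The field names here are hypothetical, not GPCS-defined; the point is that the canonical rating carries its framework version and cannot be mutated:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: canonical ratings are immutable once issued
class ReleaseRating:
    """Illustrative record shape; field names are assumptions, not GPCS-defined."""
    tier: str            # e.g. "BBB"
    independence: str    # e.g. "I0"
    gpcs_version: str    # framework version the thresholds came from, e.g. "v0.5"
    release_year: int

    def label(self) -> str:
        # Timestamped display form, e.g. "BBB/v0.5"
        return f"{self.tier}/{self.gpcs_version}"
```

Under this sketch, a 2015 rating is `ReleaseRating("BBB", "I0", "v0.5", 2015)` forever; any "equivalent modern rating" would be a derived, non-canonical computation over the registry.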


Edge Cases & Verification

Q: What prevents studios from renaming to reset track record?

Verification tiers (Verified, Audited) require evidence of track record including credited titles, company registry data, and LinkedIn profiles. Auditors can trace entity history through corporate records. Deliberate evasion would require fraudulent documentation and would invalidate certification. See Section 7.9 (Verification Tiers).

Q: How is complex nested ownership rated (e.g., Blizzard → ABK → Microsoft)?

The ultimate parent determines the I3 classification. Blizzard (owned by ABK, owned by Microsoft) rates I3 because it operates within a major publisher’s corporate structure. The rating uses the functional relationship (strategic control, budget allocation) rather than just legal ownership layers. See Section 6.4 (Independence Markers).
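Resolving the ultimate parent is a walk up the ownership chain. A minimal sketch, assuming ownership data is available as a child-to-parent mapping (the representation is hypothetical; GPCS prescribes only the principle, including weighing functional control alongside legal ownership):

```python
# Walk an ownership chain to its ultimate parent.
# The dict-based chain is a hypothetical representation of registry data.
def ultimate_parent(entity: str, owners: dict[str, str]) -> str:
    seen = set()
    while entity in owners:
        if entity in seen:
            raise ValueError(f"cyclic ownership data at {entity!r}")
        seen.add(entity)
        entity = owners[entity]
    return entity

owners = {
    "Blizzard": "Activision Blizzard",
    "Activision Blizzard": "Microsoft",
}
# ultimate_parent("Blizzard", owners) resolves to "Microsoft",
# which drives the I3 classification in the example above.
```

In practice an auditor would also check functional indicators (budget allocation, strategic control) rather than relying on registry layers alone.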

Q: What if ownership/relationship info is politically sensitive?

GPCS does not require disclosure of confidential commercial relationships. The “Confidential Publisher” category (Section 5.2) exists precisely for this purpose. Projects can acknowledge external backing without naming specific parties. Verification levels accommodate varying disclosure comfort: Unverified requires no evidence; Verified uses public sources only; Audited keeps sensitive details confidential with the auditor.


Platform & Context Questions

Q: Does team size mean different things for mobile vs. PC?

GPCS measures structural capacity (headcount, infrastructure, process maturity), which translates across platforms. A 200-person mobile studio has similar coordination complexity to a 200-person PC studio. Platform-specific output differences exist but are outside GPCS scope — the rating describes production context, not output volume or platform conventions.

Q: Are ratings platform-specific?

No. A project receives one rating regardless of which platforms it releases on. The rating reflects the production context (studio capacity, publisher backing, independence), not the target platform. A multiplatform release has the same rating across all platforms.


Known Limitations

Q: What are the acknowledged limitations of GPCS v0.5?

GPCS v0.5 explicitly acknowledges several limitations, enumerated in the main whitepaper (including the uncalibrated 200+ team-size and $50M+ budget bands, and the deferral of remakes, remasters, and ports noted above).

These limitations are documented to enable honest evaluation of framework applicability. Stakeholders should assess whether v0.5 defaults are appropriate for their use case or whether they should wait for calibrated thresholds in future versions.


Contributing Questions

Have a question not covered here? The author welcomes feedback and new questions that should be addressed. Contact information and contribution guidelines are available in the main whitepaper.


This FAQ is a companion document to the GPCS White Paper and is licensed under CC BY 4.0.