CiteGist

    How CiteRank works

    CiteRank measures how visible a voice is across AI systems and the open web. It's externally grounded — every component checks signals the rest of the world can verify. We don't credit the platform itself.

    Methodology v5 · Phase 1 active · Last updated May 4, 2026

    What goes into the score

    Peer Standing (PS) · Weight 30

    DR-weighted citations and mentions across the open web

    Moves: Slow (months)

    Citation Rate (CR) · Weight 25

    Named citations in answers from ChatGPT, Claude, Gemini, and Perplexity

    Moves: Volatile (weeks)

    Topical Authority (TA) · Weight 20

    Whether AI answers name this voice when asked about their declared topics

    Moves: Volatile (weeks)

    Content Velocity (CV) · Weight 15

    Posting cadence across owned channels (Substack, podcast, YouTube, etc.)

    Moves: Direct lever

    Query Coverage (QC) · Weight 10

    Whether this voice appears across all three AI query archetypes (top voices, frameworks, recommendations)

    Moves: Volatile (weeks)
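    The five weights sum to 100, so the composite is simply the sum of the dimension scores, each capped at its weight. A minimal sketch (function and variable names are ours, not CiteGist's):

```python
# Hypothetical sketch of the CiteRank composite: each dimension is already
# normalized to its weight ceiling, so the composite is their sum (max 100).
WEIGHTS = {"PS": 30, "CR": 25, "TA": 20, "CV": 15, "QC": 10}

def citerank(scores: dict[str, float]) -> float:
    """Sum dimension scores, clamping each to its weight ceiling."""
    return sum(min(scores.get(dim, 0.0), cap) for dim, cap in WEIGHTS.items())

print(citerank({"PS": 22.5, "CR": 12.5, "TA": 0, "CV": 11, "QC": 6}))  # → 52.0
```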

    Each dimension in detail

    Peer Standing (PS) · 0–30

    What it captures: branded mentions and citations across the open web, weighted by source quality.

    How it's computed: DataForSEO branded-mentions search → de-dup → exclude domains the voice owns (see "Self-citation exclusion" below) → tier-weight against an internal quality_sources table (DR 1–10) → normalize 0–30.

    Self-citation exclusion: A voice's own website doesn't count. If they list oneusefulthing.org as a controlled domain, mentions on oneusefulthing.org are filtered before scoring.

    Why it's slow: PS reflects the cumulative weight of where a voice has been published or referenced. It changes over months, not days.
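    The PS pipeline above (de-dup → self-citation exclusion → DR tier-weighting → normalize) can be sketched as follows. The saturation point that maps a weighted-mention total to a full 30 is an assumption on our part; the methodology doesn't publish that constant.

```python
# Illustrative PS pipeline. MAX_WEIGHTED is an assumed parameter: the
# weighted-mention total that earns the full 30 points.
from urllib.parse import urlparse

MAX_WEIGHTED = 500  # assumed saturation point, not a published constant

def peer_standing(mentions, owned_domains, dr_by_domain):
    """mentions: list of URLs; dr_by_domain: DR tier (1–10) per source domain."""
    seen, total = set(), 0
    for url in mentions:
        domain = urlparse(url).netloc
        if url in seen or domain in owned_domains:
            continue  # de-dup and self-citation exclusion
        seen.add(url)
        total += dr_by_domain.get(domain, 1)  # tier-weight by source quality
    return min(total / MAX_WEIGHTED, 1.0) * 30  # normalize to 0–30
```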

    Citation Rate (CR) · 0–25

    What it captures: how often AI assistants cite this voice's owned content when asked about their topics.

    How it's computed: Direct queries about the voice on each declared topic ("Tell me about {voice} in the context of {topic}", "What are {voice}'s key contributions to {topic}?") → run against ChatGPT, Claude, Gemini, Perplexity → check if any annotation URL matches the voice's controlled domains → CR = cited probes / total probes × 25.

    Anti-gaming guard: if only one of four LLMs cites the voice, the score is halved (cross-platform consistency check).

    Why it's volatile: AI citations shift week to week as model providers re-train and re-rank.
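    The CR formula and its anti-gaming guard reduce to a few lines. A sketch (the probe-result shape is our assumption):

```python
# Sketch of CR: cited probes / total probes × 25, halved when only one of
# the four engines ever cites the voice (cross-platform consistency guard).
def citation_rate(probe_results):
    """probe_results: list of (engine, cited: bool) across all probes."""
    total = len(probe_results)
    cited = sum(1 for _, c in probe_results if c)
    score = cited / total * 25 if total else 0.0
    engines_citing = {e for e, c in probe_results if c}
    if len(engines_citing) == 1:  # single-engine citations are halved
        score /= 2
    return score
```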

    Topical Authority (TA) · 0–20

    What it captures: whether the voice's name shows up when AI is asked about their topic, without being asked about the voice directly.

    How it's computed: Associative queries on each declared topic ("Who are the leading voices on {topic}?") → run against the same four LLMs → 3-state classification on the response: unprompted (full credit, name appears in narrative), on_list (half credit, name in an enumerated list), absent (no credit) → normalize 0–20.

    Anti-gaming guard: if fewer than three of four LLMs mention the voice, the score is zeroed (cross-platform consistency check).

    Why TA can be 0 even with high CR: CR asks "does AI cite this voice when answering questions ABOUT them"; TA asks "does AI surface this voice when discussing the topic at all." A voice can be widely cited (high CR) but not yet associated with the topic (TA=0).
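    The 3-state classification and the zeroing guard can be sketched like this (the averaging step is our reading of "normalize 0–20"):

```python
# Sketch of TA scoring: unprompted = full credit, on_list = half, absent =
# none; average credit scales to 0–20, zeroed when fewer than three of the
# four engines mention the voice at all.
CREDIT = {"unprompted": 1.0, "on_list": 0.5, "absent": 0.0}

def topical_authority(probes):
    """probes: list of (engine, classification) over topics × engines."""
    engines_mentioning = {e for e, c in probes if c != "absent"}
    if len(engines_mentioning) < 3:  # cross-platform consistency guard
        return 0.0
    avg = sum(CREDIT[c] for _, c in probes) / len(probes)
    return avg * 20
```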

    Content Velocity (CV) · 0–15

    What it captures: how active the voice is on owned channels (publishing cadence).

    How it's computed: parse RSS feeds (Substack, podcast) and channel APIs (YouTube, Twitter, TikTok, Instagram, LinkedIn) → assemble all post dates → score recency (0–6: most recent post within 7 / 14 / 30 / 60+ days) + frequency (0–5: posts per week over last 90 days) + consistency (0–4: months of activity).

    Why it's a direct lever: unlike PS/CR/TA, the voice can move CV by posting more.
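    The three CV subscores combine as a simple sum. The text gives the ranges (recency 0–6, frequency 0–5, consistency 0–4) but not the exact band boundaries, so the point mappings below are assumptions:

```python
# Sketch of CV = recency + frequency + consistency. The point values per
# band are assumed; only the subscore ranges come from the methodology.
from datetime import date

def content_velocity(post_dates, today=None):
    """post_dates: datetime.date of every post across owned channels."""
    today = today or date.today()
    if not post_dates:
        return 0
    days_since = (today - max(post_dates)).days
    if days_since <= 7:
        recency = 6
    elif days_since <= 14:
        recency = 4
    elif days_since <= 30:
        recency = 2
    elif days_since <= 60:
        recency = 1
    else:
        recency = 0
    recent = [d for d in post_dates if (today - d).days <= 90]
    frequency = min(int(len(recent) / (90 / 7)), 5)  # assumed: 1 pt per post/week
    consistency = min(len({(d.year, d.month) for d in post_dates}), 4)  # assumed: 1 pt per active month
    return recency + frequency + consistency  # 0–15
```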

    Query Coverage (QC) · 0–10

    What it captures: whether this voice is surfaced by AI across all three major query archetypes — not just one phrasing of the question.

    The three archetypes:

    • Top voices — "Who are the leading voices on {topic}?"
    • Frameworks — "What frameworks define {topic}?"
    • Recommendations — "Who would you recommend I follow for {topic}?"

    How it's computed: for each archetype, we check whether at least one LLM probe (across the declared topics and all four engines) returned a classification of unprompted or on_list rather than absent. If all three archetypes show the voice, QC = 10. Two archetypes = 6–7. One = 3. None = 0.

    Why it complements TA: TA rewards breadth (how many topics × LLMs return the voice). QC rewards depth of association — a voice can have solid TA by appearing frequently in one query pattern while being invisible to the others. QC catches that gap.

    Zero additional LLM cost: QC derives from the same probe results already run for Topical Authority. It adds no inference cost per scoring run.
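    Because QC reuses the TA probe classifications, it reduces to grouping those results by archetype. A sketch (the text gives "6–7" for two archetypes; we resolve that band to 7 here as an assumption):

```python
# Sketch of QC: an archetype counts as covered if any probe for it returned
# unprompted or on_list. Point bands follow the text; the two-archetype
# band (6–7) is resolved to 7 as an assumption.
def query_coverage(probe_log):
    """probe_log: list of (archetype, classification) across topics × engines."""
    covered = {a for a, c in probe_log if c in ("unprompted", "on_list")}
    return {3: 10, 2: 7, 1: 3, 0: 0}[len(covered)]
```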

    How we handle run-to-run variance

    AI responses vary between runs. The same query asked twice can return different lists. CiteRank's CR and TA dimensions are sampled across multiple runs and aggregated as a rolling 4-week median, with a confidence band reported alongside each score.

    A wide confidence band means the AI's answer for this voice is unstable — not necessarily that the voice is invisible. Solo Authority and Team tier runs sample more aggressively to tighten the band.
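    The rolling aggregation described above can be sketched in a few lines. Using the min/max of the window as the confidence band is our assumption; the text doesn't specify the interval:

```python
# Sketch of run-to-run variance handling: a dimension's weekly samples are
# aggregated as a rolling 4-week median, with the window's spread reported
# as a confidence band (min/max is an assumed choice of interval).
from statistics import median

def rolling_score(weekly_samples):
    """weekly_samples: most-recent-last list of scores for one dimension."""
    window = weekly_samples[-4:]  # last 4 weeks
    return median(window), (min(window), max(window))
```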

    AI-Recognized — a separate monthly check

    Once a month, we ask each LLM "Tell me about {voice} in the context of {topic}." A Claude Haiku judge verifies whether the response accurately describes the voice (matches their LinkedIn role, primary domain, and recent works). If 3 of 4 LLMs return accurate descriptions, the voice earns the AI-Recognized badge.

    AI-Recognized doesn't feed the composite score — it's a binary milestone, separate from CiteRank, designed to give early-stage voices an achievable goal beyond raw score climbing.
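    The badge check itself is a simple 3-of-4 threshold over the judge's verdicts. A sketch (the input shape is our assumption):

```python
# Sketch of the monthly AI-Recognized check: the badge is earned when at
# least 3 of the 4 engines return a description the judge deems accurate.
def ai_recognized(judgments):
    """judgments: dict of engine -> bool (judge said description is accurate)."""
    return sum(judgments.values()) >= 3
```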

    What we don't credit

    • We exclude self-citations: a voice's own website doesn't add to their PS or CR score.
    • We require multi-engine consistency: a single LLM citing the voice doesn't earn full credit on its own; ≥3 of 4 are required for any TA credit, and ≥2 of 4 for full CR credit.
    • We don't credit on-platform activity. CiteRank measures the rest of the world — not what happens on CiteGist itself. The platform cannot credit itself.

    How rankings work

    Voices are ranked within cohorts — niche × follower-tier — not against the entire platform. A nano-tier (under 25K followers) machine-learning voice is ranked against other nano-tier ML voices, not against an established educator with 500K followers.

    Cohorts publish leaderboards only when at least 25 voices are in them. This prevents "Top 5 — only 4 voices" rank-inflation.
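    Cohort-based ranking with the 25-voice publication threshold can be sketched as bucketing then filtering. The follower-tier boundary beyond "nano < 25K" is an assumption for illustration:

```python
# Sketch of cohort ranking: voices are bucketed by (niche, follower tier)
# and a leaderboard is published only when the bucket holds ≥25 voices.
from collections import defaultdict

MIN_COHORT = 25

def tier(followers):
    return "nano" if followers < 25_000 else "established"  # assumed split

def leaderboards(voices):
    """voices: list of (name, niche, followers, citerank_score)."""
    cohorts = defaultdict(list)
    for name, niche, followers, score in voices:
        cohorts[(niche, tier(followers))].append((name, score))
    return {
        key: sorted(members, key=lambda m: m[1], reverse=True)
        for key, members in cohorts.items()
        if len(members) >= MIN_COHORT  # prevents "Top 5 — only 4 voices"
    }
```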

    © 2026 CiteGist, Inc. All rights reserved.