# How CiteRank works

_Methodology v5 · Phase 1 active · Last updated Apr 30, 2026_

CiteRank measures how visible a voice is across AI systems and the open web. It's externally grounded — every component checks signals the rest of the world can verify. We don't credit the platform itself.

## What goes into the score

| Dimension | Weight | What it measures | How fast it moves |
|---|---:|---|---|
| Peer Standing (PS) | 30 | DR-weighted citations and mentions across the open web | Slow (months) |
| Citation Rate (CR) | 25 | Named citations in answers from ChatGPT, Claude, Gemini, and Perplexity | Volatile (weeks) |
| Topical Authority (TA) | 30 | Whether AI answers name this voice when asked about their declared topics | Volatile (weeks) |
| Content Velocity (CV) | 15 | Posting cadence across owned channels (Substack, podcast, YouTube, etc.) | Direct lever |

> Phase 1 ships with these four dimensions. Query Coverage — a fifth dimension in the original v5 spec — is deferred to Phase 1.5 as an Authority-tier Content Gap Discovery tool. Until it ships, Topical Authority's cap is raised to 30 (absorbing QC's 10 weight points) so the score still totals 100.

## Each dimension in detail

### Peer Standing (PS) · 0–30

**What it captures:** branded mentions and citations across the open web, weighted by source quality.

**How it's computed:** DataForSEO branded-mentions search → de-dup → exclude domains the voice owns (see "Self-citation exclusion" below) → tier-weight against an internal `quality_sources` table (DR 1–10) → normalize 0–30.
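
The pipeline above can be sketched in a few lines. This is an illustrative sketch, not the production code: the mention-record shape, the `dr_by_domain` table, and the `max_raw` normalization constant are all assumptions.

```python
def peer_standing(mentions, owned_domains, dr_by_domain, max_raw=100.0):
    """Score branded mentions 0-30, tier-weighted by Domain Rating (1-10).

    mentions: iterable of (url, domain) pairs from the branded-mentions search.
    """
    seen, raw = set(), 0.0
    for url, domain in mentions:
        if domain in owned_domains:          # self-citation exclusion
            continue
        if url in seen:                      # de-dup on URL
            continue
        seen.add(url)
        raw += dr_by_domain.get(domain, 1)   # tier weight, default to lowest DR
    return min(raw / max_raw, 1.0) * 30      # normalize to 0-30
```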

**Self-citation exclusion:** A voice's own website doesn't count. If they list `oneusefulthing.org` as a controlled domain, mentions on `oneusefulthing.org` are filtered before scoring.

**Why it's slow:** PS reflects the cumulative weight of where a voice has been published or referenced. It changes over months, not days.

### Citation Rate (CR) · 0–25

**What it captures:** how often AI assistants cite this voice's owned content when asked about their topics.

**How it's computed:** Direct queries about the voice on each declared topic ("Tell me about {voice} in the context of {topic}", "What are {voice}'s key contributions to {topic}?") → run against ChatGPT, Claude, Gemini, Perplexity → check if any annotation URL matches the voice's controlled domains → CR = cited probes / total probes × 25.

**Anti-gaming guard:** if only one of four LLMs cites the voice, the score is halved (cross-platform consistency check).
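
Put together, CR with its single-engine halving guard looks roughly like this. The `{llm: [bool per probe]}` result shape is an assumption for illustration, not the production data model.

```python
def citation_rate(results):
    """CR = cited probes / total probes * 25, halved if only one LLM cites.

    results: {llm_name: [True/False per probe]} across the four engines.
    """
    cited = sum(sum(flags) for flags in results.values())
    total = sum(len(flags) for flags in results.values())
    if total == 0:
        return 0.0
    score = cited / total * 25
    engines_citing = sum(1 for flags in results.values() if any(flags))
    if engines_citing == 1:              # anti-gaming: single-engine halving
        score /= 2
    return score
```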

**Why it's volatile:** AI citations shift week to week as model providers re-train and re-rank.

### Topical Authority (TA) · 0–30

**What it captures:** whether the voice's name shows up when AI is asked about their topic, without being asked about the voice directly.

**How it's computed:** Associative queries on each declared topic ("Who are the leading voices on {topic}?") → run against the same four LLMs → 3-state classification on the response: `unprompted` (full credit, name appears in narrative), `on_list` (half credit, name in an enumerated list), `absent` (no credit) → normalize 0–30.

**Anti-gaming guard:** if fewer than three of four LLMs mention the voice, the score is zeroed (cross-platform consistency check).
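
A minimal sketch of the 3-state classification and the zeroing guard, assuming the same `{llm: [state per probe]}` shape as above (an illustrative assumption):

```python
CREDIT = {"unprompted": 1.0, "on_list": 0.5, "absent": 0.0}

def topical_authority(classified):
    """classified: {llm_name: [state per probe]}. Returns a 0-30 score."""
    states = [s for probes in classified.values() for s in probes]
    if not states:
        return 0.0
    engines_mentioning = sum(
        1 for probes in classified.values()
        if any(s != "absent" for s in probes)
    )
    if engines_mentioning < 3:           # anti-gaming: zero below 3 of 4
        return 0.0
    mean_credit = sum(CREDIT[s] for s in states) / len(states)
    return mean_credit * 30              # normalize to 0-30
```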

**Why TA can be 0 even with high CR:** CR asks "does AI cite this voice when answering questions ABOUT them"; TA asks "does AI surface this voice when discussing the topic at all." A voice can be widely cited (high CR) but not yet associated with the topic (TA=0).

### Content Velocity (CV) · 0–15

**What it captures:** how active the voice is on owned channels (publishing cadence).

**How it's computed:** parse RSS feeds (Substack, podcast) and channel APIs (YouTube, Twitter, TikTok, Instagram, LinkedIn) → assemble all post dates → score `recency` (0–6, by how recently the last post appeared: within 7, 14, or 30 days, or longer) + `frequency` (0–5, posts per week over the last 90 days) + `consistency` (0–4, months with activity).
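
The three sub-scores can be sketched as follows. The spec above gives the ranges but not every cut-off, so the exact bucket values here are assumptions:

```python
from datetime import date, timedelta

def content_velocity(post_dates, today):
    """Recency (0-6) + frequency (0-5) + consistency (0-4); illustrative buckets."""
    if not post_dates:
        return 0
    days_since = (today - max(post_dates)).days
    if days_since <= 7:                  # bucket values are assumed
        recency = 6
    elif days_since <= 14:
        recency = 4
    elif days_since <= 30:
        recency = 2
    else:
        recency = 0
    window = [d for d in post_dates if (today - d).days <= 90]
    per_week = len(window) / (90 / 7)    # posts per week over last 90 days
    frequency = min(int(per_week), 5)
    months = {(d.year, d.month) for d in window}
    consistency = min(len(months), 4)
    return recency + frequency + consistency
```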

**Why it's a direct lever:** unlike PS/CR/TA, the voice can move CV by posting more.

## How we handle run-to-run variance

AI responses vary between runs. The same query asked twice can return different lists. CiteRank's CR and TA dimensions are sampled across multiple runs and aggregated as a rolling 4-week median, with a confidence band reported alongside each score.
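
The aggregation can be sketched with the standard library. Treating the band as the sample min/max is an illustrative choice; the production band may be computed differently.

```python
from statistics import median

def rolling_score(weekly_samples, window=4):
    """Median of the last `window` weekly samples, plus a (low, high) band
    showing run-to-run spread."""
    recent = weekly_samples[-window:]
    return median(recent), (min(recent), max(recent))
```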

A wide confidence band means the AI's answer for this voice is unstable — not necessarily that the voice is invisible. Solo Authority and Team tier runs sample more aggressively to tighten the band.

## AI-Recognized — a separate monthly check

Once a month, we ask each LLM "Tell me about {voice} in the context of {topic}." A Claude Haiku judge verifies whether the response accurately describes the voice (matches their LinkedIn role, primary domain, and recent works). If 3 of 4 LLMs return accurate descriptions, the voice earns the **AI-Recognized** badge.

AI-Recognized doesn't feed the composite score: it's a binary milestone, separate from CiteRank, designed to give early-stage voices an achievable goal beyond raw score climbing.
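
The badge rule itself is simple. The `{llm: bool}` verdict shape is an assumption for illustration:

```python
def ai_recognized(judge_verdicts):
    """judge_verdicts: {llm_name: True/False} -- did the judge deem the
    description accurate? The badge requires 3 of 4 accurate descriptions."""
    return sum(judge_verdicts.values()) >= 3
```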

## What we don't credit

- We exclude self-citations: a voice's own website doesn't add to their PS or CR score.
- We require multi-engine consistency: a citation from a single LLM carries reduced weight on its own; full credit requires ≥3 of 4 engines for TA and ≥2 of 4 for CR.
- We don't credit on-platform activity. CiteRank measures the rest of the world — not what happens on CiteGist itself. The platform cannot credit itself.

## How rankings work

Voices are ranked within cohorts — niche × follower-tier — not against the entire platform. A nano-tier (under 25K followers) machine-learning voice is ranked against other nano-tier ML voices, not against an established educator with 500K followers.

Cohorts publish leaderboards only when at least 25 voices are in them. This prevents "Top 5 — only 4 voices" rank-inflation.
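
The cohort grouping and minimum-size gate described above can be sketched as follows; the voice-tuple shape is an assumption:

```python
from collections import defaultdict

def cohort_leaderboards(voices, min_size=25):
    """voices: list of (name, niche, tier, score). Group by niche x tier and
    publish only cohorts with at least `min_size` voices, ranked by score."""
    cohorts = defaultdict(list)
    for name, niche, tier, score in voices:
        cohorts[(niche, tier)].append((name, score))
    return {
        key: sorted(members, key=lambda m: m[1], reverse=True)
        for key, members in cohorts.items()
        if len(members) >= min_size      # suppress undersized leaderboards
    }
```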

---

_See your CiteRank: https://citegist.com/dashboard?tab=score_
