
The Quality Signal (Q*) – Q-Star – Google Quality Score for Organic Search Results

Strategic SEO 2025 - Hobo - Ebook

Disclosure: I use generative AI when specifically writing about my own experiences, ideas, stories, concepts, tools, tool documentation or research. My tools of choice in this process are Google Gemini Pro 2.5 Deep Research and ChatGPT 5. This is not official advice from Google. All content was verified as correct by Shaun Anderson. See our AI policy. This is a preview of Chapter 3 from my new ebook – Strategic SEO 2025 – a PDF which is available to download for free here. This article was first published on 10 July 2025.

The trial also brought to light a previously secret internal metric known as Q* (pronounced “Q-star”), which functions as a measure of a website’s overall quality and trustworthiness.

This revelation is significant because it apparently confirms the existence of a site-level quality score, something Google representatives have publicly and repeatedly avoided confirming (in these exact terms) for years.

According to trial exhibits, Q* is “an internal metric that assesses the trustworthiness of a whole website (most often the domain)”. To avoid confusion, Q* is completely different from the well-known Quality Score in Google Ads.

A crucial characteristic of Q* is that it is largely static and query-independent.

If a website earns a high Q* score, it is considered a high-quality, reliable source across all related topics for which it might rank.

This explains why certain authoritative domains consistently appear in search results for a wide range of queries.

Like the T* system, Q* is described as being “deliberately engineered rather than machine-learned,” reinforcing the theme of human oversight in Google’s foundational ranking systems.

  • Quality Score (Q★ – Trustworthiness): Google assigns pages a general quality score (often called “Q-star” or Q* internally) that reflects their overall credibility and utility, independent of any specific query. HJ Kim noted that “Q* (page quality, i.e., the notion of trustworthiness) is incredibly important” in ranking (justice.gov). This quality score is “largely static” (it does not fluctuate per query) and “largely related to the site rather than the query” (justice.gov). Kim testified that “Quality score is hugely important even today. Page quality is something people complain about the most.” (justice.gov) He recounted that Google formed a Page Quality team ~17 years ago (which he led) when “content farms” flooded search results with low-quality pages (justice.gov). In response, Google developed methods to identify authoritative sources and demote the content-farm pages, improving the overall trustworthiness of top results (justice.gov). In short, Google tries to consistently reward pages that demonstrate experience, expertise, authority, and trust (E-E-A-T), and that reputation persists across queries. The slide contains the following text: Quality • Generally static across multiple queries and not connected to a specific query. • However, in some cases the Quality signal incorporates information from the query in addition to the static signal. For example, a site may have high-quality but general information, so a query interpreted as seeking very narrow/technical information may be used to direct to a quality site that is more technical.

A key input into the Q* score is a modern, evolved version of Google’s original breakthrough algorithm, PageRank.

Testimony revealed that PageRank is still an important signal, but its function is now framed as measuring the “distance from a known good source”.

The system uses a set of trusted “seed” sites for a given topic; pages that are closer to these authoritative seeds in the web’s link graph receive a stronger PageRank score, which in turn contributes to a higher Q*.
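The seed-based mechanism described above can be sketched as a personalised (seed-biased) PageRank, where the random surfer teleports only to trusted seed sites. The graph, seed choice, damping factor and iteration count below are illustrative assumptions, not Google's actual implementation:

```python
# Hypothetical sketch of seed-biased PageRank: rank flows outward from
# trusted "seed" sites, so pages closer to the seeds in the link graph
# accumulate more rank. All values here are illustrative assumptions.

DAMPING = 0.85  # standard PageRank damping factor

def seeded_pagerank(links, seeds, iterations=50):
    """Power iteration where teleportation lands only on trusted seeds."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - DAMPING) * teleport[n] for n in nodes}
        for n in nodes:
            out = links[n]
            if not out:
                continue
            share = DAMPING * rank[n] / len(out)
            for target in out:
                new[target] += share
        rank = new
    return rank

# Toy web graph: "seed" is a trusted hub, "near" links to it directly,
# "far" is two hops from the seed.
graph = {
    "seed": ["near"],
    "near": ["far", "seed"],
    "far":  ["near"],
}
scores = seeded_pagerank(graph, seeds={"seed"})
# Pages closer to the trusted seed end up with higher rank than distant ones.
```

The design choice worth noting is the teleport vector: in classic PageRank the surfer jumps to any page uniformly, whereas here jumps go only to the seed set, which is what makes rank a proxy for "distance from a known good source".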

The confirmation of a domain-level authority score like Q* stands in stark contradiction to years of public communications from Google.

“QRank”, Quality Scores & Authority Signals (2010s)

“Even the most fascinating content, if tied to an anonymous profile, simply won’t be seen because of its excessively low rank.” Attributed to Eric Schmidt, ex-Google, 2014.

In response, Google developed internal Page Quality metrics – sometimes referenced as “QScore” or “QRank” – to judge the overall authority, expertise, and trustworthiness of a page or site.

Google’s Hyung-Jin Kim (VP of Search) described this as a “page quality (i.e., the notion of trustworthiness)” score, often denoted internally as Q* (“Q-star”).

He noted in testimony that “Q* is incredibly important” and that Google formed a dedicated “Page Quality” team ~17 years ago when low-quality content farms were proliferating (justice.gov).

The idea behind Q* is to algorithmically assess factors like a site’s reputation, authority, and compliance with quality guidelines, independent of any specific query.

Kim explained that this quality signal is “generally static across multiple queries and not connected to a specific query”, meaning that if a site is deemed high-quality and reliable, that status boosts its rankings for all relevant searches (justice.gov).

(However, query context can be factored in at times – for example, even a generally high-quality site might be outranked by a more expert site for a very niche technical query (justice.gov).)

Crucially, Google’s modern quality score integrates PageRank as one input.

Kim confirmed that “PageRank…is used as an input to the Quality score” (justice.gov). In other words, a page’s base PageRank (its link-based importance) contributes to its overall “authority” score Q*, alongside other factors (possibly site reputation, expert reviews, etc.).

The Quality score thus acts as an aggregate authority metric – sometimes called an “authority score” – that can boost or dampen a page’s search rankings.

Pages with strong Q scores (earned via trusted backlinks, original content, good user signals, etc.) are systematically favoured.

This became especially important after Google’s 2011 “Panda” update, which targeted shallow content. Kim alluded to this, noting the team had started to tackle content farms that “paid students 50 cents per article”, flooding Google with thin pages (justice.gov).

The solution was to algorithmically identify “the authoritative source” for a given topic and reward it (justice.gov).

In effect, Google began demoting pages that had decent link popularity but poor overall quality, and promoting those with true authority. Kim emphasised that “Quality score is hugely important even today. Page quality is something people complain about the most.” (justice.gov)

“We figured that site is trying to game our systems… So we will adjust the rank. We will push the site back just to make sure that it’s not working anymore.” Gary Illyes, Google 2016

Indeed, with the rise of generative AI content, Google’s reliance on such quality signals has only grown (“nowadays, people still complain about [quality] and AI makes it worse”, he noted; justice.gov).

How Q* works internally: Google treats the quality score as a mostly query-independent ranking factor attached to pages or sites.

Q* is “largely static and largely related to the site rather than the query” (justice.gov) – essentially a measure of a site’s authoritative strength.

At query time, this quality score is combined with the query-dependent relevance score. While Google hasn’t publicly detailed the formula, one can think of the ranking system as first evaluating relevance (does the page match the keywords/intents?) and then adjusting results based on authority/quality.
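That two-stage mental model can be made concrete with a toy scoring function. The weighted-sum form and the 50/50 weighting below are invented for illustration; Google has not published its formula:

```python
# Purely illustrative sketch: blend a query-dependent relevance score with a
# static, query-independent site quality score. The linear form and the
# weighting are assumptions, not Google's actual ranking formula.

def final_score(relevance: float, site_quality: float,
                quality_weight: float = 0.5) -> float:
    """Combine per-query relevance (0-1) with static site quality (0-1)."""
    return (1 - quality_weight) * relevance + quality_weight * site_quality

# A highly relevant page on a low-quality site can be outranked by a
# slightly less relevant page on a trusted site.
weak_site = final_score(relevance=0.9, site_quality=0.2)    # 0.55
strong_site = final_score(relevance=0.7, site_quality=0.9)  # 0.80
```

The point of the sketch is the asymmetry it produces: because quality is static, a low-quality site pays the same penalty on every query it is relevant for.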

A high Q* can significantly boost a page’s position, while a low Q* score can sink an otherwise relevant page.

In practice, Google’s regular updates and ranking tweaks often boil down to recalibrating this “authority” component.

Notably, many signals feed into Q*: PageRank and link signals (for authority), content assessments (for expertise), TrustRank-like signals (for trustworthiness), and even user engagement data.

For example, internal documents indicate Google also uses a “popularity signal that uses Chrome data” (likely aggregated Chrome usage statistics) as well as click feedback loops like NavBoost (justice.gov; stradiji.com).

(NavBoost, described by Google’s Dr. Eric Lehman, is essentially a big table counting how often users click on a result for a given query over the past year (stradiji.com) – a way to boost pages that searchers consistently prefer.)
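The "big table" description of NavBoost can be sketched as a counter keyed by (query, result) pairs. The data structure and the click-share boost formula below are assumptions for illustration, not the actual system:

```python
# Illustrative sketch of NavBoost as described in testimony: a table counting
# clicks per (query, url) pair over a trailing window. The boost formula
# (click share per query) is an invented stand-in for illustration.

from collections import defaultdict

click_table = defaultdict(int)  # (query, url) -> clicks over ~1 year

def record_click(query: str, url: str) -> None:
    click_table[(query, url)] += 1

def navboost(query: str, url: str) -> float:
    """Share of this query's recorded clicks that went to this URL."""
    total = sum(c for (q, _), c in click_table.items() if q == query)
    return click_table[(query, url)] / total if total else 0.0

# Searchers consistently prefer result A over result B for this query.
for _ in range(8):
    record_click("seo tools", "https://example.com/a")
for _ in range(2):
    record_click("seo tools", "https://example.com/b")
boost_a = navboost("seo tools", "https://example.com/a")  # 0.8
```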

These additional signals are beyond PageRank, but they complement the goal of Q*: to measure overall quality and user satisfaction.

PageRank itself, once the star of Google’s algorithm, now works behind the scenes as one signal feeding into these broader quality and ranking frameworks.

Deconstructing Site Quality via Exploit

Mark Williams-Cook, of Candour, did some exceptional work in this area, and I was lucky enough to chat with him about it at the time – “The endpoint exploit we found literally had a metric called ‘site_quality’, which at minimum determined if you got some kind of rich results”.

Insights from a fascinating talk on “Conceptual Models of SEO” reveal a deeper, more nuanced layer to how Google evaluates websites, moving far beyond traditional metrics like keyword density or backlink counts.

Based on data allegedly retrieved from a Google exploit, the speaker outlines a compelling case for a master metric: a “Site Quality Score.”

This score appears to function as a foundational assessment of a site’s authority, directly impacting its ranking potential and eligibility for prominent search features.

A Foundational Ranking Gate

The core of the discovery is a “Site Quality Score” that Google allegedly calculates for every single website, scored on a scale from 0 to 1 at the subdomain level.

This isn’t just another data point; it acts as a critical qualifier.

The speaker revealed a specific threshold: sites with a quality score below 0.4 were found to be ineligible for Rich Results like Featured Snippets or “People Also Ask” boxes.

This implies that no amount of on-page optimisation for these features will succeed if a site hasn’t first passed this fundamental quality check.

It’s a qualifying heat you must pass before you can even compete.
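The gate described above is simple enough to sketch directly. The 0.4 threshold comes from the talk; the function and variable names are illustrative assumptions:

```python
# Hypothetical sketch of the rich-result eligibility gate: a site scored
# below the threshold is ineligible for rich results regardless of on-page
# optimisation. The 0.4 value is from the talk; the rest is illustrative.

RICH_RESULT_THRESHOLD = 0.4  # site quality is scored 0-1 at subdomain level

def rich_result_eligible(site_quality: float) -> bool:
    """Gate check: pass this first, then on-page optimisation matters."""
    return site_quality >= RICH_RESULT_THRESHOLD

blocked = rich_result_eligible(0.39)  # no amount of markup helps here
allowed = rich_result_eligible(0.75)  # eligible to compete for rich results
```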

Measuring Trust: How Site Quality is Calculated

So, what constitutes this all-important score? According to Mark, who, like me, likes to reference Google patents, the calculation depends on whether Google has sufficient user data for a site.

  1. For Established Sites: The score is calculated based on signals that measure a site’s real-world brand authority. Google looks at how many times users specifically search for your brand or domain name, how often they select your site in the search results even when it isn’t ranked number one, and how often your brand name appears in anchor text across the web. In essence, Google is measuring your reputation and the trust users place in you.
  2. For New or Obscure Sites: When user data is scarce, Google uses a predictive model. It analyses the content on your pages to create a “phrase model”—a numerical representation or “shape” of your website. It then compares this profile to the profiles of sites for which it has already established quality scores. It predicts how good your site is likely to be based on its resemblance to known, trusted entities.
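The second case – predicting quality for sites without user data – can be sketched as a nearest-neighbour comparison of content vectors. The bag-of-words vectorisation and the 1-nearest-neighbour prediction below are my assumptions for illustration; the real "phrase model" is not public:

```python
# Illustrative sketch of the "phrase model" idea: represent a site's content
# as a vector, then predict its quality from the quality of the most similar
# site with a known score. Vectorisation method and 1-NN are assumptions.

import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Crude bag-of-words stand-in for the 'numerical shape' of a site."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def predict_quality(new_site_text: str, known_sites: dict) -> float:
    """Score a new site by the quality of its most similar known site."""
    best_sim, best_quality = 0.0, 0.0
    for text, quality in known_sites.values():
        sim = cosine(vectorise(new_site_text), vectorise(text))
        if sim > best_sim:
            best_sim, best_quality = sim, quality
    return best_quality

known = {
    "trusted-medical": ("peer reviewed clinical evidence treatment guidance", 0.9),
    "thin-content":    ("click here best cheap deal buy now", 0.2),
}
predicted = predict_quality("clinical treatment guidance and evidence", known)
```

This sketch also makes the AI loophole discussed below easy to see: any text that statistically resembles the trusted profile inherits its quality prediction, whether or not the site deserves it.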

The AI Dilemma and the Helpful Content Correction

This predictive model had a significant vulnerability, which the speaker argues led to a massive influx of low-quality, AI-generated content ranking highly in 2022.

Because Large Language Models (LLMs) are trained on vast amounts of high-quality text, the content they produce naturally mimics the “numerical shape” of a good site, effectively tricking the predictive site quality model. Brand new sites could publish thousands of AI-generated pages and be initially judged as high-quality, leading to a surge in traffic.

Google’s fix, according to this model, was the Helpful Content Update.

This update was a system-wide correction designed to close the loophole.

It heavily penalised sites that exhibited high traditional authority metrics (like a large backlink profile) but had very low brand authority signals.

The update was a clear signal that Google was doubling down on genuine, established authority, making it immensely difficult for unknown sites to rank for competitive topics, regardless of their content’s superficial quality.

Mark reinforces this with a quote from former Google CEO Eric Schmidt that I’ve used in many SEO books now: “Brands are the solution, not the problem. Brands are how you sort out the cesspool.”

Other Key Concepts from Mark’s presentation:

  • Query Intent Classifiers: Google attaches labels to queries. The talk revealed classifiers like isDebunkingQuery (e.g., “is the earth flat”), medicalClassifier, and newsScore. The type of query dictates the type of results Google wants to show.
  • The Eight Semantic Classes: The talk unveiled eight “Refined Query Semantic Classes” that seem to cover almost all queries, with the largest being “short fact or bool” (a question with a yes/no or simple factual answer). This is a critical insight, Mark predicts these are the queries most likely to be lost to AI Overviews – and I agree 100%.
  • Content and Consensus: Google was found to generate a “consensus score” by counting the number of passages in content that agree with, contradict, or are neutral to the prevailing view. For a debunking query, only high-consensus content will rank. For a political query, Google may intentionally seek a mix of consensus and non-consensus results to provide balance. This means your content might be perfect, but if it doesn’t fit the specific “recipe” of results Google wants for that query type, it won’t rank.
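The consensus-scoring idea in the last bullet can be sketched as a simple ratio over passage stances. In reality a classifier would label each passage; here the labels are precomputed inputs, and the scoring formula is an invented stand-in:

```python
# Minimal sketch of a "consensus score": count passages that agree with,
# contradict, or are neutral to the prevailing view. The ratio formula and
# the stance labels are illustrative assumptions.

def consensus_score(passage_stances: list) -> float:
    """Return agreement in [-1, 1]: +1 all agree, -1 all contradict."""
    agree = passage_stances.count("agree")
    contradict = passage_stances.count("contradict")
    scored = agree + contradict  # neutral passages don't move the score
    return (agree - contradict) / scored if scored else 0.0

# A debunking article: most passages agree with the scientific consensus,
# so it scores highly and is eligible to rank for a debunking query.
debunking = ["agree", "agree", "agree", "neutral", "contradict"]
score = consensus_score(debunking)  # (3 - 1) / 4 = 0.5
```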

For myself, Mark’s work brings a long journey to a satisfying conclusion in this area.

Based on the leaked Content Warehouse API documentation, the following attributes are explicitly identified as being applied in Q-star (Q*).

Explicit “Applied in Q Star” Attributes

These attributes contain direct references in their descriptions stating they are used by the Q* system.

  • lowQuality: S2V low quality score converted from quality_nsr.NsrData. The documentation explicitly states: “applied in Qstar.”

  • siteAuthority: A site-wide authority signal converted from quality_nsr.SiteAuthority. The documentation explicitly states: “applied in Qstar.”

  • serpDemotion: A signal related to search engine results page (SERP) demotion. The documentation explicitly states: “applied in Qstar.”

  • scamness: A score from the scam model (0-1023). The documentation states: “Used as one of the web page quality qstar signals.”

  • unauthoritativeScore: A score indicating a lack of authority. The documentation states: “Used as one of the web page quality qstar signals.”

  • nsrOverrideBid: (Deprecated) An NSR override bid used for emergency overrides. The documentation states: “used in Q* for emergency overrides.”

Experimental Q Star Attributes

These are specific fields designed for testing new components within the Q* system (likely for “Live Experiments” or LEs).

Inferred/Contextual Q Star Attributes

While not explicitly tagged with “applied in Qstar” in this specific text snippet, the following are part of the same CompressedQualitySignals message and represent the core quality signals (NSR, Panda, etc.) that Q* typically aggregates or acts upon:

  • nsrVersionedData / nsrConfidence / vlqNsr (NSR – Site Quality)

  • pandaDemotion / babyPandaDemotion (Panda – Content Quality)

  • exactMatchDomainDemotion (EMD – Domain Quality)

  • productReviewPPromote... / productReviewPDemote... (Product Review Quality)

The documentation suggests Q* is primarily an aggregator of negative quality signals (demotions, scamness, low quality, unauthoritativeness) and site-wide authority signals, used to adjust preliminary scoring.
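That aggregation pattern can be sketched as a simple adjustment function. The field names mirror the leaked attributes listed above; the weights, the linear formula, and the 0-1023 normalisation treatment are invented for illustration:

```python
# Hedged sketch of Q* as an aggregator that adjusts a preliminary score with
# mostly negative (demotion) signals plus site-wide authority. Field names
# mirror the leaked attributes; the formula and weights are assumptions.

def apply_qstar(preliminary: float, signals: dict) -> float:
    adjusted = preliminary
    adjusted -= signals.get("lowQuality", 0.0)            # S2V low-quality demotion
    adjusted -= signals.get("serpDemotion", 0.0)          # SERP-level demotion
    adjusted -= signals.get("scamness", 0) / 1023.0       # scam model, 0-1023 scale
    adjusted -= signals.get("unauthoritativeScore", 0.0)  # lack-of-authority demotion
    adjusted += signals.get("siteAuthority", 0.0)         # site-wide authority boost
    return max(0.0, adjusted)

demoted = apply_qstar(0.7, {"scamness": 512, "lowQuality": 0.1})
trusted = apply_qstar(0.7, {"siteAuthority": 0.2})
```

Note the shape of the sketch matches the documentation's framing: most inputs can only pull a score down, with site authority as the main positive contribution.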

Conclusion: Brand is the New Bottom Line

The ultimate takeaway from the video, the Google leak and trial testimony is that in the modern SEO landscape, “site quality” is largely synonymous with Trust and “brand authority.”

The days of outranking authoritative sites with clever technical SEO alone are dwindling.

The provided example of a rehabilitation-focused client outranking the NHS and other established entities, despite having a fraction of the backlinks, illustrates this perfectly.

Their success was attributed to a higher site quality score, earned through signals that proved their authority and trustworthiness in a critical “Your Money Your Life” (YMYL) category.

Ultimately, building a high-quality site in Google’s eyes means building a real brand that users seek out, trust, and mention.

This conceptual model suggests that long-term SEO success is now intrinsically linked to genuine brand-building efforts that resonate with real people, not just algorithms.

Shaun Anderson AKA Hobo SEO 2026