
Reading Google's Playbook: What the Quality Rater Guidelines Actually Say About Authority

Season 1, Episode 04 | Conversations with Vibe & Logic | April 20, 2026

Google has a 200-page document explaining how it wants websites evaluated. It pays real people to score sites against it, yet most marketers have never heard of it. In this episode, Trevor Stolber walks through what's in the Quality Rater Guidelines, breaks down E-E-A-T and YMYL, and explains what Google measures when deciding if your brand is authoritative. If you've wondered why some sites consistently outrank you even when your content seems better, this episode explains the framework Google uses to make that call.

Watch the Episode


What We Cover

  • 00:04 What the Quality Rater Guidelines actually are and who uses them

  • 01:38 How quality raters influence the algorithm without directly changing rankings

  • 04:53 E-E-A-T explained: what each letter means and how Google evaluates it

  • 07:40 How the PAIS framework maps to E-E-A-T

  • 10:10 YMYL: Your Money or Your Life, and why those pages are held to a higher bar

  • 12:35 Common misconceptions about E-E-A-T, including whether an E score exists

  • 14:52 Building authoritativeness: links, citations, and knowledge graph alignment

  • 16:36 Deep and narrow vs wide and shallow: how to think about content strategy

  • 19:46 SEO vs AEO vs GEO: is there actually a difference?

  • 21:04 Brand mentions as the new links

  • 22:58 What to do on Monday morning


Full Transcript

[00:00:04] Jessica: Google has a 200-page document that explains exactly how it wants websites to be evaluated. It pays real people to evaluate websites against it, and most marketers have never heard of it. Today, Trevor's going to walk us through what it actually says and what it means for how you show up online. Trevor, we're going to talk about the Google Quality Rater Guidelines today. Before we get into what's in them, can you explain what this document actually is, who wrote it, who uses it, and why it exists?


[00:00:37] Trevor: Yeah, absolutely. A lot of people are surprised to hear that this document exists and that quality raters are even a thing. I think everybody assumes Google is just a pure algorithm, an army of servers, and that's it. But there really is an army of people who use these Quality Rater Guidelines. There are roughly 16,000 of them, although that number has changed over time, it was up to 25,000 at one point and they've reduced it, and you can see where that trend is going. So they have these quality raters, and they use this document called the Quality Rater Guidelines, which is actually published. You can go online and search for it. It's a massive PDF with a ton of really useful information in there. Few people outside the SEO industry, and even many within it, actually read it or use it. But it's very, very useful. One important distinction: the quality raters and the scoring they give, the tests they do, don't actually change the rankings directly. Google uses machine learning and the feedback from these quality raters to test out subtle perturbations in the algorithm. The quality raters will rate a set of results produced by a subtle algorithm change, Google will take that feedback to judge whether the change was good or not, and then decide which ones to roll out publicly. More than anything, it gives you an insight into how Google thinks. It's literally a document telling you how they think about quality, and quality is very important, so we should pay attention to it.


[00:02:41] Trevor: There was also a recent Google endpoint leak. Mark Williams-Cook was doing some work for his Also Asked tool, by the way, great tool, and he discovered a pretty severe endpoint leak. He actually received the highest Google bug bounty for it, which he donated to charity, which I think is awesome. What it uncovered is that there actually is a quality score, a domain quality score, for every single site Google has. This is something the SEO industry has conceptualized for years, and this endpoint leak really did validate that it's there. It exposed a whole bunch of things, and what Google is now trying to do is essentially post-search delivery QA on the user side: how were the results perceived, how were they used, and what quality were they. They're morphing away from manual human quality raters and trying to do it all with AI and machine learning.


[00:03:52] Jessica: So just to sum up: the raters themselves are not the algorithm, they're more like the algorithm's training data. Is that the right way to think about it?


[00:04:04] Trevor: Yeah, absolutely. And I just say: if it moves the needle, why do you care? The things that quality raters determine, and the positive changes that align with the Quality Rater Guidelines, find their way into the algorithm. So anything you can do on your site that's aligned to what's in those guidelines will ultimately help you algorithmically. It's not direct, you can't one-for-one quantify it. But we're in the business of moving the needle. What's in the Quality Rater Guidelines is ultimately what they're trying to engineer the algorithm toward.


[00:04:53] Jessica: There's an acronym I hear you and Stephan mention pretty much every call, which is E-E-A-T. Can you walk us through what each of those letters actually means, not just the definitions, but how people should be thinking about it, and what Google actually looks for when it evaluates each one?


[00:05:19] Trevor: Yeah. This was originally just EAT, and a lot of people still refer to it that way. Google added the additional E for experience a couple of years ago. So it's experience, expertise, authoritativeness, and trustworthiness, and if that isn't a mouthful, what is. This is Google's concept, and again it just tells you how they think. The reason we're talking about it is because this concept is mentioned extensively in the Quality Rater Guidelines. They're trying to determine algorithmically how you validate experience, expertise, authoritativeness, and trustworthiness. Experience is fairly clear in concept, but how do you algorithmically determine that? How can you find the information that proves someone has experience? Expertise, again, can be displayed in lots of different ways across different industries and niches. Is it someone who has formal qualifications, or have they just had a long career, demonstrated expertise, and been cited in a bunch of places? Authoritativeness is similar: what's the body of evidence that shows you're an authority on a topic? You could have a lot of experience and expertise in a subject, but if nobody has given you credit for it, you technically don't have that much authoritativeness. Authoritativeness is really who else has said you do this well. If someone mentions a topic, what are the first few names that come up? That demonstrates authoritativeness. Trustworthiness is probably the most difficult thing to assess algorithmically. How clearly have you demonstrated what you claim? How aligned are the things you say with the credentials you have? It's very difficult to actually ascertain trustworthiness from content and citations. Citations sort of suggest you have a degree of trust, but it's very difficult to algorithmically detect and understand that.

[00:07:40] Jessica: Your PAIS framework, does that in any way map to E-E-A-T or is it in a different realm?


[00:07:54] Trevor: It is a little bit different, but to a degree yes. If you take PAIS, which is a content framework: what problem does someone have, what application are they trying to accomplish, and someone with a qualified position can credibly say, here's how you solve this problem. That's E-E-A-T. You need those criteria in order for someone to solve a problem credibly. One of the best descriptions of why EAT came out was this: back in the day, if you had good SEO and the right links, someone in theory could rank for "carrots cure cancer," which they obviously don't. The historical, traditional algorithmic way SEO worked meant somebody could rank for that term with all the right SEO metrics and links. But now they can't, because there's no credible E-E-A-T. There's no credible evidence, no authoritative people who have been cited, no proof that it's actually true. One thing I do dislike a little about the concept is that it doesn't give much room for new information. It appears harder for new science and new information to get credence, because by its very nature it hasn't been cited yet. But E-E-A-T just attempts to deal with the worst-case version of that problem.


[00:10:10] Jessica: So while we're on the subject of acronyms, because we love them, there's a category of content in the guidelines called YMYL. Can you explain what that is and why those pages get evaluated more strictly?


[00:10:20] Trevor: Yeah, absolutely. More acronym soup in this industry, kind of like the financial industry, they just introduce terms to confuse people and make SEOs look smart. But hopefully we are. YMYL is Your Money or Your Life. This is actually why EAT was originally introduced: to stop what I mentioned about carrots curing cancer. Someone can't just say that without any E-E-A-T criteria. The reason it was applied to Your Money or Your Life is that anything that affects your money or your life, anything that can affect your wellbeing, should have a higher bar and some extra controls on the results being served. An interesting thing: a lot of people treat Google as a truth engine. Google was at pains to point out they're not a truth engine. If you search for something and it exists, they will find it because that's their job. But with anything that falls under Your Money or Your Life, medical, legal, financial decisions, stuff that's going to have a real impact on your life, there's a higher bar that you need to meet and E-E-A-T is one of the ways they do it. Now these are good practices to do anyway. It was originally just for a select few industries, but the breadth of the impact of E-E-A-T is broader now, and really there's no site that doesn't need to pay attention to it to some degree. Medical and financial, much higher. But it's something we generally consider with most sites.


[00:12:35] Trevor: It's definitely difficult to fake E-E-A-T, and that's the point. One of the biggest misconceptions is that people think there might be an E score. There is a site-wide quality score, but Google has come out and said there is no E score. It's not like your E is an eight and you can make it a ten. This is why we pay attention to the Quality Rater Guidelines: so we're aligning the content and structuring the site in a way that supports what's in those guidelines. There are things you can do, like adding schema, which doesn't necessarily give you E-E-A-T directly, but it is one of the things you can do. The right pieces of schema, the right citation of data, the right referencing, the right structure, the right links can all help. You really can't fake it, and that's sort of the point. The Quality Rater Guidelines are a window, like Google Search Console is a window into Google. They spell out how to be a quality site, and what that even means, with specific examples. One thing you can do with the Quality Rater Guidelines is search them for your primary keyword, your topic, your niche, or your industry. You will find examples specific to your industry. It's not just one algorithm. It's a bunch of subject-specific algorithms, all aligned to different knowledge graphs, and the things that are important vary. You need to be dealing with the niche-specific algorithm and paying attention to what the Quality Rater Guidelines say that aligns to your niche and knowledge graph.


[00:14:52] Jessica: I want to go a little deeper on authoritativeness because I think that's the one that's hardest to operationalize. When Google says a site or a brand is authoritative, what is it actually measuring?


[00:15:05] Trevor: You know, if you ask an LLM, it's whoever published the most listicles with themselves at the top, which exposes a little bit of a problem with LLMs. But seriously: it's about who links to you, who's saying you're an authority, who's citing you. Then there's knowledge graph alignment: when terms are searched for, if you align with that knowledge graph and you've shown expertise in it, that's going to help a lot. John Mueller from Google has said it's hard to consider a site an authority on a topic if it has produced fewer than 20 articles on it. When we tell people this, we don't say that means you need to go make 20 articles right now. But it gives you an idea that they're looking for people who go deep on a topic. You need to show that you have domain expertise, that you've really written about it. If you just publish a couple of blog posts on a topic, what does that really say about you? It doesn't say you're an authority. When you do go deep, and this is how we align content and structure sites to help meet these criteria, Google starts to see you as a topical authority. And when they can associate the content with a person who has those E-E-A-T credentials, that really helps the content perform. Being mentioned in authoritative places, being cited in industry publications, or even just being mentioned alongside them, is really, really useful.


[00:16:36] Jessica: That makes me think of when we've worked with clients on what to prioritize in their content: whether to go deep on one topic or wide across multiple topics. Can you talk a little more about that?


[00:16:53] Trevor: Yeah, and the question of deep and narrow or wide and shallow is somewhat a question of bandwidth and somewhat a question of positioning. You can't be everything to everyone, and we've said this before: you don't deserve to rank for everything, but you should be part of the conversation, and that's what Web Presence Intelligence helps with. Generally it's better to go deep and narrow. Prove you're the expert and authority on one subject. But you need to know which subjects, which topics to produce content on, and how they align with the demand on the search side. How many people are searching for these things and what are they looking for? And not just that, how does it match to their stage of the buyer journey? That's one thing the PAIS framework works with: it's not just "write about this topic," it's understand the audience, understand what they're searching for and what stage they're in, and produce the content that meets them where they are. Your classic content marketing 101 is just create lots of blog posts. The next step up is what's called a hub-and-spoke arrangement, where you have your topical center and spokes that support that content. That's a fairly decent approach. We take that a step further in the PAIS framework, but again, it's just making sure you have the content that meets the user's journey and answers their questions. And if you can come at it with a position of authority that aligns to those E-E-A-T criteria, that's really useful.


[00:19:46] Jessica: You've talked about the connection between how Google evaluates authority and how LLMs decide who to cite. Can you draw that line for our listeners?


[00:19:56] Trevor: Yeah, and this is where there's a lot of talk in the industry currently about AEO, GEO, or doing SEO for AI. One school of thought is that they're fundamentally different things. The other is that it's the same thing called a different name, and I'm much more in that camp. However, there is one big distinction: LLMs don't really use links in the classical sense. Links and PageRank were Google's foundational differentiator, the thing that made its search engine successful: basically counting links as votes, which they still do today and it still has a very big part to play. What we've seen over time, and I think Rand Fishkin from SparkToro coined this, is brand mentions becoming the new links. They sort of are. Links are still very important, but brand mentions are a social signal, or a signal that supports E-E-A-T. LLMs generally like to see that, because they're not evaluating links in the same way; they're more evaluating brand mentions. But what LLMs do see are a lot of the E-E-A-T signals, and where people are mentioned. A lot of those things work for both traditional search and LLMs. That's really the big difference between the two. I'm much more in the camp that GEO is just SEO in disguise. I have yet to see a good description of what works for AEO and GEO that isn't just a list of SEO best practices or things you should be doing anyway. And this is really where WPI intersects them both. Web Presence Intelligence: we were doing it before AI, but it's just about where you need to be visible. That's fundamentally what it is, and it's agnostic to search engines or LLMs. There are definitely some nuances, and this is why "it depends" exists in the industry. But if you're going to say it depends, qualify why.
It's unacceptable to just say it depends without a qualifier. I will always call out SEOs who say it depends without explaining further.


[00:22:58] Jessica: For someone listening who runs marketing at a mid-market company and they've just absorbed all of this, what do they actually do on Monday morning? What's the first move?


[00:23:09] Trevor: Good question. One of the things I mentioned: go search for the Quality Rater Guidelines, find the document, and then search that document for the key terms in your industry. That's really going to help. I would also take a good look at author pages and your team. You should have a team page or an author page. If you have content publishers, they should be cited and linked to. Do a thorough audit of your team or author pages and make sure you're listing all appropriate details. You can link all of their social profiles and credentials. There's a schema property called "sameAs", which essentially says this entity is the same as that one, and there's a bunch of stuff you can do that helps make the connections between all of the different things that support E-E-A-T, assign them to a person, and then link that person to the content they've written on your site. You're leveraging that person's E-E-A-T for your content. That's really important. And take a deep look at your content. Can you objectively say, yes, my website says I'm an authority on that topic? Search for your main keyword on your own website. It's actually surprising how often those words won't appear anywhere on your site. Not only do you need the content, are you even saying the right things about your topic? And then search for your brand name and your core keywords and see who else is mentioning you alongside them, both in the LLMs and traditional search. Do you get a knowledge panel? Have you claimed it? What are the third-party references to your business? Do they exist? Have you claimed them? Can you influence them? Because how your brand shows up really affects how visible you're going to be in the LLMs as well as regular organic search.
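As a concrete sketch of the author markup Trevor describes, here's how the JSON-LD for a person with sameAs links might be generated. Every name and URL below is hypothetical; the only real pieces are the schema.org Person type and the sameAs property.

```python
import json

# Hypothetical author record. All names and URLs are illustrative only;
# "@type": "Person" and "sameAs" are real schema.org vocabulary.
author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Head of Content",
    "url": "https://example.com/team/jane-example",
    # sameAs ties this Person entity to the external profiles and
    # credentials that carry E-E-A-T signals.
    "sameAs": [
        "https://www.linkedin.com/in/jane-example",
        "https://scholar.google.com/citations?user=jane-example",
    ],
}

# Emit the JSON-LD payload that would sit inside a
# <script type="application/ld+json"> tag on the author page.
print(json.dumps(author, indent=2))
```

The point of the markup is the linking, not the printing: each sameAs entry connects the Person entity on your site to a profile elsewhere, so the signals attached to that profile can be associated with the content they author.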


[00:25:52] Jessica: The bios, the bylines, the about pages: how would they determine who should be prioritized there? You don't want to muddy the waters with people who aren't writing about or expert in what you want to be found for, but you also need some level of presence. How do you distinguish between those two?


[00:25:56] Trevor: Yeah, the nice thing about a lot of the things you do to help with E-E-A-T is that if you think about a team page with a team of 20, in most companies you'll probably have your top two or three who publish 90% of the content. The typical 80-20 principle. And those author pages are usually template-level pages. So you can make the schema changes and all of the additional schema types you should add on a team or author page, make that change once and it's on all of them. You still have to fill in the details for each person individually, but you've made the structural change. There's not actually a prioritization. It's not like you only focus the E-E-A-T stuff on your top two authors. They're going to be a priority naturally just because of the content they've written. Then go look and make sure that the content they've written is actually linking to their team or author page, because that's often not done. And make sure all of the details are actually entered. What you often see is a lot of empty fields and missing data. Those would be the first few things to really look at.
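The "empty fields and missing data" audit Trevor mentions can be sketched as a simple completeness check over author records. The required fields and the records below are hypothetical examples of what a team-page audit might look for, not a fixed standard:

```python
# Minimal audit sketch: flag author records missing the details that
# support E-E-A-T (bio, photo, sameAs profile links). The field list
# and records are illustrative assumptions, not a schema requirement.
REQUIRED_FIELDS = ["name", "bio", "photo", "sameAs"]

authors = [
    {"name": "Jane Example", "bio": "Writes about technical SEO.",
     "photo": "/img/jane.jpg",
     "sameAs": ["https://www.linkedin.com/in/jane-example"]},
    {"name": "John Placeholder", "bio": "", "photo": None, "sameAs": []},
]

def missing_fields(record):
    """Return the required fields that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

for a in authors:
    gaps = missing_fields(a)
    if gaps:
        # Empty strings, None, and empty lists all count as missing.
        print(f"{a['name']}: missing {', '.join(gaps)}")
# → John Placeholder: missing bio, photo, sameAs
```

Because author pages are usually template-level, a check like this only has to be wired up once and then run across every record, which matches the "make the structural change once, fill in the details per person" approach.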


[00:27:14] Jessica: Based on clients you've worked with, what is the most common mistake that marketing leaders make? What would you want them to know to go look at and fix first?


[00:27:43] Trevor: I've got an answer to that and I'm going to start with "it depends," and I'll tell you why, because you should always qualify it. It depends on the size and nature of the organization. In a smaller business, this can be a checklist. After I've published this, did I link to the author? Did I go find the relevant internal links? Is there a process to do that? Did I publish that content on social media? Is the person who wrote it also sharing it? Are we leveraging their E-E-A-T? For scientific, medical, or news content, does it need another level: medically reviewed by, or reviewed by the editor? Is that person listed? Do you have a process for that? On the enterprise side, where you have proper governance-level content publishing processes, there's a lot more. Sometimes content has to go through legal. Is that part of the governance process? Are we making sure all of the right boxes are checked? Are we entering all of the details? Because even with the best intentions, if you don't have the right process, those things get missed. It's really easy to just click the publish button, but that's really just the first step.


[00:29:14] Jessica: Awesome. Thank you for this. To wrap up: Google has been telling us what it values for years. It's all in a document that most people have never read. If you take one thing from today, let it be this: authority is not something you claim. It's something you earn, and it shows up across every surface where your buyers are making decisions. That's what we're going to keep digging into next week. Stephan takes the wheel and we're going to talk about what your content calendar might actually be doing to work against you. We'll see you then.


