As a search quality rater, you will work on different evaluation projects, but the guidelines provided in this search quality brief apply across many different scenarios.
The Search Experience
0.1 The Purpose of Search Quality Rating
0.2 Raters Must Represent People in their Rating Locale
0.3 Browser Requirements
0.4 Ad Blocking Extensions
0.5 Internet Safety Information
0.6 The Role of Examples in these Guidelines
With the vast amount of information available on the World Wide Web, individuals search the Internet with many different motives and purposes, ranging from looking for adorable pictures of puppies to finding information about certain medical conditions.
Search results returned for a query should therefore be inoffensive, contain no explicit material unless it was specifically requested, and deliver the needed information from a trustworthy source.
Your role as a search quality evaluator will not impact the ranking of any website or search result. Instead, it focuses on determining the quality of a search engine in terms of the results it produces and how relevant and effective it is in delivering what users in a given locale want from it. To rate effectively, you must also know how to use operators and other advanced search options.
You must know the task language of the people in your locale; if you do not, please contact your employer. Ratings should be unbiased, should not reflect your personal beliefs, and should follow the instructions in this manual.
Check with your company about browser requirements and allowed extensions/add-ons.
Do not use ad-blocking extensions unless clearly instructed to do so.
Do not visit websites that ask you to download files. Always keep antivirus and anti-spyware software installed and active on your system.
The examples used in these guidelines were current at the time the guidelines were written.
A Page Quality (PQ) rating task consists of a URL and a grid to record your observations and guide your exploration. The main task is to analyze whether the page achieves its purpose.
PQ rating requires an in-depth understanding of websites. We’ll start with the basics. These are a must-read.
Before you begin Page Quality rating, make sure you truly understand the purpose of the webpage; purposes range from earning money to simply disseminating news or information. This matters because you can only judge a page's quality, and how effective it is, once you know its objective or purpose.
Some pages can impact a person’s financial status, safety, well-being, or health. Such pages are known as “Your Money or Your Life” (YMYL) pages. The rating standards for these pages are especially high, because low-quality YMYL pages can negatively impact a person’s life. Examples include government pages, websites of financial institutions, and verified medical websites.
Content on any web page can be categorized into three main types. You need to understand and differentiate between these based on your best judgment.
Main Content (MC) is any part of the page that directly helps the page achieve its purpose.
Supplementary Content (SC) is also important. SC can help a page better achieve its purpose or it can detract from the overall experience.
Many pages have advertisements/monetization (Ads). Without advertising and monetization, some web pages could not exist because it costs money to maintain a website.
To learn more about a website and develop an understanding of it, look at its web pages in relation to one another rather than as separate entities. Websites are usually very eager to talk about themselves. There are three facets to this.
Finding the Homepage
Examine the landing page of the URL in your PQ rating task, find and click on a link labeled “home” or “main page”, or use “Ctrl-F” (Command-F on a Mac) to search the page for the text “home”.
Finding Who is responsible for the Website
Finding About Us Information
An important part of PQ rating is understanding the reputation of the website. If the creator of the MC is different from the creator of the website, it is important to understand the reputation of that creator as well. Your job is to thoroughly gauge the website’s reputation, as well as the credibility of the MC creator, through research, customer reviews, and trustworthy review websites.
At a high level, here are the steps of Page Quality rating:
Here are the most important factors to consider when selecting an overall Page Quality rating:
High-quality pages exist for almost any beneficial purpose, from giving information to making people laugh to expressing oneself artistically to purchasing products or services online. What makes a High-quality page? A High-quality page should have a beneficial purpose and achieve that purpose well.
For each page you evaluate, spend a few minutes examining the MC before drawing conclusions. Read the article, watch the video, examine the pictures, use the calculator, play the online game, etc. Remember that MC also includes page features and functionality, so test the page out. For example, if the page is a product page on a store website, put at least one product in the cart to make sure the shopping cart is functioning. If the page is an online game, spend a few minutes playing it.
The highest quality pages are created to serve a beneficial purpose and achieve their purpose very well. The distinction between High and Highest is based on the quality and quantity of MC, as well as the level of reputation.
Low-quality pages may have been intended to serve a beneficial purpose. However, Low-quality pages do not achieve their purpose well because they are lacking in an important dimension, such as having an unsatisfying amount of MC, or because the creator of the MC lacks expertise for the page. If a page has one or more of the following characteristics, the Low rating applies:
The following are some types of Lowest quality pages:
There are two types of Medium quality pages:
Page quality rating tasks might seem difficult at first. We recommend reading these guidelines a couple of times and referring back to individual sections so that you develop a full understanding of what is required of a search quality evaluator. Keep in mind that these guidelines do not cover every aspect of every page out there; you are expected to develop your own acumen and rate a page as high or low quality based on your understanding as well as the requirements of your locale.
Page quality tasks are usually broken down into four major steps. These steps are not fixed, however, and may be reduced or expanded depending on the requirements of the task, the region, and the specific business.
You must evaluate the expertise, authoritativeness, and trustworthiness of a website as well as the credibility and reputation of the creators of its MC (Main Content). This is especially important when different pages on the website have different creators.
We may not always know the author of the specific encyclopedia article, and therefore must rely on website reputation research to determine the E-A-T of the article. High and Highest quality ratings should only be used for encyclopedias with very good reputations for accuracy and expertise, where the article itself is well-researched with appropriate references cited.
Some pages are temporarily broken pages on otherwise functioning websites, while others display an explicit error (or custom 404) message. In some cases, pages are missing MC as well. Think about whether the page offers any help for users: did the webmaster spend time, effort, and care on the page?
For this category, keep the following points in mind:
People rely on smartphones to do a great deal nowadays, but mobile use presents some challenges that we don’t face while using a desktop or laptop computer:
Understanding the query is the first step in evaluating the task. Remember, a query is what a user types or speaks into a mobile phone.
All queries have a task language and task location (referred to in rating tasks as the “Locale”). The locale is important for understanding the query and user intent. Users in different locations may have different expectations for the same query.
Sometimes users tell search engines exactly what kinds of results they are looking for by adding the desired location in the query, regardless of their user location. We’ll call this location inside the query the “explicit location.” The explicit location makes queries much easier to understand and interpret.
Many queries have more than one meaning. For example, the query [apple] might refer to the computer brand or the fruit.
We will call these possible meanings query interpretations.
Remember to think about the query and its current meaning as you are rating. We will assume users are looking for current information about a topic, the most recent product model, the most recent occurrence of a recurring event, etc. unless otherwise specified by the query.
It can be helpful to think of queries as having one or more of the following intents.
There are different types of result blocks for every query. Interpret each one individually based on its requirements.
We understand that raters using different phones, operating systems, and browsers may have different experiences. In general, do what you would do naturally, and rate based on your experience.
Rating Meter:
A rating may also be assigned between any two labels.
Each result needs a Needs Met rating. The part that you rate depends on both the query and the result block.
Special Content Result Block (SCRB): The content inside the block should play the largest role in your rating. Some SCRBs may also have links to landing pages.
If most users would not click, the rating should be based on the block content alone.
If users may click, the helpfulness of the landing pages is also considered in the rating.
Web Search Result Block: Requires a click, so the landing page should be evaluated.
Device Action Result Block: Rated on the helpfulness of the action.
For example, consider the query [what does love mean]:
SCRB:
The rating is based on the content inside the block, as there are no obvious landing pages.
Web Search Result Block:
The rating is based on the content of the landing page, as the user would need to click through to it.
Some more examples where the rating should be based on the content inside the block are given below.
Note: This section applies to Needs Met ratings. For SCRBs that have landing pages you may be asked to provide page quality ratings and for that refer to section 14.0.
It is a special rating category used when:
FullyM should be reserved for complete and perfect responses. Some scenarios where FullyM is appropriate are:
It may apply in other situations as well. When in doubt, assign a lower rating.
Note: If the result block is almost FullyM but might need additional information, an HM rating is more appropriate.
Sometimes the specificity of the query may also need to be considered.
An HM result meets the requirements of most users and fits the query well. Informational pages such as encyclopedias must be highly credible to fall in this category, and scientific information pages must conform to the established consensus.
A single query may have many HM results. This is especially true for queries with multiple on-topic results.
These are results that are satisfying for many users or highly satisfying for some users. The results should still fit the query, but they have fewer of the qualities of HM results.
These results are not inaccurate and are generally average to good.
These are results that are helpful to a few users. They may have minor inaccuracies or be outdated, or they may be too specific or too broad to merit a higher rating.
Note: On mobile, clicks are costlier and depend heavily on titles. Results with exaggerated or misleading titles may also be rated SM or below.
These results are helpful to very few users or to no users at all. They may be unrelated to the query or incorrect, fail to meet user requirements, or contain completely outdated information.
Results should never be offensive or unpleasant.
Needs Met Rating depends upon both query and results
Page Quality rating does not depend on the query
A Page Quality rating only needs to be given when there is a slider, and it depends only on the quality of the content.
Some guidelines are given below:
Assignment of flags does not depend on the query.
[Screenshot of flags]
Example
Porn needs to be flagged as porn even if the query is specifically asking for porn.
A porn result will be assigned FailsM if the query is not porn-related.
For example
Some queries may have both porn and non-porn intended meanings. They will be rated as though the query is non-porn-related.
Example:
Queries with clear porn intent should be rated according to the result’s relevance to the query, but the page must still be flagged as porn.
Child Pornography
Note: The federal laws of the task location must be followed.
If an image depicts a minor, or someone who appears to be a minor, engaged in sexually explicit conduct, it is child pornography. It may involve an actual minor or computer-generated imagery. Even drawings, cartoons, anime, or sculptures depicting minors in sexually explicit conduct are child pornography.
Even if pornographic images depict children in a literary, political, artistic, or scientific context, please report them to your employer. Forward the URLs as instructed.
A depiction of a minor’s genitals does not need to show them uncovered; it can be considered child pornography even if they are covered. An adult pretending to be a minor in sexually explicit content is not child pornography, but if you cannot tell whether the person shown in such a scene is over the legal age, it is considered child pornography.
If the language of the result is neither the task language nor a language used by the majority in the task location, it is to be flagged as Foreign.
Important
Most foreign-language pages would be rated FailsM, since people in the task location would not understand them. If the query specifically asks for a foreign-language page, however, assign FullyM.
Note: If you cannot read the content of a foreign-language page, you do not have to assign a Page Quality rating.
The flag is used when an error message loads or the landing page is completely blank. The flag is assigned based on the landing page, not the result block.
For Example
A flag is not assigned for:
Did Not Load pages are also assigned FailsM, as they are useless. If the page partially loads, the Needs Met rating is assigned based on the available content. A Page Quality rating is not assigned if the page cannot be evaluated.
Some rating tasks may ask you to rate content that might be upsetting or offensive. Assign such ratings from the perspective of a typical user in your locale.
Some queries may have multiple interpretations. Conform to the following guidelines while dealing with them:
Users may have a dual intent: visiting a website or visiting a store in person. If the intent is not clear, a result satisfying only one of the intents should never be rated Fully Meets.
Queries may be specific or general, as shown in the table, but the rating should always depend on how helpful the result is. If the query names a broad category, the most popular results for the locale should be very helpful.
Some queries require very recent information.
Some queries demand breaking news; for these, old pages will be rated FailsM. Other queries may be satisfied by either recent or timeless information. Page Quality ratings are generally not based on freshness unless the page has been abandoned.
Note: Websites show different dates depending on their settings. If you want to know the date of the content, try the “Wayback Machine” on the Internet Archive.
If a query is obviously misspelled, rate the results against the intended meaning. If the misspelling is not obvious, rate the results against the query as the user spelled it.
Follow the given examples:
Some users issue URL queries to find information about the website, so a result that is not the website itself is not necessarily FailsM. Usage statistics, however, are not what most users are looking for.
Product queries may carry an intent to buy, an intent to research, or an unspecified intent. Users also enjoy browsing online and need authoritative information on products while scouting them. Keep all of these factors in mind when assigning Needs Met ratings. Results for product queries may also be important for Your Money or Your Life (YMYL) reasons.
Users are willing to travel different distances for different services. For example, they may be willing to travel farther for a clinic than for a famous coffee shop, so use your judgment about what “nearby” means.
Even if you can understand English, the Needs Met rating must be based on the users in your locale. For a non-English locale, results in English should be considered useless unless the query specifically asks for results in English.
Important: There may be many languages to consider including task language, official language, local dialects, etc. When you face doubt, prefer the results in task language unless the query specifically indicates otherwise.
Here are some examples that include proper nouns in non-English queries.
Here are some examples where users would still prefer results in the local language.
There may be queries where English language results would satisfy the user more even if the query is in task language. You will have to use your judgment for assigning ratings.
In locales where English is the official language, results in English would be preferred.
You have to judge whether the user is looking for the meaning of a word, or whether the word is so commonly known that the intent is likely something other than a dictionary result. Dictionary results should mostly be reserved for queries that include “what is” or “what is the meaning of.”
Important: If a commonly known word is used in the query, an SM rating may be appropriate for a dictionary result.