In addition to scanning individual spam pages, search engines can also identify signals across an entire root domain or subdomain that may mark it as spam. Just as with individual pages, search engines can track the types of links and the quality of referrals pointing to a website. Sites that engage in the manipulative activities described above in a consistent or egregious way may see their search traffic reduced, or even have their sites removed from the index entirely.

Websites that have earned the trust of search engines are often treated differently from those that have not. SEO specialists often comment on the double standard that exists when evaluating the sites of major brands and sites important to users versus newer, independent sites. For search engines, trust most likely has a great deal to do with the links your domain has earned. If you publish low-quality, duplicate content on your personal blog and then buy multiple links from spam directories, you will likely experience significant ranking problems. However, if you were to publish that same content on Wikipedia, even with the same spam links pointing to the URL, it would probably still rank extremely well. Such is the power of a domain that enjoys authority and trust with search engines.

Trust can also be established through inbound links. A little duplicate content and a few suspicious links are far more likely to be overlooked if your site has earned hundreds of links from high-quality editorial sources.

The value of an individual page is calculated partly on the basis of its uniqueness and the visitor experience it offers; the value of the entire domain is evaluated in a similar way. Sites that primarily serve non-unique, low-value content may be unable to rank even if they are otherwise well optimized. Search engines simply do not want thousands of copies of Wikipedia filling their indexes, and they use both algorithmic and manual review methods to prevent this.