CHAPTER SEVEN

Content Marketing

In today’s search environment, the main driving factors are now what we generally refer to as social proof signals, such as inbound links (e.g., within a blog post) and user engagement with your content (e.g., time spent watching your video). As you will see in Chapter 8, social signals such as retweets, likes, and pins don’t appear to have a direct ranking impact, and Google+ appears to have an impact, but only from a personalized search perspective.

For many years, links to a website were the single largest factor in determining its search engine rankings, because links generally (before they became a tool for SEO manipulation) existed to provide a pathway for a site’s users to find additional, relevant content on a third party’s website—a “signal” that the owner of the linking site deemed the third party’s linked content valuable. Because of the power of this signal, many SEO professionals pursued obtaining links to their sites or their clients’ sites without worrying about the quality of the site where those links resided.

Unfortunately, many link-building efforts and services spawned by this behavior had little integration with the rest of the publisher’s content development and marketing strategies. Clearly, this violated the spirit of what the search engines were measuring and placing value on—links that act as valid endorsements for third-party content. As a result, the search engines, and Google in particular, have taken many steps to force website owners and publishers to view link building more holistically, as an “earned” engagement rather than a “purchased” endorsement, requiring a renewed focus on links as a measurement of content quality. This shift, both necessary and welcomed, reestablishes the need for quality content development (as the “earner” of links) to be integrated with the overall PR and marketing strategy for businesses.
The development of highly shareable content, and the promotion of that content via various channels for increased business visibility, is generally referred to as content marketing. Content can be published on your own site, other people’s sites, or in social media, but in all cases acts to build visibility for your brand online. The most valuable content is usually highly relevant to what you do, solves problems for others or stirs their emotions, and is often noncommercial in nature.

Links remain a large factor in search engine ranking algorithms, but we use content marketing to build our reputation and visibility online, and as a result we obtain organic links of the highest possible quality—links that would be desirable for your business even if the search engines did not exist, and that people might actually click on to engage with your business.

The most important thing to remember as you delve into this chapter is that the primary goal of any content marketing effort should be enhancing the reputation of your business. Any campaign that starts with “getting links” as the objective, without placing primary and ongoing focus on the quality and value of the content being linked to, will eventually run into problems (if it hasn’t already; see Chapter 9). During a 2012 interview,[1] Google’s Matt Cutts and Eric Enge had the following exchange:

Eric Enge: It dawned on me recently that link building is an interesting phrase that has misled people. It is a bit of a “cart before the horse” thing. It has led people to think about links as something they get from the “dark corners of the Web.” Places where no one ever goes, so it does not matter what you do there. So by thinking of it this way, as link building, you are off on the wrong foot even before you get started.

Matt Cutts: That’s right. It segments you into a mindset, and people get focused on the wrong things. It leads them to think about links as the end goal.
It is important to think about producing something excellent first. If you have an outstanding product, world-class content, or something else that sets you apart, then you can step back and start thinking about how to promote it.

There are many who believe that social signals and user engagement with your content have become important ranking factors. However, the impact of social media appears to be quite limited:

• Google+ can have a strong impact on personalized search within Google for those who are active on the Google+ platform.

• Search engines may use shared content on social media platforms as a way of discovering new content—in particular, news-related content.

These new ranking factors will be discussed in greater detail in Chapter 8.

[1] “Matt Cutts and Eric Talk About What Makes a Quality Site,” Stone Temple Consulting, July 9, 2012.

How Links Historically Influenced Search Engine Rankings

The concept of using links as a way to measure a site’s importance was first made popular by Google with the implementation of its PageRank algorithm (others had previously written about using links as a ranking factor, but Google’s rapidly increasing user base popularized it). In simple terms, each link to a web page is a vote for that page. But it’s not as simple as “the page with the most votes wins.” Links and linking pages are not all created equal. Some links are weighted more heavily by Google’s PageRank algorithm than others.

The key to this concept is the notion that links represent an “editorial endorsement” of a web document. Search engines rely heavily on editorial votes. However, as publishers learned about the power of links, some started to manipulate links through a variety of methods. This created situations in which the intent of the link was not editorial in nature, and led to many algorithm enhancements.
To help you understand the origins of link algorithms, the underlying logic of which is still in force today, let’s take a look at the original PageRank algorithm in detail.

The Original PageRank Algorithm

The PageRank algorithm was built on the basis of the original PageRank thesis authored by Sergey Brin and Larry Page while they were graduate students at Stanford University.[2] In the simplest terms, the paper states that each link to a web page is a vote for that page. However, as stated earlier, votes do not have equal weight. So that you can better understand how this works, we’ll explain the PageRank algorithm at a high level.

First, all pages are given an innate but tiny amount of PageRank, as shown in Figure 7-1.

Figure 7-1. Some PageRank for every page

Pages can then increase their PageRank by receiving links from other pages, as shown in Figure 7-2.

Figure 7-2. Pages receiving more PageRank through links

How much PageRank can a page pass on to other pages through links? That ends up being less than the page’s PageRank. In Figure 7-3 this is represented by f(x), meaning that the passable PageRank is a function of x, the total PageRank. In 2009, Matt Cutts wrote a post in which he suggested that a page might be able to vote 85–90% of its PageRank.[3]

Figure 7-3. Some of a page’s PageRank is passable to other pages

If this page links to only one other page, it passes all of its passable PageRank to that page, as shown in Figure 7-4, where Page B receives all of the passable PageRank of Page A.

Figure 7-4. Passing of PageRank through a link

However, the scenario gets more complicated because pages will link to more than one other page.

[2] Sergey Brin and Lawrence Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine.”

[3] Matt Cutts, “PageRank Sculpting,” Matt Cutts: Gadgets, Google, and SEO, June 15, 2009.
When that happens, the passable PageRank gets divided among all the pages receiving links. We show that in Figure 7-5, where Page B and Page C each receive half of the passable PageRank of Page A.

Figure 7-5. How PageRank is passed

In the original PageRank formula, link weight is divided equally among the number of links on a page. This undoubtedly does not hold true today, but it is still valuable in understanding the original intent.

Now take a look at Figure 7-6, which depicts a more complex example that shows PageRank flowing back and forth between pages that link to one another.

Figure 7-6. Cross-linking between pages

Cross-linking makes the PageRank calculation much more complex. In Figure 7-6, Page B now links back to Page A and passes some PageRank, f(y), back to Page A. Figure 7-7 should give you a better understanding of how this affects the PageRank of all the pages.

Figure 7-7. Iterative PageRank calculations

The key takeaway here is that when Page B links to Page A to make the link reciprocal, the PageRank of Page A (x) becomes dependent on f(y), the passable PageRank of Page B, which happens to be dependent on f(x). In addition, the PageRank that Page A passes to Page C is also impacted by the link from Page B to Page A. This makes for a very complicated situation where the calculation of the PageRank of each page on the Web must be determined by recursive analysis.

We have defined new parameters to represent this: q, which is the PageRank that accrues to Page B from the link that it has from Page A (after all the iterative calculations are complete); and z, which is the PageRank that accrues to Page A from the link that it has from Page B (again, after all iterations are complete).

The scenario in Figure 7-8 adds further complexity by introducing a link from Page B to Page D.
In this example, Pages A, B, and C are internal links on one domain, and Page D represents a different site (shown as Wikipedia). In the original PageRank formula, internal and external links passed PageRank in exactly the same way. This became exposed as a flaw because publishers started to realize that links to other sites were “leaking” PageRank away from their own site, as you can see in Figure 7-8.

Figure 7-8. PageRank being leaked

So, for example, because Page B links to Wikipedia, some of the passable PageRank is sent there, instead of to the other pages that Page B is linking to (Page A in our example). In Figure 7-8, we represent that with the parameter w, which is the PageRank not sent to Page A because of the link to Page D.

The PageRank “leak” concept represented a fundamental flaw in the algorithm. Once page creators investigated PageRank’s underlying principles, they realized that linking out from their own sites would cause more harm than good. If a great number of websites adopted this philosophy, it could negatively impact the “links as votes” concept and actually damage the quality of Google’s algorithm. Needless to say, Google quickly corrected this flaw in its algorithm. As a result of these changes, you no longer need to worry about PageRank leaks. Quality sites should link to other relevant quality pages around the Web.

Even after these changes, internal links from pages still pass some PageRank, so they still have value, as shown in Figure 7-9.

Figure 7-9. Internal links still passing some PageRank

Google has continuously changed and refined the way it uses links to impact rankings, and the current algorithm is not based on PageRank as it was originally defined. However, familiarity and comfort with the original algorithm are certainly beneficial to those who practice optimization of Google results.
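The iterative calculation described above can be made concrete with a short sketch. This is an illustrative toy implementation of the original PageRank idea, not Google’s production algorithm: the damping factor of 0.85 echoes the 85–90% passable PageRank mentioned earlier, and the four-page graph loosely mirrors the A/B/C/D examples in Figures 7-5 through 7-8. All page names and values are invented for the example.

```python
# Toy sketch of the original PageRank iteration (illustrative only).
# Each page starts with a small innate score; on every pass, a page keeps
# (1 - damping) and distributes damping * score equally across its
# outbound links, which is the "passable PageRank" f(x) from the text.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    score = {page: 1.0 / len(pages) for page in pages}  # innate PageRank
    for _ in range(iterations):
        new_score = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outbound in links.items():
            if not outbound:
                continue  # dangling page: its passable score is simply lost here
            share = damping * score[page] / len(outbound)  # split evenly
            for target in outbound:
                new_score[target] += share
        score = new_score
    return score

# A tiny graph echoing the chapter's figures: A and B link reciprocally,
# A also links to C, C links back to A, and B "leaks" a link to an
# external page D.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": [],
}
ranks = pagerank(graph)
```

Running this for a few dozen iterations converges: Page A, which receives links from both B and C, ends up with the highest score, while the external Page D, which receives only the “leaked” link from B, ends up with the lowest. (Real implementations also redistribute the score of dangling pages; this sketch simply drops it.)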
All link-based algorithms are built on the assumption that for the most part the links received are legitimate endorsements by the publisher implementing a link to your website. The person implementing the link should be doing it because he feels he is linking to a great resource that would be relevant to visitors on his website. In an ideal world, links would be similar to the academic citations you find at the end of a scientist’s published paper, where she cites the other works she has referenced in putting together her research.

If the publisher implementing the link is compensated for doing so, the value of the link to a search engine is diminished, and such links can be harmful to search engine algorithms. Note that compensation can come in the form of money or special considerations, and we will explore this more in this chapter.

Additional Factors That Influence Link Value

Classic PageRank isn’t the only factor that influences the value of a link. In the following subsections, we discuss some additional factors that influence the value a link passes.

Anchor text

Anchor text refers to the clickable part of a link from one web page to another. As an example, Figure 7-10 shows a snapshot of a part of the Quicken Loans home page.

Figure 7-10. Anchor text: a strong ranking element

The anchor text for the first link in the list of Popular pages in Figure 7-10 is Refinancing. The search engine uses this anchor text to help it understand what the page receiving the link is about. As a result, the search engine will interpret the link as saying that the page receiving the link is about refinancing, and therefore rank the page higher in the search results for that search query.
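To illustrate how this signal might be aggregated, here is a toy sketch that tallies the anchor text of inbound links per target page. This is purely illustrative, not any search engine’s actual method, and the URLs and anchor phrases below are invented for the example.

```python
# Toy illustration: building an anchor-text profile for each linked-to page,
# so that a page's most common inbound anchors can hint at its topic.
from collections import Counter, defaultdict

def anchor_profile(inbound_links):
    """inbound_links is a list of (target_url, anchor_text) pairs."""
    profile = defaultdict(Counter)
    for target, anchor in inbound_links:
        profile[target][anchor.lower()] += 1  # case-insensitive tally
    return profile

# Invented link data loosely echoing the Refinancing example above.
inbound_links = [
    ("example.com/refinance", "Refinancing"),
    ("example.com/refinance", "refinancing"),
    ("example.com/refinance", "home loans"),
]
profile = anchor_profile(inbound_links)
top_anchor, count = profile["example.com/refinance"].most_common(1)[0]
```

A real engine would combine such a profile with many other signals; a profile dominated by a single exact-match commercial phrase is precisely the kind of unnatural pattern that gets sites into trouble.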
At one time, anchor text was so powerful that SEOs engaged in a practice called Google bombing—the idea that if you link to a given web page from many places with the same anchor text, you can get that page to rank for queries related to that anchor text, even if the page is unrelated and didn’t even include any of the words in the query. One notorious Google bomb was a campaign that targeted the biography page for George W. Bush with the anchor text miserable failure. As a result, that page ranked #1 for searches on miserable failure until Google tweaked its algorithm to reduce the effectiveness of this practice.

Google bombing was not the worst consequence of the power of anchor text. The use of anchor text as a ranking factor is useful in the search algorithms only if the person implementing the link naturally chooses what to use; if he is compensated for using specific anchor text, the value of the link as a ranking signal is negatively impacted. To make matters worse, SEOs started to abuse the system, implementing link-building programs designed around anchor text to drive their rankings. As a result, publishers that did not pursue these types of link-building campaigns were at a severe disadvantage. Ultimately, this started to break down the notion of links as valid academic citations, and Google began to take action. In early 2012, Google began to send publishers “unnatural link” warnings through Google Search Console, and on April 24, 2012, Google released the first version of its Penguin algorithm. These topics are discussed more in Chapter 9.

Anchor text remains an important part of search algorithms, but now the search engines look for unnatural patterns of anchor text (too much of a good thing) and are lowering the rankings of publishers that exhibit patterns of artificially influencing the anchor text people use in links to their website.
Relevance

Links that originate from sites/pages on the same topic as the publisher’s site, or on a closely related topic, are worth more than links that come from a site on an unrelated topic. Think of the relevance of each link being evaluated in the specific context of the search query a user has just entered. So, if the user enters used cars in Phoenix and the publisher has received a link to its Phoenix used cars page from the Phoenix Chamber of Commerce, that link will reinforce the search engine’s belief that the page really does relate to Phoenix. Similarly, if a publisher has another link from a magazine site that has done a review of used car websites, this will reinforce the notion that the site should be considered a used car site. Taken in combination, these two links could be powerful in helping the publisher rank for used cars in Phoenix.

Authority

Authority has been the subject of much research. One of the more famous papers, written by Apostolos Gerasoulis and others at Rutgers University and titled “DiscoWeb: Applying Link Analysis to Web Search,”[4] became the basis of the Teoma algorithm, which was later acquired by AskJeeves and became part of the Ask algorithm.

What made this algorithm unique was its focus on evaluating links on the basis of their relevance to the linked page. Google’s original PageRank algorithm did not incorporate the notion of topical relevance, and although Google’s algorithm clearly does this today, Teoma was in fact the first search engine to offer a commercial implementation of link relevance.

Teoma introduced the notion of hubs, which are sites that link to most of the important sites relevant to a particular topic, and authorities, which are sites that are linked to by most of the sites relevant to a particular topic. The key concept here is that each topic area that a user can search on will have authority sites specific to that topic area.
The authority sites for used cars are different from the authority sites for baseball. Refer to Figure 7-11 to get a sense of the difference between hub and authority sites.

Figure 7-11. Hubs and authorities

So, if the publisher has a site about used cars, it should seek links from websites that the search engines consider to be authorities on used cars (or perhaps more broadly, on cars). However, the search engines will not tell you which sites they consider authoritative—making the publisher’s job that much more difficult.

The model of organizing the Web into topical communities and pinpointing the hubs and authorities is an important one to understand (read more about it in Mike Grehan’s paper “Filthy Linking Rich”[5]). The best link builders understand this model and leverage it to their benefit.

Trust

Trust is distinct from authority. On its own, authority doesn’t sufficiently take into account whether the linking page or the domain is easy or difficult for spammers to infiltrate, or the motivations of the person implementing the link. Trust, on the other hand, does. Evaluating the trust of a website likely involves reviewing its link neighborhood to see what other trusted sites link to it. More links from other trusted sites would convey more trust.

In 2004, Yahoo and Stanford University published a paper titled “Combating Web Spam with TrustRank.”[6] The paper proposed starting with a trusted seed set of pages (selected by manual human review) to perform PageRank analysis, instead of a random set of pages as was called for in the original PageRank thesis.

[4] Brian D. Davison et al., “DiscoWeb: Applying Link Analysis to Web Search.”

[5] Mike Grehan, “Filthy Linking Rich.”

[6] Zoltán Gyöngyi, Hector Garcia-Molina, and Jan Pedersen, “Combating Web Spam with TrustRank,” Proceedings of the 30th VLDB Conference, Toronto, Canada, 2004.
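The seed-set idea can be sketched as trust radiating outward from manually reviewed pages. What follows is a simplified illustration of the TrustRank intuition only: the actual paper biases a full PageRank-style computation toward the seed set, whereas this sketch just decays trust with each link hop, and the decay factor and page names are invented.

```python
# Simplified sketch of the TrustRank intuition: trust starts at manually
# reviewed seed pages and decays with each link hop away from them.
from collections import deque

def trust_by_distance(links, seeds, decay=0.5):
    """links maps each page to the pages it links to; seeds are trusted."""
    trust = {seed: 1.0 for seed in seeds}
    queue = deque(seeds)
    while queue:  # breadth-first, so the shortest path from a seed wins
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in trust:
                trust[target] = trust[page] * decay
                queue.append(target)
    return trust

# An invented chain of pages at increasing distance from one seed page.
web = {
    "seed": ["one_click"],
    "one_click": ["two_clicks"],
    "two_clicks": ["three_clicks"],
}
trust = trust_by_distance(web, seeds=["seed"])
```

With a decay of 0.5, a page one click from a seed receives trust 0.5, a page two clicks away 0.25, and a page three clicks away 0.125.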
Using this tactic removes the inherent risk in using a purely algorithmic approach to determining the trust of a site, and potentially coming up with false positives or negatives. The trust level of a site would be based on how many clicks away it is from seed sites. A site that is one click away accrues a lot of trust; two clicks away, a bit less; three clicks away, even less; and so forth. Figure 7-12 illustrates the concept of TrustRank.

Figure 7-12. TrustRank illustrated

The researchers of the TrustRank paper also authored a paper describing the concept of spam mass.[7] This paper focuses on evaluating the effect of spammy links on a site’s (unadjusted) rankings. The greater the impact of those links, the more likely the site itself is spam. A large percentage of a site’s links being purchased is seen as a spam indicator as well. You can also consider the notion of reverse TrustRank, where linking to spammy sites will lower a site’s TrustRank.

It is likely that Google and Bing both use some form of trust measurement to evaluate websites. It is probably done by different means than outlined in the TrustRank and spam mass papers, and it may be incorporated into the methods they use for calculating authority, but nonetheless, trust is believed to be a significant factor in rankings. For SEO practitioners, getting measurements of trust can be difficult. Currently, mozTrust (from Moz’s Open Site Explorer) and TrustFlow (from Majestic SEO) are the most well-known publicly available metrics for evaluating a page’s trust level.

How Search Engines Use Links

The search engines use links primarily to discover web pages, and to count the links as votes for those web pages. But how do they use this information once they acquire it?
Let’s take a look:

Index inclusion
Search engines need to decide which pages to include in their index. Crawling the Web (following links) is one way they discover web pages (another is through the use of XML sitemap files). In addition, the search engines do not include pages that they deem to be of very low value, because cluttering their index with those pages will not lead to a good user experience. The cumulative link value, or link authority, of a page is a factor in making that decision.

Crawl rate/frequency
Search engine spiders go out and crawl a portion of the Web every day. This is no small task, and it starts with deciding where to begin and where to go. Google has publicly indicated that it starts its crawl in reverse PageRank order. In other words, it crawls PageRank 10 sites first, PageRank 9 sites next, and so on. Higher PageRank sites also get crawled more deeply than other sites. It is likely that Bing starts its crawl with the most important sites first as well. This would make sense, because changes on the most important sites are the ones the search engines want to discover first. In addition, if a very important site links to a new resource for the first time, the search engines tend to place a lot of trust in that link and want to factor the new link (vote) into their algorithms quickly. In June 2010, Google released Caffeine, an update to its infrastructure that greatly increased its crawling capacity and speed, but being higher in the crawl priority queue still matters.

Ranking
Links play a critical role in ranking. For example, consider two sites where the on-page content is equally relevant to a given topic. Perhaps they are the shopping sites Amazon.com and (the less popular) Joe’s Shopping Site (not a real site). The search engine needs a way to decide who comes out on top: Amazon or Joe.

[7] Zoltán Gyöngyi, Pavel Berkhin, Hector Garcia-Molina, and Jan Pedersen, “Link Spam Detection Based on Mass Estimation,” October 31, 2005.
This is where links come in. Links help cast the deciding vote. If more sites and more important sites link to Amazon, it must be more important, so it is more likely to rank higher than Joe’s Shopping Site.

Further Refining How Search Engines Judge Links

Many aspects are involved in evaluating a link. As we just outlined, the most commonly understood ones are authority, relevance, trust, and the role of anchor text. However, other factors also come into play, as we’ll discuss in this section.

Additional Link Evaluation Criteria

In the following subsections, we discuss some of the more important factors search engines consider when evaluating a link’s value.

Source independence
A link from your own site back to your own site is, of course, not an independent editorial vote for your site. Put another way, the search engines assume that you will vouch for your own site.

Think about your site as having an accumulated total link authority based on all the links it has received from third-party websites, and your internal linking structure as the way you allocate that authority to pages on your site. Your internal linking structure is incredibly important, but it does little if anything to build the total link authority of your site. In contrast, links from a truly independent source carry much more weight.

Extending this notion a bit, say you have multiple websites. Perhaps they have common data in the Whois records (such as the name servers or contact information). Search engines can use this type of signal to treat cross-links between those sites more like internal links than inbound links earned by merit. Even if you have completely different Whois records for your websites but they all cross-link to each other, the search engines can detect this pattern easily. Keep in mind that a website with no independent third-party links into it has no link power to vote for other sites.
If the search engine sees a cluster of sites that heavily cross-link and many of the sites in the cluster have no or few incoming links to them, the links from those sites may well be ignored. Conceptually, you can think of such a cluster of sites as a single site. Cross-links to them can be algorithmically treated as internal links, with links between them not adding to the total link authority score for any of the sites. The cluster would be evaluated based on the inbound links to the cluster as a whole. Of course, there are many different ways to implement such a cluster, but keep in mind that there’s no SEO value in building a large number of sites just to cross-link them with each other.

Links across domains
Obtaining an editorially given link to your site from a third-party website is usually a good thing. But if more links are better, why not get links from every page of these sites if you can? In theory, this is a good idea, but search engines do not necessarily count multiple links from a domain cumulatively.

When Google first came out, its link-based algorithm was revolutionary. As spammers studied the PageRank algorithm, they realized that every page on the Web naturally had a small amount of innate PageRank. It was not long before spammers realized that
As a result, Google put a dampener on the incremental value of more than one link from a site, and thus each incremental link from a site began to pass slightly less value. In addition, over time, Google became more active in penalizing sites that use bad link-building practices, such as buying links, a tactic that is often indicated by the use of sitewide links. This meant that a sitewide link could potentially harm your site.

Link builders and spammers figured this out and adapted their strategies. They began to focus on obtaining links on as many different domains as possible. This particularly impacted a strategy known as guest posting (this is discussed more in “Guest Posting” on page 470), which is the concept of writing an article for another website and getting it to publish the article on its blog.

Although guest posting is a legitimate content marketing strategy when used properly, spammers abused it as well. A brief thought experiment illustrates the problem. Imagine that your market space includes a total of 100 websites. Perhaps 3 of these are high quality, another 5 are pretty high quality, 12 more are respectable, and so forth, as shown in Figure 7-13.

Figure 7-13. Mapping site quality in a market space

As you can see in Figure 7-13, even if your first posts go on the very best sites in your market, by the time you have done 66 posts you are writing posts on genuinely bad sites. It does not make sense for Google to treat this content as more valuable than an ongoing relationship with the high-authority sites in your market. In the case of guest posts, as well as many other content marketing strategies, you are far better served to obtain a smaller number of guest post placements on higher-authority sites, and in fact, get repeat links from those sites with ongoing posts. In fact, don’t “guest post,” but seek the more stable “regular contributor” status on respected blogs. We will discuss this topic more later in this chapter.

Source diversity
Getting links from a range of sources is also a significant factor in link evaluation. We already discussed two parts of this: getting links from domains you do not own (versus from your own domains), and getting links from many domains (versus getting multiple links from one domain). However, there are many other aspects to consider.

For example, if all your links come from blogs that cover your space, you have poor source diversity. You can easily think of other types of link sources: national media websites, local media websites, sites that are relevant but cover more than just your space, university sites with related degree programs, and so on.

You can think about implementing content marketing campaigns in many of these different sectors as diversification. There are several good reasons for diversifying. One reason is that the search engines value this type of diversification. If all your links come from a single class of sites (e.g., blogs), this is more likely to be the result of manipulation, and search engines do not like that. If you have links coming in from multiple types of sources, search engines view it as more likely that you have content of value.

Another reason is that search engines are constantly tuning and tweaking their algorithms. If all your links come from blogs and the search engines make a change that significantly reduces the value of blog links, that could really hurt your rankings. You would essentially be hostage to that one strategy, and that’s not a good idea either.

It is a good idea to evaluate your source diversity compared to your competitors. Figure 7-14 shows an example of this.

Figure 7-14. Comparing link diversity against competition

Temporal factors
Search engines also keep detailed data on when they discover a new link, or the disappearance of a link.
They can perform quite a bit of interesting analysis with this type of data. Here are some examples:

When did the link first appear?
This is particularly interesting when considered in relation to the appearance of other links. Did it happen immediately after you received that link from the New York Times?

When did the link disappear?
Some of this is routine, such as links that appear in blog posts that start on the home page of a blog and then get relegated to archive pages over time. However, if a link to your site disappears shortly after you made major changes to your site, that could be seen by the search engines as a negative signal; they might assume that the link was removed because the changes you made lowered the relevance or quality of the site.

How long has the link existed?
A search engine can potentially count a link for more, or less, if it has been around for a long time. Whether it’s counted for more or less could depend on the authority/trust of the site providing the link, or other factors.

How quickly were the links added (also known as link velocity)?
Drastic changes in the rate of link acquisition could also be a significant signal. Whether it is a bad signal or not depends. For example, if your site is featured in major news coverage, it could be good. If you start buying links by the thousands, it would be bad. Part of the challenge for the search engines is to determine how to interpret the signal.

Context/relevance
Although anchor text is a major signal regarding the relevance of a web page (though receiving greater scrutiny since March 2014), search engines look at a much deeper context than that. They can look at other signals of relevance, such as:

External links to the linking page
Does the page containing the link to your site have external links as well?
If the page linking to your site is benefiting from links from third-party sites, this will make the link to your site more valuable.

Nearby links
Do the closest links on the page point to closely related, high-quality sites? That would be a positive signal to the engines, as your site could be seen as high quality by association. Alternatively, if the two links before yours are for Viagra and a casino site, and the link after yours points to a porn site, that's not a good signal.

Page placement
Is your link in the main body of the content? Or is it off in a block of links at the bottom of the right rail of the web page? Better page placement can be a ranking factor. This is also referred to as prominence, and it applies to on-page keyword location as well. Google has a patent that covers this concept, called the Reasonable Surfer Patent.[8]

Nearby text
Does the text immediately preceding and following your link seem related to the anchor text of the link and the content of the page on your site that it links to? If so, that could be an additional positive signal. This is also referred to as proximity.

Closest section header
Search engines can also look more deeply at the context of the section of the page where your link resides. This can be the nearest header tag, or the nearest text highlighted in bold, particularly if it is implemented like a header (two to four boldface words in a paragraph by themselves).

Overall page context
The relevance and context of the linking page are also factors in rankings. If your anchor text, surrounding text, and the nearest header are all related, that's good. If the overall context of the linking page is also closely related, that's better still.

Overall site context
Another signal is the context of the entire site that links to you (or perhaps even just the section of the site that links to you).
For example, if a site has hundreds of pages that are relevant to your topic and links to you from a relevant page, with relevant headers, nearby text, and anchor text, these all add to the impact, so the link will have more influence than if the site had only one page relevant to your content.

Source TLDs

It is a popular myth that links from certain top-level domains (TLDs), such as .edu, .gov, and .mil, are inherently worth more than links from other TLDs such as .com, .net, and others, but it does not make sense for search engines to look at the issue so simply. Matt Cutts, the former head of the Google webspam team, commented on this in an interview with Stephan Spencer:[9]

There is nothing in the algorithm itself, though, that says: oh, .edu—give that link more weight.

And:

You can have a useless .edu link just like you can have a great .com link.

There are many forums, blogs, student pages, and other pages on .edu domains that spammers might be able to manipulate to gain links to their sites. For this reason, search engines cannot simply imbue a special level of trust or authority to a site because it is an .edu domain. To prove this, simply search for buy viagra site:edu; you'll quickly see how spammers have infiltrated .edu pages.

However, it is true that .edu domains are often authoritative. But this is a result of the link analysis that defines a given college or university as a highly trusted site on one or more topics.

[8] For a full analysis of the Reasonable Surfer Patent, check out Bill Slawski's May 11, 2010, blog post "Google's Reasonable Surfer: How The Value of a Link May Differ Based Upon Link and Document Features and User Data" on SEO by the Sea.

[9] Stephan Spencer, "Interview with Google's Matt Cutts at Pubcon," January 31, 2008, http://
The result is that there can be (and are) domains that are authoritative on one or more topics in some sections of their site, yet have other sections that offer much less value or that spammers are actively abusing. Search engines deal with this problem by varying their assessment of a domain's authority across the domain. The publisher's section on used cars may be considered authoritative on that topic, but not authoritative on the topic of new cars.

Ultimately, every site gets evaluated for the links it has, on a topic-by-topic basis. Further, each section and page of a site also gets evaluated on this basis. A high-quality link profile gives a page more authority on a given topic, making that page likely to rank higher on queries for that topic, and making it a more valuable source of links for related websites.

Link and document analysis combine and overlap, resulting in hundreds of factors that can be individually measured and filtered through the search engine algorithms (the set of instructions that tells the engines what importance to assign to each factor). The algorithms then determine scoring for the documents and (ideally) list results in decreasing order of relevance and importance (rankings).

How Search Engines Determine a Link's Value

A smart content marketing campaign typically starts with research into which sites would provide the best visibility and reputation benefits for the publisher. However, it may also be useful to have an understanding of how search engines place value on a link. Although there are many metrics for evaluating a link, as previously discussed, many of those data items are hard for an individual content marketer to determine (e.g., when a link was first added to a site). Here we outline an approach that you can use today without much in the way of specialized tools.
The factors you can look at include:

• The relevance of the linking page and of the linking domain to your site.
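One way to operationalize observable factors like these is to assign them rough weights and combine them into a comparative score for prioritizing outreach targets. The following sketch is purely hypothetical: the signal names and weights are invented for illustration and are not drawn from any search engine.

```python
# Hypothetical link-value scorer. Weights and signals are invented for
# illustration; real engines use far more factors and different weighting.
LINK_SIGNAL_WEIGHTS = {
    "page_relevance": 3.0,    # linking page covers your topic
    "domain_relevance": 2.0,  # linking domain covers your topic
    "in_main_content": 1.5,   # link sits in body copy, not a footer block
    "domain_authority": 2.5,  # your own estimate of domain strength (0-1)
}

def score_link(signals):
    """Combine observed signals (each scored 0.0-1.0) into one number."""
    return sum(LINK_SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in LINK_SIGNAL_WEIGHTS)

# An editorial link in relevant body copy vs. a sitewide footer link
editorial = {"page_relevance": 1.0, "domain_relevance": 1.0,
             "in_main_content": 1.0, "domain_authority": 0.8}
footer = {"page_relevance": 0.2, "domain_relevance": 0.5,
          "in_main_content": 0.0, "domain_authority": 0.8}
print(score_link(editorial) > score_link(footer))
```

The absolute numbers are meaningless on their own; the point is only to rank prospective link sources relative to one another so you can focus content marketing effort where it is likely to matter most.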
