Relevance and Other Search Engine Ranking Factors

This is the third part of a five-part series on search engine optimization factors. In this part, we will cover topical relevance of links to a site, a site’s relevance to the search query, whether Google manually awards rank to some sites, and more. We will also take a look at some factors that can negatively affect your site’s ranking in the search engine results pages (SERPs), and finish up with an in-depth look at different kinds of links.

Topical Relevance of Links to a Site

Does it matter if inbound links are topically related? If it doesn’t, I’ve wasted 14 years of my life. – Eric Ward

That’s a bold statement.

It’s not likely that Google can spot every topically related link, but topical relevance is one of the most important factors in today’s SEO. The topics, however, do not have to match 100 percent. A link from a site about credit cards to one about mortgages is still in the same space.

Search engines also analyze the page topic, the headlines, and the text surrounding the anchor text.

Site Relevancy to the Search Query

Some sites appear to be counterexamples: Wikipedia and About.com rank for almost any term and cover just about everything except porn.

Site relevancy is more important for smaller websites with less link popularity, because relevancy, after all, is the goal of every search engine. This has been especially true since Google defused Google bombs.

A Google bomb is a technique that ranks a page for terms irrelevant to its content. The best-known example is former US president George W. Bush’s official biography on whitehouse.gov ranking for the keywords “miserable failure.” Google bombing is as simple as getting links from various websites with targeted anchor text; in the Bush example, links to his biography page used the anchor text “miserable failure.”

Google defused this by analyzing the relationship between the anchor text and the content of the page being linked to. If the two don’t match, the link is discounted.
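Purely as an illustration, here’s a minimal sketch of how such a check might work, assuming a naive word-overlap heuristic. Everything here, from the function names to the matching rule, is my own invention, not Google’s actual implementation:

# Illustrative only: a naive word-overlap test for whether an inbound
# link's anchor text is topically related to the target page. Google's
# real analysis is far more sophisticated; the matching rule below is
# an assumption made for this sketch.

def words(text: str) -> set:
    return {w.lower().strip(".,!?") for w in text.split()}

def anchor_matches_page(anchor_text: str, page_text: str) -> bool:
    """Count the link only if the anchor shares words with the page."""
    return bool(words(anchor_text) & words(page_text))

page = "Biography of the President of the United States"
print(anchor_matches_page("miserable failure", page))    # False: discounted
print(anchor_matches_page("president biography", page))  # True: counted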

Going back to site relevancy to the search query: page relevancy is key, while site relevancy matters less when link popularity makes up for it. About.com and Wikipedia are examples. On the other hand, a razor-focused niche website definitely gets extra kudos.

Manual Site Ranking

To rephrase the subheading as a question: does Google somehow manually award certain websites with good rankings?

There’s a lot of speculation about this, but no real answers. Many quickly point to Wikipedia as proof, but we will never know for sure until someone from Google comes clean. My guess is that it makes sense to have such a mechanism in place. Maybe Sergey, Larry, and Eric have the switch… like the nuclear briefcase that presidents carry.

Low Quality External Links

You can’t control whether spammers link to you, but you have 100 percent control over your outbound links. Linking out to link farms and spam networks can get you flagged as part of the same bad neighborhood. Be careful who you link to.

Keyword Stuffing

Once keyword density crosses a certain threshold, a site may be flagged for review or penalized. The technique is fairly useless if you want to rank for a single phrase, but it’s a different story if you’re stuffing combinations designed to rank for multiple phrases:

LA lawyer, lawyer in LA, New York lawyer, California lawyer, Michigan lawyer, Washington lawyer, Toronto lawyer, etc…

The best way to stuff keywords without being detected is to do it naturally. Research 10 – 20 related key phrases and include them on the page as separate H headlines (or as parts of headlines). The keywords must have relatively low search volume for this to work.
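As a quick sketch of what that looks like in practice, here’s a snippet that turns a list of researched phrases into H2 subheadings. The phrases are placeholders, not real keyword research:

# Minimal sketch of the approach above: turn researched, low-volume
# related phrases into <h2> subheadings. The phrases are placeholders.

related_phrases = [
    "how to choose a personal injury lawyer",
    "questions to ask a personal injury lawyer",
    "average cost of a personal injury lawyer",
]

for phrase in related_phrases:
    print(f"<h2>{phrase.capitalize()}</h2>")

The point is that each heading reads naturally on its own, instead of the comma-separated stuffing shown earlier.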

Keyword-stuffed pages also suck to read. The search engines put much more weight on links than on keyword density.

Server Can’t Be Accessed by Robots

This is pretty straightforward. If your host is down, why should the search engines send visitors to your site?

Duplicate Content

Google is much more forgiving of duplicate content if a site has authority status. It generally wants one copy per search result, but this is not always true.

There’s no problem if you feature quotes or very occasionally copy and post an entire article. Problems start when a site is a duplicate version of content scraped from around the web, or when most of its pages are duplicate content; this is true even for search results pages. Many good sites rank with duplicate articles, but they make up for it with lots of original content.

SEO Chat did a test a while ago (I can’t find the link, sorry) to see which search engine spots the source and gives credit to the original creator of the article. Believe it or not, Microsoft Live Search was best at this. I think Google is more concerned about keeping out sites that are entirely duplicates.

Spam Links

Old sites with many trusted links can get away with spam links. Old sites in general can get away with much more crap than newer ones, especially if they are big corporations actively spending dough on AdWords.

If a site is new and has only spam links, it will be stuck in the “low trust hole” for a long time. If the site has a healthy ratio of good links to bad links, spam links won’t affect it as much. The system may use a negative computation in which it takes a specific number of spam links (100, for example) to cancel out one quality link.
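The exact mechanics are pure speculation, but the ratio idea is easy to express. Here’s a toy model built on the 100-to-1 guess above; none of these numbers are confirmed Google parameters:

# Speculative toy model of a good-to-bad link ratio. The 100:1 figure
# is the guess from the text above, not a confirmed Google mechanism.

SPAM_LINKS_PER_QUALITY_LINK = 100  # assumed discount ratio

def effective_quality_links(quality_links: int, spam_links: int) -> float:
    penalty = spam_links / SPAM_LINKS_PER_QUALITY_LINK
    return max(quality_links - penalty, 0.0)

# A site with a healthy ratio barely notices its spam links:
print(effective_quality_links(quality_links=500, spam_links=2000))  # 480.0

# A new site with only spam links stays stuck at zero trust:
print(effective_quality_links(quality_links=0, spam_links=5000))    # 0.0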

Google says that competitors can’t hurt your rankings, which is not entirely true. Many SEOs claim they can prove otherwise with spam links and other techniques. According to some, if you know what you’re doing, you can bump competitors down. I have never seen it done, but I have heard people say they do it.

I would personally love to try dropping powerful competitors, like some large corporations, out of the top ranks. Aaron Wall of SEObook.com constantly states that corporations that spend millions on AdWords are allowed to engage in a lot of black hat practices. My guess would be that Aaron either tried to bump them down with no results, or did a lot of spam SEO on their behalf.

Link Selling/Buying

This is bad if you get spotted. It works if you can get away with it, but Google is getting better and better at spotting it.

Identical Title Tags Throughout a Site

These can discourage spiders from crawling an entire website. According to Aaron Wall, Chris Boggs, and Marcus Tandler, it’s the fastest way to get your site included in the supplemental index.

The supplemental index is a backup index where Google puts all the “crap.” It serves results from the supplemental index only when there are not enough results in the main index.

Make sure you have unique title tags targeting different keywords throughout your website. Duplicate titles also miss out on a lot of long-tail keywords, which can send plenty of traffic.
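If you want to audit your own site, a few lines of Python will spot duplicates. This assumes the third-party requests and beautifulsoup4 packages, and the URLs are placeholders for your own pages:

# Small audit script for spotting duplicate <title> tags across a site.

from collections import defaultdict

import requests
from bs4 import BeautifulSoup

urls = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/services",
]

pages_by_title = defaultdict(list)
for url in urls:
    html = requests.get(url, timeout=10).text
    title = BeautifulSoup(html, "html.parser").title
    text = title.get_text(strip=True) if title else "(no title)"
    pages_by_title[text].append(url)

for title_text, pages in pages_by_title.items():
    if len(pages) > 1:
        print(f"Duplicate title {title_text!r} on: {', '.join(pages)}")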

Image vs Text Links

Experts disagree on this one. Some state that text links are more valuable than image links; others say the opposite. In an image link, the alt text is treated in a manner similar to anchor text.

I would personally go for a text link if given a choice. We also have to consider the fact that not everyone uses the ALT attribute in image links. Webmasters who are clueless or careless about SEO may ignore alt text, and that is something search engineers have to account for.
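To make the alt-text point concrete, here’s a sketch of how a crawler might pull an “anchor” out of both link types. It uses beautifulsoup4, the HTML is invented, and the logic is a guess at the general idea rather than anyone’s actual code:

# Sketch: treat alt text as the anchor text of an image link.

from bs4 import BeautifulSoup

html = """
<a href="/mortgages">compare mortgage rates</a>
<a href="/mortgages"><img src="banner.png" alt="compare mortgage rates"></a>
<a href="/mortgages"><img src="banner.png"></a>
"""

for link in BeautifulSoup(html, "html.parser").find_all("a"):
    img = link.find("img")
    if img is not None:
        anchor = img.get("alt", "")  # empty when the webmaster skipped ALT
    else:
        anchor = link.get_text(strip=True)
    print(repr(anchor) if anchor else "(no anchor signal)")

The third link in the sample gives the engine nothing to work with, which is exactly the clueless-webmaster case mentioned above.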

Image size also plays a role. Huge images may have more importance than smaller ones (this is just a guess).

Number of Links from a Single Domain

There are a lot of arguments on this topic. Sitewide links still work, but they may be discounted in some cases. A lot of purchased sitewide links are placed in the footer, so footer placement may signal a link buy to Google.

I know an authoritative domain that has plenty of sitewide links from unrelated websites that does just fine in search results.

The authority and trust of the linking site matter a lot. Sitewide links also pass more overall PageRank (because more pages carry the link), but that flow might be dampened to prevent unnatural influence on the SERPs.
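A back-of-the-envelope calculation shows why a sitewide link can pass so much raw PageRank. This uses the classic simplified PageRank formula, and every number in it is an assumption for illustration:

# Toy comparison, using the classic simplified PageRank formula, of one
# editorial link versus a sitewide footer link. All values are invented.

DAMPING = 0.85  # damping factor from the original PageRank paper

def link_equity(page_rank: float, outlinks: int) -> float:
    """PageRank passed by a single link on a page."""
    return DAMPING * page_rank / outlinks

# One in-content link from a strong page with 20 outlinks:
single = link_equity(page_rank=5.0, outlinks=20)

# A footer link repeated on 1,000 weak pages with 50 outlinks each:
sitewide = sum(link_equity(page_rank=0.1, outlinks=50) for _ in range(1000))

print(f"single editorial link: {single:.3f}")   # about 0.21
print(f"sitewide footer link:  {sitewide:.3f}") # about 1.70

The raw numbers favor the sitewide link, which is exactly why that flow might be dampened.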

Home Page vs Deep Links

Both are important. We can’t say that home page links are more important than deep links, nor can we state otherwise. Home page and deep links are part of the overall SEO strategy. Make sure to get enough of both.

Location of Link on Page

Footer links tend to have less value than purely editorial links within content, and blogroll links may be devalued a bit as well. We’ve covered Microsoft research in this area, so it’s a good guess that search engines take a link’s position on the page as a cue. This is even more true in the anti-paid-link web economy, where Google is doing everything it can to detect paid links.

It also depends on the authority of the site linking out.
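Purely as an illustration, a position-based weighting might look like the following. These numbers are invented; no search engine publishes anything like them:

# Invented position weights for a link-scoring heuristic. The point is
# only that the same link can be worth more or less depending on where
# it sits on the page.

POSITION_WEIGHT = {
    "in-content": 1.0,  # editorial link inside the article body
    "blogroll": 0.5,
    "footer": 0.2,      # a common spot for paid sitewide links
}

def weighted_link_value(base_value: float, position: str) -> float:
    return base_value * POSITION_WEIGHT.get(position, 0.5)

print(weighted_link_value(10.0, "in-content"))  # 10.0
print(weighted_link_value(10.0, "footer"))      # 2.0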

One-Way Links and Reciprocals

One-way links are more valuable than reciprocals: a one-way link indicates a vote, while reciprocals hint at manipulation. Make one-way links the focus of your strategy.

Reciprocals were abused in the past, so search engines place far less weight on them. I think it’s okay to exchange links with good partners and closely related sites, but on a limited scale.

If you are going to exchange, link out to deep pages rather than a generic “partners” page.

Blog networks tend to link among themselves a lot, and Google seems to be okay with that, because it’s natural. If site A mentions X in an article and X mentions A in an article as well, is that a reciprocal link or a natural link? I think the latter.
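Detecting the mutual link itself is trivial, by the way; the hard part is judging intent. Here is a toy detector over a tiny link graph (site names invented):

# Toy reciprocal-link detector. Finding mutual links is the easy part;
# deciding whether they are traded or natural editorial mentions is
# what the engines actually have to solve. Site names are invented.

links = {
    ("site-a.com", "site-x.com"),  # A cites an article on X
    ("site-x.com", "site-a.com"),  # X cites an article on A
    ("site-b.com", "site-a.com"),  # a one-way link
}

reciprocal_pairs = {
    tuple(sorted(pair))
    for pair in links
    if (pair[1], pair[0]) in links
}
print(reciprocal_pairs)  # {('site-a.com', 'site-x.com')}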

In the next article we will continue to look at links, source code, the number of outbound links, total links on a page, the link-to-content ratio, URLs, and much more.
