Search Engine Myths Debunked

The most striking thing I’ve found in my experience with search engines is not that people are so consistently misinformed, though that certainly is remarkable.  The most shocking thing I’ve found is that people are always wrong about the same things.

Hopefully we can dispel some of the common myths of search engines and search engine optimization to get those people on the right track!


So, without further ado, here are the myths!


Meta tags are important for rankings


People who believe this one tend to be those who remember the days of old, when meta tags, hidden text, and <!-- comment spamming --> were common.


Those people tend to forget, though, that search engines have become significantly more accurate since then, and there’s a reason.  Search engines have realized the key to good results is eliminating factors that can be too easily manipulated by site owners and Webmasters.


In that vein, meta keyword tags have lost nearly all value; in fact, only Inktomi still factors them into its ranking algorithm, and it has admitted that their value is all but negligible.


If you want to verify this yourself, there’s an easy test: set up a Web page with meta keyword text not found anywhere in the body of the page.  Submit it to Google, and then run a site: search for the meta text.  You won’t get any results!


Description tags, similarly, do not hold much importance for ranking specifically, but they do have some value — search engines tend to include meta description text on results pages.


Now that we’ve established that meta tags aren’t important for ranking, I’ll recommend that you include them anyway.  They aren’t very important, but they sure can’t hurt, and they’re not much work to include.
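To see why description tags retain some value, here’s a minimal sketch of how a spider might pull a meta description out of a page for use as a results-page snippet, using only Python’s standard library (the page markup here is made up for illustration):

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collects the content of a <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "description":
                self.description = d.get("content")

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <meta ... /> the same way.
        self.handle_starttag(tag, attrs)

page = """<html><head>
<meta name="keywords" content="widgets, gadgets">
<meta name="description" content="A catalog of fine widgets.">
</head><body>Welcome!</body></html>"""

parser = MetaDescriptionParser()
parser.feed(page)
print(parser.description)  # A catalog of fine widgets.
```

The keywords tag is simply ignored here, which mirrors the point above: the description can show up in front of searchers, while the keyword list quietly does nothing.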


Search engines can’t index dynamic content


For years, the major search engines failed to index dynamic content.  That’s changed, though!


Unfortunately, the ability to index dynamic content should not be confused with a tendency to index all dynamic content.  Search engines are hesitant when they run into query strings (after the “?” in a URL); they can be thrown into infinite loops by these types of pages, and, for the sake of their own resources and so they don’t bring servers to their knees, spiders don’t index all dynamic content.


It’s not a guessing game, though — by keeping query variables (e.g., ?var=value) short and few, you can increase spiders’ chances of indexing dynamic pages.


Also, avoid sessions; spiders won’t index pages with session IDs because these pages don’t really exist at any specific URL.  Spiders are smarter than you think; they know that!


To put it simply, don’t let search engines prevent you from building dynamic sites.  Just pay attention to my guidelines and you should be fine.
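The two guidelines above, keep query variables short and few, and avoid session IDs in URLs, can be sketched as a small URL-cleaning routine.  The session parameter names and the two-variable cutoff below are illustrative assumptions, not anything the engines publish:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Common session-ID parameter names; this list is illustrative, not exhaustive.
SESSION_PARAMS = {"sid", "sessionid", "phpsessid", "jsessionid"}

def clean_url(url, max_params=2):
    """Strip session IDs from a URL and warn when it carries many query variables."""
    parts = urlparse(url)
    params = [(k, v) for k, v in parse_qsl(parts.query)
              if k.lower() not in SESSION_PARAMS]
    if len(params) > max_params:
        # Heuristic only: many query variables may discourage spiders.
        print("warning: %d query variables in %s" % (len(params), url))
    return urlunparse(parts._replace(query=urlencode(params)))

print(clean_url("http://example.com/catalog.php?cat=5&PHPSESSID=abc123"))
# http://example.com/catalog.php?cat=5
```

The cleaned URL is the same page at a stable address, which is exactly what a spider wants to see.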




Search engines will find every link to your site


Sometimes logic is the most valuable tool in the SEO’s toolbox.  Search engines don’t know every link that exists, and so they won’t index every link to your site that you know of.


Beyond that, they won’t even index all the links found on pages they have already indexed.  Spiders can’t follow certain links (such as JavaScript links), and that will affect the number of links counted by search engines.


Search engines will list every link to your site


Search engines don’t necessarily list all the links they find to a site.


In fact, they tend not to: Google, for instance, will only list links from pages whose PageRank exceeds a certain threshold (a “link:” search turns up only those linking pages).  The exact cutoff is part of a complex algorithm; for instance, if your only link is from a PR3 site, it will most likely be listed.  However, if you have 100 links and one is from a PR3 site, that one most likely will not be.


Note, however, that there is a difference between listing linking sites and considering them in the ranking algorithm.  Many more sites will be factored in than will be listed.


Search engine bots will index your entire site at once


I’ve heard people tell me all too often that “Googlebot has left, and he (or she!) didn’t even finish indexing my site!”


Bots will typically take multiple visits to index your entire site, particularly if it has large amounts of content.  The first visit also isn’t likely to be a deep crawl; it’s more likely a quick “check” on the state of your site.


There are a few variables, for those whose curiosity I’ve piqued, that will affect the manner in which your site is indexed.  Sites with high PageRank will tend to be indexed more quickly and more thoroughly than sites with low PageRank.


I do concede that this policy represents something of a catch-22.  You can’t be indexed thoroughly until you have a high PageRank, and you may be unable to receive a high PageRank without a thorough indexing.


Also, search engine bots seem limited to a certain depth when indexing, at least at first.  Root pages (such as a site’s home page) will be indexed before deeper pages (those buried several directories down).




A PageRank of 0 means that you’ve been banned (Google-specific)


Nope, this is another myth — so there’s hope yet for those of you with PageRank 0 sites who fear the worst!


If you’ve recently been added to the Google index, you’ll likely start with a PageRank of 0.  (Note, also, that this is different from having no PageRank; a PageRank of 0 is represented by a white bar in the Google Toolbar and no PageRank is represented by a gray one.)


If you do have a PageRank of 0 and are concerned that you may have been penalized, look objectively at your site.  If you’ve dropped to a PageRank of 0 without a significant change in links or have a lower PageRank than your incoming links would suggest you deserve, you may have been banned.


Don’t worry, though; contacting Google, or whichever engine is in question, can get you unbanned!


Sites will be penalized for being listed in link farms


Search engines are fair.  I would venture to say that there are no instances in which you’ll be penalized for something beyond your control.

It’s pretty logical, really, and it’s the reason sites won’t be castigated for being listed in link farms.  You can’t control who links to you, and so you won’t be penalized for receiving links.


That said, operating a link farm is a different story.  Bob Massa found out for himself last year, when his SearchKing had its PageRank lowered by Google.  Check out SearchEngineWatch’s article “Google Sued over PageRank Decrease.”


PageRank is determined solely by the number of incoming links


This is something of a half-truth — the general concept is there, but a grasp of the specifics is not.


PageRank is determined by four things: the number of incoming links, the “quality” of incoming links (i.e., the PageRank of the sites linking to you), the number of outgoing links on a page linking to you, and the number of outgoing links on your site.
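For reference, these factors line up with the simplified formula Brin and Page published for PageRank (the live algorithm Google runs is not public, so treat this as the textbook version, with damping factor $d$ usually quoted as about 0.85):

$$ PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)} $$

where $T_1, \ldots, T_n$ are the pages linking to page $A$ and $C(T_i)$ is the number of outgoing links on page $T_i$.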


To understand this, let’s eliminate some variables and make some grossly over-generalized assumptions, varying one factor at a time.  If all incoming links have the same PageRank and everything else remains the same, the more incoming links, the better.  Likewise, if everything else remains constant, the higher the PageRank of the sites linking to yours, the higher your PageRank will be.


Two more: if two sites with equal PageRank link to you, the one with fewer outgoing links will benefit you more.  And, finally, the outgoing links on your own pages matter too; each link “shares” a portion of the PageRank you pass along, so the more links on a page, the less weight each individual link carries.
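All four factors fall out of iterating the classic formula over a link graph.  Here is a toy computation on a made-up three-page graph (the page names and graph are purely illustrative):

```python
# Toy PageRank via simple iteration of the classic formula,
# PR(p) = (1 - d) + d * sum(PR(q) / outlinks(q)) over pages q linking to p,
# with damping factor d = 0.85.
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    pr = {p: 1.0 for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            incoming = sum(pr[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) + d * incoming
        pr = new
    return pr

# "hub" links to both other pages; "a" and "b" link back only to "hub".
graph = {"hub": ["a", "b"], "a": ["hub"], "b": ["hub"]}
ranks = pagerank(graph)
# "hub" ends up with the highest score: it receives two full votes,
# while "a" and "b" each receive only half of "hub"'s vote, because
# "hub" splits its PageRank across its two outgoing links.
```

Notice how the split at “hub” demonstrates the last two factors above: a linking page with fewer outgoing links hands each target a bigger share.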




Ah, here we are at the end of the list!


Hopefully you picked up something along the way; we didn’t cover all the commonly held misconceptions that exist, but you should know enough to avoid some of the common errors made by explorers in the world of search engine optimization.


And remember — there are plenty of resources available!  Next time you find yourself not knowing something you’d like to know, don’t fall into the trap of believing the myth; do the research!

