Basic SEO: What Search Engines Hate

In my last article, I went over a few of the things that search engines love. Now, I’ll finish up the discussion with an article on what they absolutely hate, which includes everything from clutter to cookies. So if you’re in the mood to build a website, grab your pen and paper and start taking notes, because this article will offer some of the most important tricks of the trade.

A wise man once told me to KISS. No, not what you’re thinking. Not that there’s anything wrong with that. KISS: keep it simple, stupid. This can be applied to almost any walk of life, and especially to SEO. One way to tell whether your website is properly optimized is how cluttered it is. A cluttered page carries vast amounts of code that has nothing to do with the actual page content. If the text of a page makes up only around five percent of the total source code, you might be cluttered, my friend. Be on the lookout for excessive code used for JavaScripts, navigation bars, event handlers, Flash animation, and so on, especially above the page content, because it can prevent some search engines from ever reaching that content.

One way to avoid clutter is to use external JavaScripts instead of placing the scripts inside the page. The script lives in a separate file on the web server, and a tag in the page calls it. Doing this makes your life easier for a number of reasons beyond the SEO benefits. First of all, you can keep a library of all the scripts on your site in one directory. You can then change your HTML code without worrying about breaking the scripts. It also cuts download time when the same script is used on several pages, because the browser caches the script after downloading it once.

To create an external JavaScript file, just save the text that sits between the <SCRIPT></SCRIPT> tags (but not the tags themselves) in a text editor (e.g. Notepad). Then save that file on your web server as a .js file. Finally, refer to the external file by adding a src= attribute to the <SCRIPT> tag. Here is an example:


<script language="JavaScript" type="text/javascript"
src="/scripts/nameoffile.js"></script>
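
For instance, if the page originally held a small inline rollover script, the contents of /scripts/nameoffile.js (the file name and function names here are just placeholders) might look something like this:

    // nameoffile.js -- the code that used to sit between <SCRIPT> and </SCRIPT>
    function highlightNav(item) {
      item.style.backgroundColor = "#FFCC00";   // highlight a nav item on mouseover
    }
    function unhighlightNav(item) {
      item.style.backgroundColor = "";          // restore the original color on mouseout
    }

The page itself now carries only the one-line <SCRIPT src=...> call shown above, so there is far less code sitting between the searchbots and your content.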


Another JavaScript trick that will help you remove lengthy code is document.write. This can be useful for fancy navigation bars that change colors, Flash animation, and whatnot. Just follow these instructions:

  1. Type this in an external text file (create it the same way as above, in a text editor):

    <!--
    document.write("")
    //-->

  2. Place the code you want to remove from your page between the quotation marks inside the parentheses. If that code contains double quotes of its own, switch them to single quotes (or escape them) so the script still works.

  3. Save this file on your web server.

  4. Add a src= attribute to your <SCRIPT> tag so the file will be called from the HTML page.

    <script language="JavaScript" src="/scripts/nameoffile.js"
    type="text/javascript"></script>
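
To make the trick concrete, here is a rough sketch of what such an external file might contain if you pulled a graphical navigation bar out of the page (the links and image names are invented for illustration):

    <!--
    document.write("<div id='navbar'>"
      + "<a href='/index.html'><img src='/images/home.gif' alt='Home'></a>"
      + "<a href='/products.html'><img src='/images/products.gif' alt='Products'></a>"
      + "<a href='/contact.html'><img src='/images/contact.gif' alt='Contact'></a>"
      + "</div>")
    //-->

Note the single quotes inside the document.write call; they keep the HTML attributes from colliding with the outer double quotes.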


If you remove the navigation bar this way, you should add plain text navigation somewhere on the page (such as the bottom) so that the searchbots can read and follow your links. As my last article explained, searchbots generally see only the HTML the web server sends them, and they don’t run these types of JavaScripts.
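
That plain text navigation can be as simple as a short block of ordinary links (the page names are placeholders):

    <p>
      <a href="/index.html">Home</a> |
      <a href="/products.html">Products</a> |
      <a href="/contact.html">Contact</a>
    </p>

Because it is ordinary HTML, every searchbot can read and follow it, even while ignoring the scripted navigation bar.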

The next trick for reducing clutter on your page involves Cascading Style Sheets (CSS). It is basically the same technique we used for external JavaScript files, only this time I’m referring to external CSS files. Using these will not only help optimize your site, it will also make modifying your website a lot easier. If you want to change the body or heading text in some way, all you have to do is make the change in the CSS file and it will automatically affect the whole site. I’m pretty sure this is what CSS was designed for anyway.

Follow these steps to create an external CSS file:

  1. Select the style rules you want to remove from the HTML (everything between the <STYLE></STYLE> tags, but not the tags themselves) and save them in an external text file.

  2. Save this file to your web server.

  3. Use the <LINK> tag to call the file in your HTML. For example,

    <link rel="stylesheet" href="nameoffile.css" type="text/css">
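
As a rough illustration, nameoffile.css might then hold rules like these (the styles themselves are invented), taking the place of the <STYLE> block that used to sit in every page:

    /* nameoffile.css -- the rules that used to live between <STYLE> and </STYLE> */
    body   { font-family: Arial, sans-serif; font-size: small; }
    h1, h2 { color: #003366; }
    a      { text-decoration: none; }

Change a rule here once and every page that links to the file picks it up automatically.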


As a rule of thumb, you should never copy text from Microsoft Word and paste it straight into your page. Doing so will leave tons of formatting clutter in your HTML code. If you must use Word, save the file as an HTML file; you can then clean up the code with the Word-cleaning tool in your HTML-authoring program. As a last resort, you can also clean it manually using “search and replace” in a text editor (such as Notepad).
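
To give you an idea of what that clutter looks like, HTML saved from Word typically wraps even a one-line paragraph in markup along these lines (a loose illustration, not an exact export):

    <p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">
      <span style="font-size:12.0pt;font-family:'Times New Roman'">Hello, world.<o:p></o:p></span>
    </p>

All a searchbot needs is <p>Hello, world.</p>; every mso- attribute and <o:p> tag is pure clutter worth stripping out.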

An image map is an image that contains multiple links. If you have these on your page, you should move all code that defines the links to the bottom of the page (right before </BODY>). Obviously, this doesn’t actually remove the clutter, but it does prevent it from clogging up the space between the top of your page and the content. This should make it easier for the searchbots to reach all pertinent information.
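
Assuming a typical client-side image map, the rearrangement looks roughly like this (names and coordinates are invented): the image stays put, while the <MAP> block holding the <AREA> links moves down just before </BODY>.

    <!-- near the top of the page: only the image itself -->
    <img src="/images/navbar.gif" alt="Site navigation" usemap="#mainnav">

    <!-- ... the rest of the page content ... -->

    <!-- just before the closing BODY tag: the link definitions -->
    <map name="mainnav">
      <area shape="rect" coords="0,0,100,40" href="/index.html" alt="Home">
      <area shape="rect" coords="100,0,200,40" href="/products.html" alt="Products">
    </map>
    </body>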


Let’s say you have dynamic web pages. What does this mean? Are they really energetic? Actually, no. It means the web server builds the page by pulling data from a database program at the moment the page is requested; the page exists only when a browser asks for it, as opposed to a static page, which sits on the server as a ready-made file. Once the dynamic page is requested, the data is assembled by an ASP, PHP, or CGI program (or by a content management system).

The problem is that many searchbots have trouble reading dynamic web pages. For one thing, there could be hundreds of similar pages with only minor changes on each, or the pages might change too frequently. The URLs can also change, leading to dead links. Dynamic URLs also use parameters, and some search engines may not index pages whose URLs carry two or more parameters. Nevertheless, there are ways to find out whether your site is causing trouble for search engines.

First of all, you can check how many parameters are in your URLs. Parameters are the name=value pairs that appear after the question mark in a dynamic URL (e.g. ?ObjectGroup_ID=81), separated by ampersands if there is more than one. Having one parameter should be OK for the major search engines; having two or more definitely increases the likelihood of a problem. Your best bet is to have a URL that looks static (i.e. without parameters). To be sure, though, check each search engine to see whether your site is indexed (see their "help" pages for more information).
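
For example (these URLs are made up), the first URL below has one parameter, the second has three, and the third looks static even though the page behind it may still come from a database:

    http://www.example.com/products.php?ObjectGroup_ID=81
    http://www.example.com/products.php?ObjectGroup_ID=81&page=2&sort=price
    http://www.example.com/products/gifts/page2.html

The closer your URLs look to the first two, the more likely some searchbots are to pass them by.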

Here are brief overviews of a number of ways to fix your dynamic web pages so that search engines will look at them. Look into the ones that strike your interest:

  • You can modify URLs so that they don’t look like they’re referring to dynamic web pages by removing unnecessary characters (#, ?, *, !, and &) and reducing the number of parameters to one at the most.

  • The database program may have a way to create static copies of pages.

  • You could have the database program output the entire site as static pages each time the content is updated, giving you static pages and static URLs.

  • See if your server has a tool for rewriting URLs, so that visitors and searchbots see static-looking URLs while the server quietly converts them into the real (dynamic) ones behind the scenes.
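
Apache’s mod_rewrite is one widely used example of such a tool. The general idea (the URLs are invented) is a mapping like this:

    What the visitor and searchbot request:  /products/gifts/page2.html
    What the server actually serves it from: /products.php?cat=gifts&page=2

The searchbots only ever see the clean, static-looking address.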


Session IDs can cause many of the same problems as parameters if they are in a URL. A session ID is used to identify individual users visiting a site at a specific time. It allows the server to keep track of what pages the user views and what actions he/she takes, making it easier for web developers to create interactive sites.

The session ID itself is either stored in a cookie (a small text file on the user’s computer containing information that only the server that set it can read) or placed directly in the URL, which is usually done when the user’s browser is set not to accept cookies. Either way, the server can tell which requests belong to the same visit and, for instance, see where the user left off at the end of his/her last session.
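
When the session ID does end up in the URL, it usually looks something like one of these (the values are made up; PHP commonly uses a PHPSESSID parameter, while Java servlet containers append a ;jsessionid to the path):

    http://www.example.com/products.php?PHPSESSID=a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6
    http://www.example.com/products.html;jsessionid=A1B2C3D4E5F6A7B8C9D0E1F2

Every new visit gets a fresh ID, so the same page can appear under an endless supply of different URLs.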

Most search engines won’t read a page if they recognize a session ID in the URL, because each time the searchbot returns, the original session ID will have expired and the page will appear under yet another URL. And if a search engine does read the page, it either won’t index it or will index an unpredictable number of URLs all pointing to the same page. However, there are ways to avoid running your site through URLs with session IDs.

First of all, you can store the session information in cookies on the user’s computer, even though some users block cookies. The server should not require cookies, however, because searchbots generally don’t accept them. You have to decide whether you want to take that risk. You can also omit session IDs when a searchbot requests the page from the server. Some people advise against this technique because it can be misconstrued as cloaking (sending one page to the search engines and another to visitors), but because you are actually trying to show the search engines the same site that visitors see, it isn’t really cloaking.

These are just a few of the major things that search engines hate. Some may have been obvious to SEO experts, but I hope you got at least something useful out of this article. Otherwise it was probably a waste of your time. And I know you hate that.
