Type this into an external text file (create the file, as before, in a text editor):
Place the code you want to remove from your page between the quotation marks in the parentheses.
Save this file on your web server.
Add a src= attribute to your <SCRIPT> tag so the file will be called from the HTML page.
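As a sketch, the finished tag might look like the following; the filename myscript.js is just a placeholder for whatever name you gave your saved file:

```html
<!-- myscript.js is a placeholder; use the name of the file you saved -->
<script type="text/javascript" src="myscript.js"></script>
```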
Follow these steps to create an external CSS file:
Select the style rules you want to remove from the HTML (everything between the <STYLE> and </STYLE> tags), save them in an external text file, and delete the original block, tags included, from the page.
Save this file to your web server.
Use the <LINK> tag to call the file in your HTML. For example,
<link rel="stylesheet" href="nameoffile.css" type="text/css">
As a rule of thumb, never copy text from Microsoft Word and paste it directly into your page; doing so leaves a great deal of formatting clutter in your HTML code. If you must use Word, save the file as an HTML file and then clean up the code with the Word-cleaning tool in your HTML-authoring program. As a last resort, you can clean it manually with search and replace in a text editor such as Notepad.
An image map is an image that contains multiple links. If you have these on your page, you should move all code that defines the links to the bottom of the page (right before </BODY>). Obviously, this doesn’t actually remove the clutter, but it does prevent it from clogging up the space between the top of your page and the content. This should make it easier for the searchbots to reach all pertinent information.
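As an illustrative sketch (the image name, coordinates, and link targets here are made up), the image can stay near the top while the map that defines its links sits at the very end of the document:

```html
<body>
  <!-- The image stays up top and references the map by name -->
  <img src="navbar.gif" usemap="#navmap" alt="Site navigation">

  <!-- ...page content... -->

  <!-- Link definitions moved down here, right before </body> -->
  <map name="navmap">
    <area shape="rect" coords="0,0,100,40" href="home.html" alt="Home">
    <area shape="rect" coords="100,0,200,40" href="products.html" alt="Products">
  </map>
</body>
```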
Let’s say you have dynamic web pages. What does this mean? Are they really energetic? Actually, no. It means the web server builds each page from a database at the moment the browser requests it; the page is created on the fly and exists only for that request, as opposed to a static page, which sits on the server as a ready-made file. When a dynamic page is requested, a program such as an ASP or PHP script, a CGI program, or a content management system assembles the data into the page.
The problem is that many searchbots have trouble reading dynamic web pages. There may be hundreds of near-identical pages with only minor differences between them, or the pages may change too frequently. The URLs themselves can change, leaving dead links behind. Dynamic URLs also carry parameters, and some search engines may not index pages whose URLs have two or more parameters. Fortunately, there are ways to find out whether your site is causing trouble for search engines.
First of all, check how many parameters are in your URL. Parameters appear after the question mark as name=value pairs (e.g. ?ObjectGroup_ID=81), with additional ones joined by ampersands (e.g. &Page=2). One parameter should be OK for the major search engines; two or more definitely increase the likelihood of a problem. Your best bet is a URL that looks static (i.e. one without parameters). To be sure, though, check each search engine to see whether your site is indexed (see their "help" pages for more information).
Here are some brief overviews on a number of ways to fix your dynamic web page so that search engines will look at it. Look into the ones that strike your interest:
You can modify URLs so that they don’t look like they’re referring to dynamic web pages by removing unnecessary characters (#, ?, *, !, and &) and reducing the number of parameters to one at the most.
The database program may have a way to create static copies of pages.
You could create static pages from the database by having it export the entire site each time it’s updated, generating static pages and URLs.
See if your server has a URL-rewriting tool, which lets you publish static-looking URLs that the server translates internally into the real dynamic ones.
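For example, on an Apache server with mod_rewrite enabled, a few lines in an .htaccess file can map a static-looking URL onto the real dynamic one. This is only a sketch; the script name and parameter here are illustrative, not something your server already has:

```apache
# Sketch of an .htaccess rewrite rule (assumes Apache with mod_rewrite).
# A request for /products/81.html is served internally by the dynamic page
# showproducts.php?ObjectGroup_ID=81, but visitors and searchbots see only
# the static-looking URL.
RewriteEngine On
RewriteRule ^products/([0-9]+)\.html$ showproducts.php?ObjectGroup_ID=$1 [L]
```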
Session IDs can cause many of the same problems as parameters if they are in a URL. A session ID is used to identify individual users visiting a site at a specific time. It allows the server to keep track of what pages the user views and what actions he/she takes, making it easier for web developers to create interactive sites.
The server sets cookies (text files containing information only that particular server can read) on the computer with the session ID and the developer can see where the user was at the end of his/her last session. The session IDs themselves are either created and stored in cookies or placed in the URL (usually done if the user’s browser is set not to accept cookies).
Most search engines probably won’t read a page if they recognize a session ID in the URL, because each time the searchbot returns, the original session ID will have expired, creating several URLs for the same page. And if a search engine does read the page, it either won’t index it or will index an unpredictable number of URLs all pointing to the same page. Fortunately, there are ways to keep session IDs out of your URLs.
First of all, you can store the session information in cookies on the user’s computer rather than in the URL. The catch is that some users block cookies, and searchbots generally don’t accept them, so the server must not require cookies to deliver the page; you have to decide whether to take that risk. You can also omit the session ID when a searchbot requests a page from the server. Some people advise against this technique because it can be misconstrued as cloaking (sending one page to the search engines and another to visitors), but because you actually are trying to show searchbots the same site that visitors see, it isn’t really cloaking.
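As a minimal sketch of that second approach, here is some illustrative server-side JavaScript. The bot names, the sid parameter, and both helper functions are assumptions made up for this example, not a real API:

```javascript
// Hypothetical User-Agent fragments that identify common searchbots.
const BOT_SIGNATURES = ["googlebot", "bingbot", "slurp"];

// Return true if the User-Agent string looks like a searchbot.
function isSearchBot(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  return BOT_SIGNATURES.some((sig) => ua.includes(sig));
}

// Build a link to a page: regular visitors get the session ID appended,
// while searchbots get the clean URL -- the same page either way.
function buildUrl(path, sessionId, userAgent) {
  if (isSearchBot(userAgent)) {
    return path;
  }
  return path + "?sid=" + encodeURIComponent(sessionId);
}
```

With this sketch, a normal browser request for /products.html gets the session ID appended to the URL, while a request whose User-Agent matches a known searchbot gets the plain /products.html, so the bot never sees a session ID at all.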
These are just a few of the major things that search engines hate. Some may have been obvious to SEO experts, but I hope you could at least get something useful out of this article. Otherwise it was probably a waste of your time. And I know you hate that.