Saturday, November 26, 2011

An Explanatory Guide To Search Engines

By Martin Rochester


A good starting point for visualising the World Wide Web is to imagine it as a network of stations in an underground train system, where each stop is a web page or other document. Just as the trains work their way around the network of stops using the tracks to guide them, search engines navigate using their equivalent: links. There are billions of documents on the Web and they are all connected to one another by links. "Spiders" and "crawlers" are the automated programs that move along these links, gathering information from the billions of web pages and documents they find. Once the data has been logged it can be analysed in the next step of the procedure. This is known as "information retrieval" and the findings are called "hits". Three search engines dominate the commercial search engine market.
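To make the crawling step concrete, the sketch below follows links outward from a starting page, visiting each page once. The tiny in-memory "web" and the page names are invented purely for illustration; a real spider fetches live pages over the network and has to cope with scale, politeness rules and duplicates.

    from collections import deque

    # A toy "web": each page name maps to the pages it links to.
    # This stands in for the real Web's billions of linked documents.
    TOY_WEB = {
        "home": ["about", "products"],
        "about": ["home", "contact"],
        "products": ["home", "widget-a", "widget-b"],
        "widget-a": ["products"],
        "widget-b": ["products", "contact"],
        "contact": ["home"],
    }

    def crawl(start_page):
        """Follow links outward from start_page, visiting each page once,
        much as a spider works its way around the network of stops."""
        visited = set()
        queue = deque([start_page])
        while queue:
            page = queue.popleft()
            if page in visited:
                continue
            visited.add(page)
            print("Crawling:", page)
            for link in TOY_WEB.get(page, []):
                if link not in visited:
                    queue.append(link)
        return visited

    crawl("home")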

The data collected by the "spider" and "crawler" robots is stored on a network of hard drives housed in data centres in cities around the world, all connected together in one network.

The search engine operators store specially selected sections of the pages they visit so that when users request information the necessary pages can be found as rapidly as possible - often in well under a second. Having set such high standards for the speed of information retrieval, the search engines must continue to meet them or run the risk of upsetting users.
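One common way to achieve that kind of speed is an inverted index: instead of re-reading every stored page at query time, the engine records in advance which pages contain which words, so answering a query becomes a direct lookup. The pages and wording below are invented for illustration; real indexes are vastly larger and more sophisticated.

    from collections import defaultdict

    # A tiny stand-in for the stored sections of crawled pages.
    PAGES = {
        "home": "welcome to our guide to search engines",
        "about": "search engines crawl the web by following links",
        "products": "our widget helps web pages rank in search results",
    }

    def build_index(pages):
        """Map every word to the set of pages containing it, so a query
        can be answered by lookup instead of re-reading every page."""
        index = defaultdict(set)
        for name, text in pages.items():
            for word in text.split():
                index[word].add(name)
        return index

    index = build_index(PAGES)
    print(index["search"])  # the pages mentioning "search", found instantly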

Search engines sift through this mass of data and return the results that are relevant or useful to the original enquiry - the "search query". The search engine's processes calculate the importance or relevance of the findings to make it as straightforward as possible for the user to find the information they requested. Many different factors are used to ascertain the relevance of search engine results, and several types of search engines exist, all using different techniques. Early search engine technology considered a result relevant simply if it contained the keyword used. That early approach was simplistic compared with the multi-layered systems used now, and this very simplicity led to the problem of irrelevant results being listed just because they contained the keywords.
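The weakness of that keyword-only approach is easy to demonstrate. In the rough sketch below (the pages and text are made up), relevance is nothing more than a count of how often the query words appear, so a page that mindlessly repeats the keywords beats a genuinely useful one.

    def keyword_score(query, text):
        """Early-style relevance: one point for every occurrence of every
        query word, with no regard for context or quality."""
        words = text.lower().split()
        return sum(words.count(term) for term in query.lower().split())

    pages = {
        "useful-guide": "search engines rank pages by many factors",
        "spammy-page": "search search search engines engines engines",
    }

    for name, text in pages.items():
        print(name, keyword_score("search engines", text))
    # The spammy page scores highest purely by repeating the keywords.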

The other component that comes into play when sorting through the findings is establishing what is important to the user. A page or document's popularity is used as a crucial factor in the mathematical calculations that determine how important the information is likely to be to the person conducting the search; measures like this are known as "ranking factors". The basis for this philosophy is that information proven to be popular with other users is very likely to be of value to further users. Ranking the results from highest to lowest makes it as convenient as possible for the user to find the information they require. Because the big commercial search engines keep the exact details of how they rank websites secret, the field of Search Engine Optimisation has evolved, offering companies a way of ascending the search engine rankings.
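As a final illustration, the sketch below blends a relevance score with a popularity "ranking factor" and sorts the results from highest to lowest. The 0.7/0.3 weights and the example pages are entirely invented; no commercial engine publishes its real formula, which is precisely why SEO exists.

    def rank(results):
        """Combine relevance with a popularity ranking factor and sort
        from highest to lowest score. The weights are illustrative only."""
        scored = [
            (0.7 * r["relevance"] + 0.3 * r["popularity"], r["page"])
            for r in results
        ]
        return sorted(scored, reverse=True)

    hits = [
        {"page": "well-known-guide", "relevance": 0.6, "popularity": 0.9},
        {"page": "obscure-but-exact", "relevance": 0.9, "popularity": 0.1},
        {"page": "keyword-stuffed", "relevance": 0.8, "popularity": 0.0},
    ]

    for score, page in rank(hits):
        print(round(score, 2), page)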



