Understanding SEO Terms

Google PageRank

Google PageRank (named after Google co-founder Larry Page) is a link analysis algorithm used by Google that measures how many links point to a website or page and, more importantly, the quality or importance of the sites that provide those links. It uses a numerical scale from 0 (least important) to 10 (most important). In an attempt to "cheat the system", some website owners have tried to purchase links back to their websites in the hope of earning a higher PageRank. However, those low-quality links can have a negative impact and result in a lower PageRank. A website may even be penalized or removed from search results entirely, because Google gives priority to websites and pages that have quality backlinks and content that is valuable to human readers.
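
To make the "quality of the linking sites" idea concrete, here is a minimal Python sketch of a PageRank-style power iteration over a toy link graph. The four-page graph, damping factor, and iteration count are illustrative assumptions; this is not Google's production algorithm and says nothing about the 0-10 toolbar scale.

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# C receives links from both B and D, so it ends up with the highest score;
# D receives no inbound links at all and ends up with the lowest.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A"], "D": ["C"]}))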

What is Alexa Ranking?

This is a ranking system run by alexa.com (a subsidiary of amazon.com) that audits and makes public the frequency of visits to various websites. The algorithm by which the Alexa traffic rank is calculated is simple: it is based on the amount of traffic recorded over a three-month period from users who have the Alexa toolbar installed.

This traffic is measured using two parameters: reach and page views. Reach refers to the number of Alexa users who visit a particular site in one day. Page views, as the name suggests, count the number of times a particular page (URL) is viewed by Alexa users. Alexa.com makes it clear, though, that if a particular user visits the same URL multiple times on the same day, all of those visits are counted as one.

The first step of the ranking process is calculating the reach and number of page views for all sites on the Web on a daily basis. The Alexa ranking is then obtained by taking the geometric mean of reach and page views, averaged over a predefined period of time (three months).
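
As a rough Python illustration of that last step, the sketch below combines made-up daily reach and page-view figures with a geometric mean and averages the result over the period. The numbers and the exact weighting are assumptions; Alexa does not publish its pipeline at this level of detail.

import math

def alexa_style_score(daily_reach, daily_page_views):
    # Geometric mean of reach and page views for each day,
    # averaged over the whole (e.g. three-month) period.
    daily = [math.sqrt(r * p) for r, p in zip(daily_reach, daily_page_views)]
    return sum(daily) / len(daily)

# Three days of toy data; a higher score would translate into a better rank.
print(alexa_style_score([1200, 1500, 900], [5400, 6100, 4200]))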

Domain Age 

Domain age refers to the length of time that a website has been registered and active. Domain age conveys trust to website visitors and to the search engines.

Two factors are considered in the age of a domain name. The first is the length of time that the domain has been registered. The second is the age of the website itself, that is, how long it has been active and has had crawlable content on the web. Search engines measure this from the time the website is first crawled and from when the first inbound link to it is noticed.
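
The first factor, the registration date, can be checked with a WHOIS lookup. The short sketch below assumes the third-party python-whois package is installed; the package and its creation_date / expiration_date fields are an assumption used for illustration, not something required by the search engines.

import whois  # third-party "python-whois" package (assumed available)

record = whois.whois("example.com")
print(record.creation_date)    # when the domain was first registered
print(record.expiration_date)  # when the current registration lapses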

Domain age is important for SEO purposes because the age of a website is a search engine ranking factor. The search engines want to provide users with the best possible results. As a website ages, it becomes more trusted in the eyes of the search engines, as long as it has quality content and a robust portfolio of relevant inbound links established over time. An older domain conveys longevity, whereas there is more uncertainty about what a new website can or will offer to visitors.

While domain age does convey trust to the search engines, it is only one of many ranking factors, and owners of new domains shouldn't worry that their website will never be able to compete. It may take more time, but the focus should always be on quality content and gaining relevant inbound links.

Response Times

The 3 Important Limits

Summary: There are 3 main time limits to keep in mind when optimizing web and application performance.

The basic advice regarding response times has been about the same for thirty years:

  • 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
  • 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
  • 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
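
As a rough Python sketch of how these three limits might be applied in practice, the snippet below chooses feedback based on an estimated duration. The function, the threshold wiring, and the messages are illustrative assumptions, not part of any particular UI framework.

import time

def run_with_feedback(task, expected_seconds):
    # Under roughly 0.1 s the action feels instantaneous, and under 1 s the
    # user's flow of thought survives, so no extra feedback is shown.
    if expected_seconds <= 1.0:
        return task()
    # Beyond that, and certainly near the 10-second limit, show progress so
    # the user knows the system is working and roughly when it will finish.
    print("Working... this may take a moment.")
    start = time.time()
    result = task()
    print("Done in %.1f seconds." % (time.time() - start))
    return result

# Example: a task we expect to take a couple of seconds.
run_with_feedback(lambda: time.sleep(2), expected_seconds=2)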

What is a robots.txt file?

The robots exclusion standard, also known as the robots exclusion protocol or robots.txt protocol, is a standard used by websites to communicate with web crawlers and other web robots.

It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:

User-agent: *
Disallow: /

The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
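
To see the same check in code, the short Python sketch below uses the standard library's urllib.robotparser module to ask whether a given URL may be fetched; the user agent and URLs are simply the examples from above.

from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://www.example.com/robots.txt")
rp.read()  # download and parse the robots.txt rules

# With the rules shown above (Disallow: / for every robot),
# this prints False for any page on the site.
print(rp.can_fetch("*", "http://www.example.com/welcome.html"))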

There are two important considerations when using /robots.txt:

  • Robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention to it.
  • The /robots.txt file is publicly available. Anyone can see what sections of your server you don't want robots to use.

So don't try to use /robots.txt to hide information.

XML Sitemap

A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs in the site. This allows search engines to crawl the site more intelligently.
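
A minimal example of such a file, using the standard Sitemap protocol tags (the URL, date, and values below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>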