Why Keyword Stuffing Doesn’t Work – Decrypting the Basics of Google Search Engine Behavior
Keyword stuffing is an archaic and unethical search engine optimization (SEO) technique. It involves loading the meta tags and/or content of a web page with unnecessary quantities of keywords. Many (too many) use it in an attempt to increase search engine ranking and online visibility. Anybody in the SEO business knows that keyword stuffing is an outdated tactic that invites penalties from Google.
Yet people still try.
Saving a discussion of the psychology behind this behavior for another day, we can use keyword stuffing as a backdrop for learning the basics of Google search engine behavior. Considering Google claims over 66% of all search traffic, it is the natural choice as the featured search engine for this article.
The Google search engine and a business’s placement on its search engine results page (SERP) have the power to make or break that business. So it’s best for any business owner to understand the basics of Google search engine behavior.
It really all boils down to two things: A library packed with trillions (literally) of web pages, and arachnid minions that keep its shelves full and up-to-date.
The Dominant Duo of Search Engine Behavior:
1. The Spider: Spiders (AKA Googlebots) are automated programs meant to find what’s new on the Internet. They are controlled by algorithms, which can be likened to the brain and nervous system of each spider. They are constantly sent on expeditions to read content and find links. As they crawl, they make a copy of each page for the search engine to review. Spiders don’t strike only once; they return to each page time and again to look for changes. The only thing that can stop them is a robots.txt file.
2. The Index: The index can be thought of as a library of every page the spiders crawl upon. When a spider finds that a web page has changed, the index notes the change in its copy of that page in its catalogue. Its sole duty is to allow meaningful information to be found quickly (see the toy sketch after this list).
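To make that library concrete, here is a toy inverted index in Python. It is a minimal sketch, not Google’s actual implementation: the page contents, the stop-word list, and the example URLs are all made up for illustration.

```python
from collections import defaultdict

# A toy "library": each significant word maps to the set of pages it appears on.
index = defaultdict(set)

# Hypothetical pages a spider has copied back for review.
pages = {
    "example.com/home": "plumbing services and emergency plumbing repair",
    "example.com/blog": "how to fix a leaky faucet at home",
}

STOP_WORDS = {"a", "and", "at", "how", "the", "to"}  # insignificant words

for url, content in pages.items():
    for word in content.lower().split():
        if word not in STOP_WORDS:
            index[word].add(url)

# Lookup is a single dictionary access, which is what makes retrieval fast.
print(sorted(index["plumbing"]))   # ['example.com/home']
```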
The Web Page Process: From Creation to Search Engine Consumption
To build a web page, a web designer arranges the meta tags, content, images, and video on the page, then throws in the links and, voila! – a web page is born. Publishing it does not automatically land it in search results, however. It must first be crawled. This is where the spiders come in. A spider finds the web page by following a cascading trail of links from other pages. It then analyzes the words on the page, looking for and copying significant words. This process, called ‘web crawling’, can take anywhere from 24 hours to several weeks. It involves 4 steps (illustrated in the code sketch after the list):
1. The spider first consults the site’s robots.txt file to determine which pages it is welcome to crawl and which pages are off limits.
2. It consults a site map to orient itself and plan its crawl (hence the reason site maps are so important).
3. It then crawls (usually starting with the home page) and begins to index the significant words of the content, storing them away for the search engine to review.
4. The spider then follows any links (web addresses or URLs) that lead to other pages. And the process repeats itself.
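The sketch below walks through those four steps using only Python’s standard library. It is a bare-bones illustration under stated assumptions: the crawl function, the ten-page cap, and the simplified handling of the site map are inventions for brevity, and real Googlebot behavior is far more elaborate.

```python
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags (the links followed in step 4)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, index, max_pages=10):
    # Step 1: consult robots.txt to learn which pages are off limits.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(start_url, "/robots.txt"))
    robots.read()   # assumes the file is reachable

    # Step 2 (simplified): seed the crawl from the home page instead of
    # parsing a real site map.
    frontier, seen = [start_url], set()

    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen or not robots.can_fetch("*", url):
            continue
        seen.add(url)

        # Step 3: fetch the page and index its words for later review.
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        for word in html.lower().split():
            index.setdefault(word, set()).add(url)

        # Step 4: follow the links to other pages, and the process repeats.
        extractor = LinkExtractor()
        extractor.feed(html)
        frontier.extend(urljoin(url, link) for link in extractor.links)
```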
Once the spider has indexed the significant words of a web page, the search engine begins to decide which pages are ‘meaningful’ and which pages are not. A meaningful page is one that is well written, with keywords placed in the correct areas and with relevant links. More importantly, a meaningful page has content that pertains to its topic. Meaningful pages rank higher on the SERP than non-meaningful pages.
Google looks at several aspects of the words on each page:
• The number of times words appear on the page. If a word occurs more than once, it is likely that it is associated with the topic of the page.
• Where they appear on the page. For example, the search engine assigns higher value to pages that have relevant words located towards the top of the page, or “above the fold.”
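As a rough illustration of those two signals, the hypothetical scorer below counts occurrences and gives extra weight to words that appear early on the page. The function name, the 2x weight, and the 100-word ‘fold’ are made-up stand-ins; Google’s actual weighting is proprietary and far more sophisticated.

```python
def score_words(words, fold=100):
    """Count each word, weighting early ("above the fold") positions extra."""
    scores = {}
    for position, word in enumerate(words):
        weight = 2.0 if position < fold else 1.0   # earlier words count more
        scores[word] = scores.get(word, 0.0) + weight
    return scores

page = "plumbing repair tips for emergency plumbing at home".split()
print(score_words(page, fold=4))
# 'plumbing' scores highest: it appears twice, once above the fold
```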
These rules let the search engine rate each page’s meaningfulness, not to mention giving SEO experts ways to optimize their web pages. It is important to note that keyword phrases are not the only thing the Google spiders review – one must not disregard links and quality content.
Google wants meaningful pages because meaning translates into search-accuracy, which in turn leads to happy people who find pages that relate to the topic they are searching for. Their happiness translates into ‘repeat business’ for the search engine. And when it comes down to it, search engines are really just businesses… right?
So, where does keyword stuffing fit in? If the frequency of a word on the page increases meaningfulness, then why not just stuff the page chock full of keywords? Seems logical enough until you realize that keyword density has a curvilinear relationship with meaningfulness. Too little is not meaningful, but too much is not meaningful either – neither extreme provides the reader with meaningful information.
Meaningfulness decreases for the reader when the page is stuffed so full of keyword phrases that they suffocate the rest of the content. Thus there is a sweet spot that usually floats between 2% and 3% keyword density. In order to keep its readers happy, it is in the Google search engine’s best interest to penalize or even ‘blacklist’ any pages that utilize this method.
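The arithmetic behind that sweet spot is simple to check. The helper below is a minimal sketch; note that the 2–3% band is this article’s rule of thumb, not a published Google threshold.

```python
def keyword_density(text, keyword):
    """Percentage of the page's words that are the keyword."""
    words = text.lower().split()
    return 100.0 * words.count(keyword.lower()) / len(words)

# 2 occurrences out of 80 words = 2.5%, inside the rough 2-3% sweet spot;
# 20 occurrences out of 80 would be 25%, deep in keyword-stuffing territory.
body = ("plumbing " + "word " * 39) * 2
print(f"{keyword_density(body, 'plumbing'):.1f}%")   # 2.5%
```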
Such pages tumble to the bottom of the SERP, often never to return, and never to enjoy a steady flow of traffic.