Marketing Through Search Optimization
How people search and how to be found
on the Web
Second edition
The right of Alex Michael and Ben Salter to be identified as the authors of
this work has been asserted in accordance with the Copyright, Designs
and Patents Act 1988
Permissions may be sought directly from Elsevier’s Science & Technology Rights
Department in Oxford, UK: phone: (+44) (0) 1865 843830; fax: (+44) (0) 1865 853333;
email: permissions@elsevier.com. Alternatively you can submit your request online
by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting 'Obtaining permission to use Elsevier material'.
Notice
No responsibility is assumed by the publisher for any injury and/or damage to
persons or property as a matter of products liability, negligence or otherwise,
or from any use or operation of any methods, products, instructions or ideas
contained in the material herein.
Acknowledgements
We would both like to thank Sprite Interactive Ltd for their support with this book.
Introduction
Search engines provide one of the primary ways by which Internet users find websites. That’s why
a website with good search engine listings may see a dramatic increase in traffic. Everyone wants
those good listings. Unfortunately, many websites appear poorly in search engine rankings, or may
not be listed at all because they fail to consider how search engines work. In particular, submitting
to search engines is only part of the challenge of getting good search engine positioning. It’s also
important to prepare a website through ‘search engine optimization’. Search engine optimization
means ensuring that your web pages are accessible to search engines and are focused in ways that
help to improve the chances that they will be found.
This book provides information, techniques and tools for search engine optimization. This book
does not teach you ways to trick or ‘spam’ search engines. In fact, there is no such search engine
magic that will guarantee a top listing. However, there are a number of small changes you can
make that can sometimes produce big results.
The book looks at the two major ways search engines get their listings: through crawlers, and through human-powered directories.
Crawler-based search engines
A crawler-based search engine compiles its listings automatically, sending out a 'spider' to crawl the Web. If you change your web pages, crawler-based search engines eventually find these changes, and that can affect how you are listed. This book will look at the spidering process and how page titles, body copy and other elements can all affect the search results.
Human-powered directories
A human-powered directory, such as Yahoo! or the Open Directory, depends on humans for its
listings. The editors at Yahoo! will write a short description for sites they review. A search looks
for matches only in the descriptions submitted.
Changing your web pages has no effect on your listing. Things that are useful for improving a
listing with a search engine have nothing to do with improving a listing in a directory. The only
exception is that a good site, with good content, might be more likely to get reviewed for free
than a poor site.
A crawler-based search engine has three major elements. The first is the spider, which visits web pages, reads them, and follows links to other pages. The second is the index, sometimes called the catalog, which is like a giant book containing a copy of every web page that the spider finds. If a web page changes, then this book is updated with new information. Sometimes it can take a while for new pages or changes that the spider finds to be added to the index, and thus a web page may have been 'spidered' but not yet 'indexed'. Until it is indexed – added to the index – it is not available to those searching with the search engine.
Search engine software is the third part of a search engine. This is the program that sifts through
the millions of pages recorded in the index to find matches to a search and rank them in order
of what it believes is most relevant.
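The book describes these three parts in prose only; as a purely illustrative sketch, the toy Python model below (all URLs and page text are invented) shows how they fit together, including why a page can be 'spidered' but not yet 'indexed'.

```python
# Toy model of the three parts of a crawler-based search engine:
# the spider's findings, the index, and the search software.
# All URLs and page text are invented for illustration.

# 1. Pages the spider has fetched but not yet added to the index.
spidered_queue = [
    ("http://example.com/a", "search engine optimization basics"),
    ("http://example.com/b", "how web spiders crawl and index pages"),
]

# 2. The index (or catalog): the 'giant book' searchers actually query.
index = {}

def update_index():
    """Move spidered pages into the index; only then are they searchable."""
    while spidered_queue:
        url, text = spidered_queue.pop(0)
        index[url] = text

# 3. The search software: sift the index for matches and rank them.
def search(query):
    terms = query.lower().split()
    scored = []
    for url, text in index.items():
        # Naive relevance: how often the query terms appear on the page.
        score = sum(text.count(term) for term in terms)
        if score:
            scored.append((score, url))
    return [url for _, url in sorted(scored, reverse=True)]

print(search("index"))  # [] - page b is spidered, but not yet indexed
update_index()
print(search("index"))  # ['http://example.com/b'] - now it can be found
```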
To begin with, some search engines index more web pages than others, and some index web pages more often than others. The result is that no two search engines have exactly the same collection of web pages to search through, and this naturally produces differences when comparing their results.
Many web designers mistakenly assume that META tags are the 'secret' to propelling their web pages to the top of the rankings. However, not all search engines read META tags, and those that do may choose to weight them differently. Overall, META tags can be part of the ranking recipe, but they are not necessarily the secret ingredient.
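The book does not specify any engine's actual weighting. As a hedged illustration of how META text might count for less than body copy, here is a minimal Python sketch; the 0.5 weight and the sample page are invented for the example.

```python
from html.parser import HTMLParser

class MetaAndBody(HTMLParser):
    """Collect META description/keywords content plus the visible text."""
    def __init__(self):
        super().__init__()
        self.meta = ""
        self.body = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") in ("description", "keywords"):
            self.meta += " " + attrs.get("content", "")

    def handle_data(self, data):
        self.body.append(data)

def score(page_html, term, meta_weight=0.5):
    # Hypothetical recipe: META text counts, but for less than body copy.
    parser = MetaAndBody()
    parser.feed(page_html)
    body_text = " ".join(parser.body).lower()
    return body_text.count(term) + meta_weight * parser.meta.lower().count(term)

page = """<html><head><title>Travel deals</title>
<meta name="description" content="cheap flights and cheap hotels">
</head><body>Find flights to hundreds of destinations.</body></html>"""

print(score(page, "flights"))  # 1 body hit + 0.5 x 1 META hit = 1.5
```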
Search engines may also penalize pages, or exclude them from the index, if they detect search engine 'spamming'. An example is when a word is repeated hundreds of times on a page to increase its frequency and propel the page higher in the listings. Search engines watch for common spamming methods in a variety of ways, including following up on complaints from their users.
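The simplest such check is raw word frequency. The sketch below flags pages where a single word dominates the copy; the 10 per cent threshold is invented for illustration, and real engines use far more sophisticated, unpublished signals.

```python
from collections import Counter

def looks_stuffed(page_text, max_share=0.10, min_words=20):
    """Flag pages where any single word dominates the copy.
    The 10 per cent threshold is invented for illustration; real
    engines use far more sophisticated (and unpublished) checks."""
    words = page_text.lower().split()
    if len(words) < min_words:
        return False
    _, top_count = Counter(words).most_common(1)[0]
    return top_count / len(words) > max_share

spam = "buy widgets " * 100          # one word repeated hundreds of times
prose = ("search engines provide one of the primary ways by which "
         "internet users find websites and a site with good listings "
         "may see a dramatic increase in traffic from those users")

print(looks_stuffed(spam))   # True  - 'buy' is half of the page
print(looks_stuffed(prose))  # False - no single word dominates
```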
Off-the-page factors
Crawler-based search engines have plenty of experience now with webmasters who constantly
rewrite their web pages in an attempt to gain better rankings. Some sophisticated webmasters may
even go to great lengths to ‘reverse engineer’ the location/frequency systems used by a particular
search engine. Because of this, all major search engines now also make use of ‘off-the-page’
ranking criteria.
Off-the-page factors are those that a webmaster cannot easily influence. Chief among these is link
analysis. By analysing how pages link to each other, a search engine can determine both what a
page is about and whether that page is deemed to be ‘important’, and thus deserving of a ranking
boost. In addition, sophisticated techniques are used to screen out attempts by webmasters to
build ‘artificial’ links designed to boost their rankings.
Another off-the-page factor is click-through measurement. In short, this means that a search
engine may watch which results someone selects for a particular search, then eventually drop
high-ranking pages that aren’t attracting clicks while promoting lower-ranking pages that do pull
in visitors. As with link analysis, systems are used to compensate for artificial clicks generated by eager webmasters.
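Again the book gives no formula, so the following is only a plausible toy: blend each result's original rank with its observed click-through rate. The scoring rule and blend weight are invented for the example.

```python
def rerank_by_clicks(results, clicks, impressions, weight=0.3):
    """Blend the engine's original ordering with observed click-through
    rates (CTR). The scoring and the 0.3 blend weight are invented."""
    def ctr(url):
        return clicks.get(url, 0) / max(impressions.get(url, 1), 1)
    scored = []
    for position, url in enumerate(results):
        base = 1.0 / (position + 1)   # original rank expressed as a score
        scored.append((base + weight * ctr(url), url))
    return [url for _, url in sorted(scored, reverse=True)]

results = ["a.com", "b.com", "c.com"]            # engine's original order
clicks = {"a.com": 5, "c.com": 90}               # c.com pulls in visitors
impressions = {"a.com": 100, "b.com": 100, "c.com": 100}

print(rerank_by_clicks(results, clicks, impressions))
# ['a.com', 'c.com', 'b.com'] - c.com climbs past the ignored b.com
```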
Chapter 1
Introduction to search engine optimization
To implement search engine optimization (SEO) effectively on your website, you need to know what the people looking for your site actually search for, what your own goals are, and how best to implement what you learn. Each SEO campaign is different, depending on a number of factors, including the goals of the website and the budget available to spend on SEO.
This book will teach you the main techniques and areas that work today, but initially Chapter 1 will take you through the background to search optimization. First of all we will look at the history of search engines, to
give you a context to work in, and then we’ll take a look at why people use search engines,
what they actually search for when they do, and how being ranked highly will benefit your
organization. Next we will provide a critical analysis of choosing the right SEO consultancy (if
you have to commission an external agency).
The history of search engines
Before the Web, the Internet was essentially a collection of FTP (File Transfer Protocol) sites: anyone who wanted to share a file had first to set up an FTP server to make the file available. The only way people could find out where a file was stored was by word-of-mouth; someone would have to post on a message board where a file was stored.
The first ever search engine was called Archie, created in 1990 by Alan Emtage. Archie was the solution to the problem of finding information easily: the engine combined a data gatherer, which compiled listings of FTP sites, with an expression matcher that allowed it to retrieve files matching a user's search term or query. Archie 'spidered' the Internet, matched the files it had found against search queries, and returned results from its database.
In 1993, with the success of Archie growing considerably, the University of Nevada developed
an engine called Veronica. These two became affectionately known as the grandfather and
grandmother of search engines. Veronica was similar to Archie, but was for Gopher files rather
than FTP files. Gopher servers contained plain text files that could be retrieved in the same way
as FTP files. Another Gopher search engine also emerged at the time, called Jughead, but this
was not as advanced as Veronica.
The next major advance in search engine technology was the World Wide Web Wanderer,
developed by Matthew Gray. This was the first ever robot on the Web, and its aim was to track
the Web’s growth by counting web servers. As it grew it began to count URLs as well, and this
eventually became the Web’s first database of websites. Early versions of the Wanderer software
did not go down well, as they caused a loss of performance by scouring the Web and accessing the same pages many times in a day; however, this was soon fixed. The World Wide Web
Wanderer was called a robot, not because it was a robot in the traditional sci-fi sense of the
word, but because on the Internet the term robot has grown to mean a program or piece of
software that performs a repetitive task, such as exploring the net for information. Web robots
usually index web pages to create a database that then becomes searchable; they are also known
as ‘spiders’, and you can read more about how they work in relation to specific search engines in
Chapter 4.
After the development of the Wanderer, a man called Martijn Koster created a new type of web
indexing software that worked like Archie and was called ALIWEB. ALIWEB was developed
in the summer of 1993. It was evident that the Web was growing at an enormous rate, and
it became clear to Martijn Koster that there needed to be some way of finding things beyond
the existing databases and catalogues that individuals were keeping. ALIWEB actually stood
for ‘Archie-Like Indexing of the Web’. ALIWEB did not have a web-searching robot; instead
of this, webmasters posted their own websites and web pages that they wanted to be listed.
ALIWEB was in essence the first online directory of websites; webmasters were given the
opportunity to provide a description of their own website and no robots were sent out, resulting
in reduced performance loss on the Web. The problem with ALIWEB was that webmasters had to submit their own special index file in a specific format, and most of them either did not understand how to create this file or did not bother to learn. ALIWEB therefore
suffered from the problem that people did not use the service, and so it remained a relatively small directory. It was still a landmark, however, as the first directory of websites of its kind.
The World Wide Web Wanderer inspired a number of web programmers to work on the
idea of developing special web robots. The Web continued growing throughout the 1990s, and
more and more powerful robots were needed to index the growing number of web pages. The
main concept behind spiders was that they followed links from web page to web page – it was
logical to assume that every page on the Web was linked to another page, and by searching
through each page and following its links a robot could work its way through the pages on
the Web. By continually repeating this, it was believed that the Web could eventually be
indexed.
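That link-following loop is easy to picture in code. The sketch below crawls an invented in-memory 'web' breadth-first, visiting each page exactly once, so it runs without any network access.

```python
from collections import deque

# A toy in-memory 'web': each URL maps to the links found on that page.
WEB = {
    "http://a.example/": ["http://b.example/", "http://c.example/"],
    "http://b.example/": ["http://c.example/"],
    "http://c.example/": ["http://a.example/", "http://d.example/"],
    "http://d.example/": [],
}

def crawl(start):
    """Follow links from page to page, visiting each URL once."""
    seen, queue = {start}, deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in WEB.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("http://a.example/"))   # reaches all four pages from one start
```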
At the end of December 1993 three search engines were launched that were powered by these
advanced robots; these were the JumpStation, the World Wide Web Worm, and the Repository
Based Software Engineering Spider (RBSE). JumpStation is no longer in service; while it ran, it collected the title and header from web pages and used a retrieval system to match these to search queries. The matching system searched through its database in a linear fashion, and as the Web grew it became so slow that it eventually ground to a halt.
The World Wide Web Worm indexed titles and URLs of web pages, but like the JumpStation
it returned results in the order that it found them – meaning that results were in no order of
importance. The RBSE spider got around this problem by actually ranking pages in its index
by relevance.
All the spiders that were launched around this time, including Architext (the search software that
became the Excite engine), were unable to work out actually what it was they were indexing;
they lacked any real intelligence. To get around this problem, a product called EINet Galaxy was
launched. This was a searchable and browsable directory, in the same way Yahoo! is today (you
can read more about directories in Chapter 4). Its website links were organized in a hierarchical
structure, which was divided into subcategories and further subcategories until users got to the
website they were after. Take a look at the Yahoo! directory for an example of this in action today.
The service, which went live in January 1994, also contained Gopher and Telnet search features,
with an added web page search feature.
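The hierarchical structure described above can be pictured as nested categories, each level narrowing the topic until the user reaches a list of hand-reviewed sites. The category tree and site names in this sketch are invented for illustration.

```python
# A directory is a hierarchy of categories, each level narrowing the
# topic until the user reaches a list of hand-reviewed sites. The
# category tree and sites below are invented for illustration.
directory = {
    "Computers": {
        "Internet": {
            "Searching": ["example-searchnews.com", "example-webguide.com"],
        },
        "Software": ["example-downloads.com"],
    },
    "Recreation": {
        "Travel": ["example-travel.com"],
    },
}

def browse(path):
    """Walk down the tree, e.g. Computers > Internet > Searching."""
    node = directory
    for category in path:
        node = node[category]
    return node

print(browse(["Computers", "Internet", "Searching"]))
```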
The next significant stage came with the creation of the Yahoo! directory in April 1994, which
began as a couple of students’ list of favourite web pages, and grew into the worldwide phe-
nomenon that it is today. You can read more about the growth of Yahoo! in Chapter 4 of this
book, but basically it was developed as a searchable web directory. Yahoo! guaranteed the quality
of the websites it listed because they were (and still are) accepted or rejected by human editors.
The advantage of directories, besides their guaranteed quality, was that users could also read a title and description of the site they were about to visit, making it easier to choose a relevant site.
The first advanced robot, which was developed at the University of Washington, was called
WebCrawler (Figure 1.1). This actually indexed the full text of documents, allowing users to
search through this text, and therefore delivering more relevant search results.
WebCrawler was eventually adopted by America Online (AOL), who purchased the system.
AOL ran the system on its own network of computers, because the strain on the University of
Washington’s computer systems had become too much to bear, and the service would have been
shut down otherwise. WebCrawler was the first search engine that could index the full text of
a page of HTML; before this all a user could search through was the URL and the description
of a web page, but the WebCrawler system represented a huge change in how web robots
worked.
The next two big guns to emerge were Lycos and Infoseek. Lycos had the advantage in the sheer size of its index; it launched on 20 July 1994 with 54 000 documents indexed, and by January 1995 had indexed 1.5 million. When Infoseek launched it was not original in its
technology, but it sported a user-friendly interface and extra features such as news and a directory,
which won it many fans. In 1999, Disney purchased a 45 per cent stake in Infoseek and integrated
it into its Go.com service (Figure 1.2).
In December 1995 AltaVista came onto the scene and was quickly recognized as the top search
engine due to the speed with which it returned results (Figure 1.3). It was also the first search
engine to use natural language queries, which meant users could type questions in much the
same way as they do with Ask Jeeves today, and the engine would recognize this and not return
irrelevant results. It also allowed users to search newsgroup articles, and gave them search ‘tips’
to help refine their search.
On 20 May 1996 Inktomi Corporation was formed and HotBot was created (Figure 1.4).
Inktomi’s results are now used by a number of major search services. When it was launched
HotBot was hailed as the most powerful search engine, and it gained popularity quickly. HotBot
claimed to be able to index 10 million web pages a day; it would eventually catch up with
itself and re-index the pages it had already indexed, meaning its results would constantly stay up
to date.
Around the same time a new service called MetaCrawler was developed, which searched a
number of different search engines at once (Figure 1.5). This got around the problem, noticed
by many people, of the search engines pulling up completely different results for the same search.
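A metasearch service in miniature fans the query out to several engines and merges the results, dropping duplicates. In the sketch below the two 'engines' are stand-in functions returning invented results, since each real engine covers a different slice of the Web.

```python
# Metasearch in miniature: send one query to several engines and merge
# the results. The two 'engines' are stand-in functions returning
# invented results, since each real engine indexes different pages.
def engine_a(query):
    return ["a.com/page1", "shared.com/hit", "a.com/page2"]

def engine_b(query):
    return ["b.org/result", "shared.com/hit"]

def metasearch(query, engines):
    """Interleave results from each engine, dropping duplicates."""
    merged, seen = [], set()
    pools = [engine(query) for engine in engines]
    for rank in range(max(len(pool) for pool in pools)):
        for pool in pools:
            if rank < len(pool) and pool[rank] not in seen:
                seen.add(pool[rank])
                merged.append(pool[rank])
    return merged

print(metasearch("search optimization", [engine_a, engine_b]))
# ['a.com/page1', 'b.org/result', 'shared.com/hit', 'a.com/page2']
```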