Why Natural Search Engine Placement Is Risky as a Primary Strategy

This Wall Street Journal article, How Search-Engine Rules Cause Sites to Go Missing, provides several examples of why relying on search-engine-driven traffic as your primary strategy is risky: your business can be significantly affected by relatively minor adjustments to search engine algorithms and policies.

The main example in the article is a news web site that wants to change its domain name from .net to .com for branding reasons (after paying $1 million for the .com address):

Such a simple change, Mr. Skrenta has discovered, could have disastrous short-term results. About 50% of visits to his news site come through a search engine — and about 90% of the time, that is Google. Some companies say their sites have disappeared from top search results for weeks or months after making address switches, due to quirky rules Google and other search engines have adopted. So the same user who typed “Anna Nicole Smith news” into Google last week and saw Topix.net as a top result might not see it at all after the change to Topix.com.

Even if traffic to Topix, which gets about 10 million visitors a month, dropped just 10%, that would essentially be a 10% loss in ad revenue, Mr. Skrenta says. “Because of this little mechanical issue, it could be a catastrophe for us,” he says.

Since Google ascribes credibility to results on domains it trusts, changing your domain name can have a significant impact, as Topix is discovering.

Any business model should be flexible enough to not be overly dependent on one source of business. For most organizations, search engine placement should be an important but not overarching strategy for the company.

Free Open Source Search Tool from IBM and Yahoo!

IBM has released a free enterprise search engine, IBM OmniFind Yahoo! Edition. The engine includes some technology from IBM’s OmniFind product, so it is probably positioned as an entry-level introduction to their commercial product. It is a direct challenge to Google’s Mini search appliance, according to this story on CIO.com. Yahoo! seems to have contributed some interface design expertise for the management interface.

Purple Search

Google has posted a video of a talk that Seth Godin, my favorite marketing guru these days, gave to Google employees recently. It is a synthesis of material from many of his books and is great stuff. Seth has been following up with several blog posts, going into more depth on points he discussed in the video.

I spotted this via several of the roughly 30 feeds I subscribe to. When something shows up in multiple feeds within a short time frame, you know there is something to it.

Nielsen on the importance of converting search engine ad traffic

Jakob Nielsen has posted a short article you should read about the importance of converting search engine advertising traffic: Search Engines as Leeches on the Web.

Search engines extract too much of the Web’s value, leaving too little for the websites that actually create the content. Liberation from search dependency is a strategic imperative for both websites and software vendors.

Worth a read if you are using advertising on search engines to drive traffic to your web site.

Boxwood Technology Adds RSS to Job Board Service

Boxwood Technology is pretty much on top of the heap for hosted job board services for associations. (Disclaimer: I was a client of theirs when I worked at ASHA and I serve with Boxwood Chairman John Bell on the ASAE Tech Council.) They have just added RSS feeds to their service, which is a fantastic extension. Now job seekers can subscribe to all new jobs or to the results of a specific search. After they subscribe, any newly posted jobs will appear in their newsreader of choice. Nice! They should mention this service on their web site.

For an example, see ASAE’s job center. There is an orange RSS button at the bottom of the screen.

A few improvements I think they could make:

  • Include an RSS autodiscovery tag in the page markup so people can subscribe more easily with newsreaders that look for the tag (see the sketch after this list).
  • Make the RSS button a direct link to the RSS feed rather than a pop-up window (not sure why you would want a pop-up unless they are trying to discourage indexing of the feeds).
  • Add buttons for subscribing easily via some of the more popular online newsreaders (such as Yahoo!, Bloglines, Google, etc.).
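For reference, the autodiscovery tag is just a <link> element in the page’s <head> that points at the feed URL; newsreaders scan the markup for it. Here is a minimal sketch, in Python, of how a reader might pick up such a tag. The sample markup and feed URL are made up for illustration, not Boxwood’s actual pages:

    from html.parser import HTMLParser

    class FeedLinkFinder(HTMLParser):
        """Collects feed URLs advertised via RSS/Atom autodiscovery <link> tags."""
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            if tag != "link":
                return
            attrs = dict(attrs)
            rel = (attrs.get("rel") or "").lower()
            if rel == "alternate" and attrs.get("type") in (
                "application/rss+xml", "application/atom+xml"
            ):
                self.feeds.append(attrs.get("href"))

    # A hypothetical job-board page advertising its feed in the <head>.
    sample = """<html><head><title>Job Center</title>
    <link rel="alternate" type="application/rss+xml"
          title="New jobs" href="http://jobs.example.org/feeds/new-jobs.xml">
    </head><body>...</body></html>"""

    finder = FeedLinkFinder()
    finder.feed(sample)
    print(finder.feeds)   # ['http://jobs.example.org/feeds/new-jobs.xml']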

Verity Ultraseek: Free Download

Verity’s Ultraseek search engine is now available as a free trial download. This is the tool we used at ASHA when I worked there. It offers an excellent ability to tune results, and the interface can be customized relatively easily using Python and HTML (although the templates were rather incomprehensible spaghetti code, which is normally hard to do with Python). Hopefully the spaghetti issue has been improved in the latest release.

Spotted via SearchTools.

Paging Robert Scoble: Tell msnbot to Calm Down

I’m posting this note with Robert Scoble’s name in it in order to get some attention from Microsoft about the behavior of their RSS bot, msnbot.

Over the past week, the bot has hit my site over 27k times for about 38 MB of bandwidth. The bot is almost exclusively hitting RSS feeds. However, most of the feeds it is fetching on my site are for individual entries, which let people track comments. Each feed is getting hit about 100 times a week, which seems like a big waste of effort for older entries that get few comments. Once a day should be plenty.
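If you want to check what a bot is doing to your own site, here is a rough sketch of how one might produce numbers like these from an access log. It assumes an Apache-style combined log format and a file named access.log; both are assumptions about a typical setup, not a description of my actual configuration:

    from collections import Counter

    hits = Counter()
    bytes_served = 0

    with open("access.log") as log:
        for line in log:
            if "msnbot" not in line:
                continue
            # Combined log format: the request line is the first quoted field,
            # followed by the status code and the response size in bytes.
            parts = line.split('"')
            request = parts[1].split()
            path = request[1] if len(request) > 1 else "-"
            hits[path] += 1
            status_and_size = parts[2].split()
            if len(status_and_size) > 1 and status_and_size[1].isdigit():
                bytes_served += int(status_and_size[1])

    print("total hits:", sum(hits.values()))
    print("total bytes:", bytes_served)
    for path, count in hits.most_common(10):
        print(count, path)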

So, Robert, when you see this in one of your ego notifications, please pass the word to whoever manages msnbot to chill out a bit on the hits. I love being indexed, but not at such a heavy load, which wastes both my bandwidth and Microsoft’s. If the load goes much higher, I might ban the bot for poor manners.

Search Log Analysis Book

Lou Rosenfeld is coauthoring a book on search log analysis. Excellent!

Based on my recent posting, it might not come as a huge surprise that I’m co-authoring (with Rich Wiggins) a new book on search log analysis (SLA). I’m happy to report that we’re already a couple chapters deep and I’m actually enjoying the process of writing, which usually requires a lot more self-discipline than my genetic programming supports.

I’m gung-ho on SLA because it seems so obvious, and yet it’s still uncommon in the worlds of UCD and, more broadly, web design. Rich and I hope our book helps clear away many barriers to SLA–practical, technical, and political–by collecting both how-to info and justification in a single, short book.

When I was at ASHA, we found that regularly reviewing our search logs told us a great deal about what people were looking for and where we needed to pay attention. We would identify searches that didn’t return the appropriate content, then either tweak that content so it ranked higher or promote it manually through best bet links. We would also identify searches for which we had no content at all, which gave us great ideas for what to add to the site.
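As a rough sketch of that kind of review, here is what tallying a search log might look like in Python. It assumes a simple tab-separated log with one search per line (the query, then the number of results returned); the file name and format are illustrative, not ASHA’s actual setup:

    from collections import Counter

    queries = Counter()
    zero_results = Counter()

    # Assumed format: "<query>\t<result count>" per line.
    with open("searchlog.tsv") as log:
        for line in log:
            query, _, count = line.rstrip("\n").partition("\t")
            query = query.strip().lower()
            if not query:
                continue
            queries[query] += 1
            if count.strip() == "0":
                zero_results[query] += 1

    print("Most frequent searches:")
    for query, n in queries.most_common(20):
        print(f"  {n:5d}  {query}")

    print()
    print("Frequent searches with no results (candidates for new content or best bets):")
    for query, n in zero_results.most_common(20):
        print(f"  {n:5d}  {query}")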

Can’t wait to see what Lou and Rich come up with in the book.

Crawling Robots!

Search Engine World recently crawled 75k robots.txt files. (A robots.txt file contains instructions for the search engines that index your site; you can use it to keep certain directories from being indexed, block specific crawlers, and so on.) They report on the common errors they found in the files.

The worst robots.txt error I ever saw was for a site whose owners complained that they never showed up in Google search results. I took a peek at their robots.txt file and, sure enough, someone had set it to disallow all search engines. Oops! It was probably a leftover from when the site was in development. Have you checked your robots.txt file recently?
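To illustrate, here is a small sketch using Python’s standard robots.txt parser. The first ruleset is the kind of “disallow everything” leftover described above; the second is a more typical file. The site and directory names are made up:

    import urllib.robotparser

    # A development leftover that blocks every crawler from the whole site.
    leftover_rules = [
        "User-agent: *",
        "Disallow: /",
    ]

    # A more typical file: block a couple of directories, allow everything else.
    typical_rules = [
        "User-agent: *",
        "Disallow: /cgi-bin/",
        "Disallow: /staging/",
    ]

    for label, rules in (("leftover", leftover_rules), ("typical", typical_rules)):
        parser = urllib.robotparser.RobotFileParser()
        parser.parse(rules)
        allowed = parser.can_fetch("Googlebot", "http://www.example.org/articles/index.html")
        print(f"{label}: Googlebot may fetch /articles/index.html -> {allowed}")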

Google Sitemap Protocol

Google has announced a new protocol that they are now using to better index web sites: the Sitemap Protocol.

“The Sitemap Protocol allows you to inform search engine crawlers about URLs on your Web sites that are available for crawling. A Sitemap consists of a list of URLs and may also contain additional information about those URLs, such as when they were last modified, how frequently they change, etc.”

I imagine it won’t be long before most of the major CMSs out there have the ability to create one of these sitemap files. The primary benefit is to reveal pages to search engine crawlers that they would not find via their normal crawling of your site.
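As a rough sketch of what generating one of these files might look like, here is a minimal Python example that writes a sitemap with the loc, lastmod, and changefreq fields the protocol describes. The page data and URLs are placeholders, and the namespace shown is the one from the sitemaps.org version of the schema:

    import xml.etree.ElementTree as ET

    # Placeholder page data; a CMS would pull this from its own database.
    pages = [
        {"loc": "http://www.example.org/", "lastmod": "2005-06-01", "changefreq": "daily"},
        {"loc": "http://www.example.org/about/", "lastmod": "2005-01-15", "changefreq": "monthly"},
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in pages:
        url = ET.SubElement(urlset, "url")
        for field in ("loc", "lastmod", "changefreq"):
            ET.SubElement(url, field).text = page[field]

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)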