Google SearchWiki

You may have noticed a few additional buttons and icons in your Google search results lately. Google has rolled out a feature, called SearchWiki, that lets you promote good items in your search results and remove irrelevant ones. You have to be logged in with a Google account for these to show up.

Your ratings will be visible to others, although they will not affect the order of anyone else's results. This provides a Google-approved way to modify your personal search results. It will be interesting to see whether Google uses this input to shape overall search results now or in the future.

Quoted on MSNBC.com about Cleaning Up Your 'Digital Dirt'

I was quoted yesterday by an MSNBC.com columnist about what to do when your online history, as shown by Google search results for your name, begins to cause career problems.

Here is the main portion that quotes me:

Many of us may want to find ways to erase the negative information about us on the Web, but that may not be the best strategy.

“What to do when you don’t like the impression given by your online persona?” asks C. David Gammel, a corporate technology consultant. “The counterintuitive response is the best: Post even more content about yourself online.”

However, he adds: “The content should be of a nature that is at least neutral, at best positive, for your career prospects. Blog about your professional interests. Discuss research you have conducted yourself on a topic of interest.”

Gammel believes in burying the Internet skeletons in positive cyber dust. “Once the less savory items are pushed off your first page of ego search results on Google, you’ll be fine with most people,” he notes. “That’s why you have to post more, not less, to get rid of the impact of those skeletons.”

The same is true for organizations.

Google Says Dynamic URLs OK (Must update my soapbox!)

Google posted a bit of background on their official weblog today about how they process dynamic URLs. A dynamic URL is one full of parameters and symbols that humans can't easily read, such as example.com/page.php?id=42&sessionid=8f3a rather than example.com/products/blue-widget. A LOT of systems still create URLs like this, especially when the web page is generated from data stored in a database.

It used to be common knowledge that URLs with natural-language words in them did better in natural search placement than dynamic URLs because Google could process them more easily. It appears that is no longer operative. The key paragraph from today's post on that topic:

Official Google Webmaster Central Blog: Dynamic URLs vs. static URLs:

While static URLs might have a slight advantage in terms of clickthrough rates because users can easily read the urls, the decision to use database-driven websites does not imply a significant disadvantage in terms of indexing and ranking. Providing search engines with dynamic URLs should be favored over hiding parameters to make them look static.

While there is still benefit to displaying static URLs for the humans using your site, it seems that, from Google's perspective, it's not worth doing just for search engine placement. Interesting!

Robots.txt Protocol Enhanced by Big Search Engine Companies

I learned today, via Search Tools, that Microsoft, Yahoo! and Google have agreed to specific extensions to the robots.txt file protocol. All of their search engines will now honor additional directives. More info from Yahoo! and Google.

What is robots.txt, some of you may be asking? It is a simple text file you place on your web site to tell search engine spiders which parts of your site they should index and which they should ignore. It has been around for a long time, and these are the first additions to the standard in at least a decade.
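To make that concrete, here is a sketch of what a robots.txt file might look like with a couple of the extensions folded in. Wildcard matching and the Sitemap directive are among the additions the three engines have documented; the paths and sitemap URL below are placeholders, not anything from the announcement:

    # Apply these rules to all spiders.
    User-agent: *
    # Classic directive: keep spiders out of the staging area.
    Disallow: /staging/
    # Wildcard matching, one of the newly shared extensions:
    # skip any URL that carries a session ID parameter.
    Disallow: /*?sessionid=
    # Sitemap location, another extension all three engines now honor.
    Sitemap: http://www.example.com/sitemap.xml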

Internal Search Stats Coming to Google Analytics Soon

Google announced new features for Google Analytics this week. One of the best is the addition of internal search reporting, which will allow you to track queries on your internal search engine in pretty great detail. Here is a nice preview of the feature and how to use it effectively.

Tracking your internal search traffic (searches within your own site) yields excellent data for understanding what your site visitors are looking for and where they may be having trouble finding content.

World Bank 2.0: The BuzzMonitor

I just heard about a new open source application for tracking discussion of specific issues in social media (blogs, tags, podcasts, wikis, etc.) online: The BuzzMonitor. This was developed by the World Bank for their own purposes and then released as an open source application. From the about page:

Like many organizations, we started listening to blogs and other forms of social media by subscribing to a blog search engine RSS feed but quickly understood it was not enough. The World Bank is a global institution and we needed to listen in multiple languages, across multiple platforms. We needed something that would aggregate all this content, help us make sense of it and allow us to collaborate around it. At the time, no solution (either commercial or open source) met those requirements so we decided to build our own.

We were playing with Drupal, a solid, open-source content and community platform, for different pilots. Drupal being so flexible and module oriented, we decided to write the specifications for a “super aggregator” that would help people understand, follow and collaborate around mentions of the organization online.

I asked Pierre Guillaume, who announced it on the Social Media Measurement Group on Facebook, how they are using it internally at the World Bank. His response:

Thanks David. We are rolling it out to communicators across the bank with a guide on how to use tagging, voting, rss feeds etc…there is, not surprisingly, a bit of a learning curve both in terms of “getting” social media and using the tool but some champions are emerging, embedding findings obtained through the buzzmonitor in their regular comm and web reports, adding relevant bloggers to their contacts etc.. We also feature the most recently voted on items on a page available two clicks down from the intranet home page, for all staff to see.

Sounds like a great tool for raising awareness of how issues important to the Bank are evolving online. I recommend listening to the online conversation as a key activity for any organization, and the BuzzMonitor looks well suited to assist with that. I have downloaded the application and will give it a try this week.

Google Custom Search Business Edition

Google launched a new hosted search product this week: Google Custom Search Business Edition.

This will give you a Google-based search engine for your site, running on Google’s servers, without advertising or Google logos on it. You can also get search results as XML, which makes it possible to create a completely custom results page or to embed search results in parts of your site as related content.
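As a rough illustration, here is a minimal Python sketch of pulling those XML results and printing titles and links. The endpoint, the parameter names (cx, output, client), and the R/T/U element names are my assumptions based on Google's Custom Search XML documentation, not details from the announcement; check the current docs before relying on them:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed endpoint and parameters for Custom Search XML results.
    params = urllib.parse.urlencode({
        "cx": "YOUR_ENGINE_ID",   # placeholder: your search engine's ID
        "q": "annual report",     # the user's query
        "output": "xml_no_dtd",   # assumed flag for XML-formatted results
        "client": "google-csbe",  # assumed value for the Business Edition
    })
    url = "http://www.google.com/cse?" + params

    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    # In the XML results, each R element is one hit; T holds the title
    # and U the URL (assumed element names).
    for result in tree.findall(".//R"):
        print(result.findtext("T", default=""), result.findtext("U", default=""))

From there you can render the results in your own templates or splice them into related-content blocks on your pages.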

However, you cannot index any content behind a login, which will rule it out for most membership-based web sites. Searching secured content will still require one of their appliances.

It costs $100 a year for up to 5,000 pages and $500 for up to 50,000 pages.

Microsoft's Hammer

Is it just me or has Microsoft Office SharePoint Server 2007 (MOSS 2007) mania taken over the IT world?

I have heard lots of buzz about this package, especially in the association industry, but I’ve yet to see the overwhelming value in MOSS’s interfaces and services over previous versions of SharePoint. MOSS is nice for collaboratively managing documents and for search, but beyond that basic project work I think its interface gets in the way. It is a horrible community platform compared to many of the open source and low-cost solutions already available.

Not to mention the organizations that are diving in head first and planning on using MOSS (with MS CMS rolled in) as the total solution for their intranet and public web sites. There is a good reason that different classes of solutions have evolved for public and intranet sites: they have vastly differing requirements for most organizations.

My advice is to bide your time and carefully consider which nails you ultimately decide to whack with the MOSS hammer.

Big Book Stores and Amazon

So, when you compare Amazon to Barnes and Noble or Borders (just on book selling), how are they fundamentally different?

All three sell online and, while Amazon is still the best, the other two have reasonably easy interfaces for selling books. What is left? Physical stores. B&N and Borders have the liability and the asset of a physical retail presence in many communities across the country. However, they fail horribly to leverage the two together to improve overall sales.

If you are looking for a physical retail store, it is likely because you want to buy a book right away. If you are willing to wait a few days, you can just order online. But if you want it right now, say before you catch a flight that afternoon, you want to know whether the store near you carries the title before making the trek out there. Making retail inventory available for search by store seems like a no-brainer. It relieves floor staff from answering as many phone calls and lets customers find out whether they can buy the book immediately.

However, Borders buries this feature several levels down in their site and B&N doesn’t even offer it. What a wasted opportunity.

The ideal interface, I think, would be to set a cookie for the user’s zip code at some point and then offer local retail inventory results along with online inventory.
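Here is a minimal sketch of that idea in Python with Flask. The route, the cookie name, and the inventory lookup are all hypothetical placeholders, not anyone's actual system:

    from flask import Flask, jsonify, make_response, request

    app = Flask(__name__)

    # Hypothetical inventory keyed by (zip code, ISBN); a real site would
    # query the retailer's store inventory system instead.
    LOCAL_STOCK = {("20001", "9780316769488"): 3}

    @app.route("/search")
    def search():
        isbn = request.args.get("isbn", "")
        # Prefer an explicit zip in the query string, then fall back to the cookie.
        zip_code = request.args.get("zip") or request.cookies.get("zip", "")
        resp = make_response(jsonify({
            "isbn": isbn,
            "local_copies": LOCAL_STOCK.get((zip_code, isbn), 0),
            "available_online": True,  # online inventory lookup stubbed out
        }))
        if request.args.get("zip"):
            # Remember the shopper's zip so future searches include local results.
            resp.set_cookie("zip", zip_code, max_age=60 * 60 * 24 * 365)
        return resp

Once the cookie is set, every search page can show "in stock at your local store" next to the usual online availability, with no extra typing from the shopper.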

Gee, that sounds simple. Why don’t they do it? My guess would be that their performance measures don’t reward cross-selling between physical and online operations.