Last summer, I read the article by Kas Thomas of CMS Watch titled “Best Bets – a Worst Practice” with some interest. I found his thesis provocative and posted a note to the SearchCoP community asking for others’ insights on the use of best bets. I received a number of responses taking issue with Kas’s concept of what best bets are, and also some responses describing different means of managing best bets (hopefully without requiring the “serious amounts of human intervention” Kas describes).
In this post, I’ll provide a summary of sorts, describe some of the approaches people shared for managing best bets, and then explain how we have managed them ourselves.
Kas’ thesis is that best bets are not a good practice because they are largely a hack layered on top of a search engine and require significant manual intervention. Further, if your search engine isn’t already providing access to appropriate “best bets” for queries, you should get yourself a new search engine.
Are Best Bets Worth the Investment?
Some of the most interesting comments from the discussion thread on the SearchCoP include the following (I’ll try to paint as cohesive a picture of the sentiment as I can, but will only quote parts of the discussion – if I have portrayed anyone’s intent incorrectly, that’s my fault and not the original author’s):
From Tim W:
“Search analytics are not used to determine BB … BB are links to commonly used, enterprise resources that the search engine may not always rank highly, for a number of reasons. For example, lack of metadata, lack of links to the resource and content that does not reflect how people might look for the document. Perhaps it is an application and not a document at all.”
From Walter U:
“…manual Best Bets are expensive and error-prone. I consider them a last resort.”
From Jon T:
“Best Bets are not just about pushing certain results to the top. It is also about providing confidence in the results to users.
If you separate out Best Bets from the automatic results, it will show a user that these have been manually singled out as great content – a sign that some quality review has been applied.”
From Avi R:
“Best Bets can be hard to manage, because they require resources.
If no one keeps checking on them, they become stale, full of old content and bad links.
…
Best Bets are also incredibly useful.
They’re good for linking to content that can’t be indexed, and may even be on another site entirely. They’re good for dealing with … all the sorts of things that are obvious to humans but don’t fit the search paradigm.”
So, lots of differing opinions on best bets and their utility, I guess.
A few more pieces of background for you to consider: Walter U has posted on his blog (Most Casual Observer) a great piece titled “Good to Great Search” that discusses best bets (among other things); and, Dennis Deacon posted an article titled, “Enterprise Search Engine Best Bets – Pros & Cons” (which was also referenced in Kas Thomas’ post). Good reading on both – go take a look at them!
My own opinion – I believe that best bets are an important piece of search, and I agree with Jon T’s comment above that their presence (and, hopefully, quality!) gives users some confidence that human intelligence is going into the presentation of the search results as a whole. I also have to agree with Kas’s argument that search engines should be able to consistently place the “right” item at the top of results, but I do not believe any search engine can really do that today – there are still many issues to deal with (see my posts on coverage, identity, and relevance for my own insights on some of the major issues).
That being said, I also agree that you need to manage best bets in a way that does not cost your organization more than the value they provide – or to manage them in a way that realizes that value in multiple ways.
Contrary to what Tim W says, and as I have written about in my posts on search analytics (especially on the use of search results usage), I do believe you can use search analytics to inform your best bets, though they do not provide a complete solution by any means.
Managing Best Bets
From here on out, I’ll describe some of the ways best bets can be managed – the first few are a summary of what people shared on the SearchCoP community, and then I’ll provide some more detail on how we have managed them. The emphasis (bolding) is my own, highlighting what I think are important points of differentiation.
From Tim W:
“We have a company Intranet index; kind of a phone book for web sites (A B C D…Z). It’s been around for a long time. If you want your web site listed in the company index, it must be registered in our “Content Tracker” application. Basically, the Content Tracker allows content owners to register their web site name, URL, add a description, metadata and an expiration date. This simple database table drives the Intranet index. The content owner must update their record once per year or it expires out of the index.
This database was never intended for Enterprise Search but it has proven to be a great source for Best Bets. We point our ODBC Database Fetch (Autonomy crawler) at the SQL database for the Content Tracker and we got instant, user-driven, high quality Best Bets.
Instead of managing 150+ Best Bets myself, we now have around 800 user-managed Best Bets. They expire out of the search engine if the content owner doesn’t update their record once per year. It has proven very effective for web content. In effect, we’ve turned over management of Best Bets to the collective wisdom of the employees.”
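To make the mechanics concrete, here is a minimal sketch of how a registration table like that could feed a best bets index, with expired records dropping out automatically. The schema, names, and SQLite usage are my own assumptions for illustration, not Tim’s actual implementation (which sits behind an Autonomy ODBC fetch):

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical "Content Tracker"-style table; all names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE content_tracker (
        url         TEXT PRIMARY KEY,
        title       TEXT NOT NULL,
        description TEXT,
        keywords    TEXT,
        expires_on  TEXT NOT NULL  -- owner must renew once per year
    )
""")
conn.execute(
    "INSERT INTO content_tracker VALUES (?, ?, ?, ?, ?)",
    ("http://intranet/payroll", "Payroll", "Pay stubs and tax forms",
     "pay, salary, W-2", (date.today() + timedelta(days=365)).isoformat()),
)

def best_bet_feed(conn):
    """Return only unexpired records; anything the owner has not
    renewed silently drops out of the best bets index."""
    return conn.execute(
        "SELECT url, title, description, keywords FROM content_tracker "
        "WHERE expires_on >= ?",
        (date.today().isoformat(),),
    ).fetchall()

# A database crawler (like the ODBC fetch Tim mentions) would run a
# query along these lines on each indexing pass.
for row in best_bet_feed(conn):
    print(row)
```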
From Jim S:
“We have added an enterprise/business group best bet keyword/phrase metadata.
All documents that are best bets are hosted through our WCM and have a keyword meta tag added to indicate they are a best bet. This list is limited and managed through a steering team and search administrator. We primarily only do best bets for popular searches. Employees can suggest a best bet – both the term and the associated link(s). It is collaborative/wiki-like but still moderated, and in the end approved or rejected by a team. There is probably less than one best bet suggestion a month.
If a document is removed or deleted the meta data tag also is removed and the best bet disappears automatically.
Our WCM also has a required review date for all content. The date is adjustable so that content will be deactivated at a specific date if the date is not extended. This is great for posting information that has a short life, as well as for requiring content owners to interact with the content at least every 30 months (maximum) to verify that it is still relevant to the audience. The content is not removed from the system; rather, it’s deactivated (unpublished) so it is no longer accessible, and the dynamic links and search index automatically remove the invalid references. The content owner can reactivate it by setting the review date into the future.
If an external link (not one in our WCM) is classified as a best bet, then a WCM redirect page is created that stores the best bet meta tag. Of course, it has a review/expiration so the link doesn’t go on forever, and our link testing can flag if the link is no longer responding. If the document is in the DMS it would rarely be deleted. In normal cases it would be archived and an archive note would be placed to indicate the change. Thus no broken links.
Good content engineering on the front end will help automate the maintenance on the back end to keep the quality in search high.”
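To illustrate the embedded-flag idea, here is a small sketch of how an indexer could spot a best bet while crawling WCM-published pages. The meta tag name and the parsing approach are my assumptions, not necessarily what Jim’s system actually emits:

```python
from html.parser import HTMLParser

# Hypothetical convention: pages flagged as best bets carry a
# <meta name="bestbet.keywords" ...> tag; the tag name is invented here.
class BestBetDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.is_best_bet = False
        self.terms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "bestbet.keywords":
            self.is_best_bet = True
            self.terms = [t.strip() for t in a.get("content", "").split(",")]

page = ('<html><head>'
        '<meta name="bestbet.keywords" content="benefits, 401k">'
        '</head><body>...</body></html>')
detector = BestBetDetector()
detector.feed(page)
print(detector.is_best_bet, detector.terms)  # True ['benefits', '401k']
```

Because the flag travels with the content, unpublishing or deleting the page removes the best bet automatically, just as Jim describes.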
The first process is external to the content and doesn’t require modifying the content (assuming I’m understanding Tim’s description correctly). There are obvious pros and cons to this approach.
By contrast, the second process embeds the “best bet” attribution in the content (perhaps more accurately in the content management system around the content) and also embeds the content in a larger management process – again, some obvious pros and cons to the approach.
Managing Best Bets at Novell
Now for a description of our process. The process and tools in place in our solution are similar to what Tim W described. I spoke about this topic at the Enterprise Search Summit West in November 2007, so you might be able to find the presentation there (though I could not in a few minutes of searching just now).
With the search engine we use, the results displayed as best bets are actually just a secondary search performed whenever a user searches – the engine searches the standard corpus (whatever context the user has chosen, which normally defaults to “everything”) and separately searches a specific index that includes all content that is a potential best bet.
The top 5 (a number that’s configurable) results that match the user’s search from the best bets index are displayed above the regular results and are designated “best bets”.
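Conceptually, the flow looks something like the sketch below – a toy illustration with in-memory stand-ins for the two indexes and a trivial relevance function, not our engine’s actual API:

```python
# Toy stand-ins for the real indexes: a small curated best bets index
# and a huge general corpus (documents invented for illustration).
INDEXES = {
    "best_bets": [  # roughly 800 curated items in our case
        {"title": "Employee Benefits Portal",
         "keywords": "benefits 401k health insurance"},
    ],
    "everything": [  # stands in for the ~half-million-document corpus
        {"title": "2006 benefits enrollment FAQ",
         "keywords": "benefits faq archive"},
    ],
}

BEST_BETS_LIMIT = 5  # the configurable display cap

def search(index, query):
    """Trivial relevance stand-in: rank by query-term overlap."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set((d["title"] + " " + d["keywords"]).lower().split())), d)
         for d in INDEXES[index]),
        key=lambda pair: -pair[0],
    )
    return [doc for score, doc in scored if score > 0]

def search_with_best_bets(query, user_context="everything"):
    best_bets = search("best_bets", query)[:BEST_BETS_LIMIT]  # secondary search
    organic = search(user_context, query)                     # primary search
    # Best bets render above the organic results, visually set apart.
    return {"best_bets": best_bets, "results": organic}

print(search_with_best_bets("benefits"))
```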
How do items get into the best bets index, then? Similar to what Tim W describes, on our intranet, we have an “A-Z index” – in our case, it’s a web page that provides a list of all of the resources that have been identified as “important” at some point in the past by a user. (The A-Z index does provide category pages that provide subsets of links, but the main A-Z index includes all items so the sub-pages are not really relevant here.)
So the simple answer to, “How do items get into the best bets index?” is, “They are added to the A-Z index!” The longer answer is that any user can request that an item be added to the A-Z index, and there is then a simple review process. We have defined specific criteria for entries added to the A-Z, several of which are aimed at ensuring quality search results for the new item, so when a request is submitted, it is reviewed against these criteria and added only if it meets all of them. Typically, findability is not something the submitter has considered, so there is usually a cycle with the submitter to improve the findability of the item being added (normally by improving its title and adding keywords and a good description).
Once an item is added to the A-Z index, it is a potential best bet. The search engine indexes the items in the A-Z through a web crawler that is configured to start with the A-Z index page and go just one link away from it (i.e., it only indexes items linked directly from the A-Z index).
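The crawl restriction is simple but worth spelling out. Here is a rough sketch of the depth-1 rule – collect the A-Z page’s direct link targets and index only those, never following links any further. The parsing and the `fetch` placeholder are illustrative; our actual crawler is part of the search engine:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Gather the href targets linked directly from one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(self.base_url, href))

def crawl_one_level(az_url, fetch):
    """Index candidates are the A-Z page's direct targets only;
    we deliberately stop here rather than recursing deeper."""
    collector = LinkCollector(az_url)
    collector.feed(fetch(az_url))
    return collector.links

# Demo with a canned page standing in for the live A-Z index:
pages = {"http://intranet/a-z": '<a href="/apps/payroll">Payroll</a>'}
print(crawl_one_level("http://intranet/a-z", lambda url: pages[url]))
# ['http://intranet/apps/payroll']
```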
In this process, there is no way to directly map specific searches (keywords) to specific results showing up in best bets. A best bet shows up in the results for a given search based on normally calculated relevance. However, the best bet population numbers only about 800 items instead of the roughly half million items that might show up in the regular results – as long as the targets in the A-Z index have good titles and are tagged with proper keywords and descriptions, they will normally show up in the best bets results for those words.
Some advantages of this approach:
- This approach works with our search engine and takes advantage of a long-standing “solution” our users are used to (the A-Z index has long been part of our intranet and many users turn to it whenever they need to find anything, so its importance is well-ingrained in the company).
- Given that the items in the A-Z index have been identified at some point in the past as “important”, we can arguably say that everything that should possibly be a best bet is included.
- We have a point in a process to enforce some findability requirements (when a new item is added).
- The items included can be any web resource, regardless of where it is (no need for it to be on our web site or in our CM system).
- This approach provides a somewhat automated way to keep the A-Z index cleaned up – the search engine identifies broken links as it indexes content, and by monitoring those for the best bets index, we know when content included in the A-Z has been removed.
- Because this approach depends on the “organic” results from the engine (just on a specially-selected subset of content), we do not have to directly manage keyword-to-result mapping – we delegate that to the content owner (by way of assigning appropriate keywords in the content).
Some disadvantages of this approach
- The tool we use to manage the A-Z index content is a database but, it is not integrated with our content management system. Most specifically, it does not take advantage of automated expiration (or notification about expiration).
- As a follow-on from the above point, there is no systematically enforced review cycle on individual items to ensure they are still relevant.
- Because this approach depends on the organic results from the engine, we cannot directly map keywords to specific results. (Both a good and a bad thing, I guess!)
- Because the index is generated using a web crawl (and not by indexing a database directly, for example), some targets (especially web applications) still end up not showing up particularly well: it might not be possible to have the home page of the application modified to include better keywords or descriptions, or (in the face of our single sign-on solution) a complex set of redirects sometimes results in the crawler not indexing the “right” target.