Lee Romero

On Content, Collaboration and Findability

Archive for the ‘Search’ Category

Standard Measures for Enterprise Search – A proposal for a universal KPI

Sunday, February 28th, 2021

Having introduced some basic, standard definitions in my previous post, in this one I am going to propose some standard measures derived from those that enable comparisons across solutions. These are also extremely useful for individual solutions, where you, as an enterprise search manager, might want tools at hand to proactively improve your users’ experience.

A quick recap of what I defined before:

  • Search: A single action a user takes that retrieves a set of results. Initiating a search, applying a sort to results, paginating, and applying filters would all typically increment this metric.
  • Click: A user clicking on a result presented to them.
  • Search Session: A sequence of actions (clicks or searches) that are taken in order without changing the search term (more generally, the criteria of the search).
  • First Click: The first click within a search session.

Lost Clicks

The first derived measure is one I call “lost clicks”. This measures the raw number of search sessions that resulted in no click:

    \[\mbox{lost clicks} = \mbox{search sessions} - \mbox{first clicks}\]

This is a useful measure that tells you how many times, in total, users initiated a session but found nothing of interest to click on.

You can also think of this as an indicator that measures the number of total failed search sessions.

One more point I’ll make on this is that, because it is a raw number (not a ratio or percentage), it is not useful as a key performance indicator (KPI).

Abandonment rate

Now, finally, to my proposal for a standard measure of the quality of a search solution – a measure that, I think, can be usefully applied to all enterprise search solutions, can be used to drive improvement within a solution, and can be used to compare across such solutions.

That measure is “abandonment rate”, which I define as the percent of sessions that are ‘failed sessions’:

    \[\mbox{abandonment rate} = {\mbox{lost clicks} \over \mbox{search sessions}}\]

which, after a bit of simplifying, I normally write as:

    \[\mbox{abandonment rate} = 1 - ({\mbox{first clicks} \over \mbox{search sessions}})\]
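For concreteness, here is a minimal sketch of these two calculations in Python (the counts are invented for illustration; this is not my actual reporting implementation):

```python
def lost_clicks(search_sessions: int, first_clicks: int) -> int:
    """Raw number of search sessions that ended with no click at all."""
    return search_sessions - first_clicks

def abandonment_rate(search_sessions: int, first_clicks: int) -> float:
    """Fraction of search sessions that were 'failed' (no click)."""
    if search_sessions == 0:
        return 0.0
    return 1.0 - (first_clicks / search_sessions)

# Hypothetical monthly totals, for illustration only
sessions, firsts = 50_000, 31_000
print(lost_clicks(sessions, firsts))                 # 19000
print(f"{abandonment_rate(sessions, firsts):.1%}")   # 38.0%
```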

This measure has some important advantages over a simpler click-rate model (e.g., [success rate] = [clicks] / [searches]). For one thing, it avoids some simple problems that can be caused by a few anomalous users; for another, it avoids the ‘trap’ of assuming a click is a success.

Anomalous usage patterns

There are two anomalous patterns I see every once in a while:

  1. A single dedicated user (or a small number of such users) might page through dozens or hundreds of pages of results (I actually have seen this before!) – generating a LOT of search actions – and yet click on nothing or just a result or two.
    • If every other user found something interesting to click on and did so on the first page of results, the click rate is still artificially lowered by these “extra” searches.
  2. Conversely, users who are in a ‘research mode’ of usage (not a known item search) will click on a lot of results (I have also seen instances where a single user clicks on 100s of results, all in the same search session).
    • Even if no other user found anything interesting to click on, the click rate is still artificially raised by these “extra” clicks.

By counting only the first click and using search sessions as the denominator, these scenarios don’t come into play (note that because I am recommending still capturing the simpler ‘search’ and ‘click’ metrics, you can still do some interesting analyses with them!).

Bad Success and Good Abandonment

The second advantage I mentioned above is more of a philosophical one – the success rate measure, as defined, builds in the assumption that you are measuring user success, and that is a strong claim.

By focusing on abandonment, I find it a more honest view – your metrics don’t build in an assumption that a click is likely a success but, instead, that a failure to find something of interest to click on is more clearly an indication of likely failure.

What do I mean?

When I consider the ideas of “success” and “failure” in a search solution, I always have to remind myself of the good and bad sides of both – what do I mean by that?

  • Good success – A click on a result that was actually useful and provides what the user needs to do their job. This, ultimately, is what you want to get to – however, because there is no way for a search solution to know (at scale) whether any given result is “good” or “useful”, this is effectively impossible to measure.
  • Bad abandonment – The flip side: a search session in which the user finds nothing useful at all. This is the clearest definition of failure.

However, there are other possibilities to consider!

  • Bad success – This is when a user finds something that appears to be useful or what they need and they click on it, but it turns out to be something entirely different and not useful at all.
    • A classic example of bad success I have seen involves my firm’s branding library (named ‘Brand Space’). For whatever reason, many intranet managers like to create image libraries in their sites and name them ‘Brand Space’ (I think because they consider the image library their own instance of ‘Brand Space’). They then leave that image library exposed in search (we train them not to do so, but sometimes they don’t listen). If an end user initiates a search session looking for Brand Space, finds the image library in the results, and clicks on it, they are likely disappointed (I imagine such a user thinking, “What is this useless web page?”).
    • A different way to think of this is from the perspective of someone responsible for a particular type of content (say, benefits information for your company) – they may think they know what users *should* access when they search in particular ways, and a click on anything else is an instance of ‘bad success’. I understand this but, as the manager of the search solution, I am not in a position to define what users *should* click on – I cannot read their minds to understand intent.
  • Good abandonment – This is when a user finds the information they need right on the search results screen. Technically, such a session would count as ‘abandoned’ even though the user got what they needed.
    • This is exactly the scenario I mentioned in the definition of a ‘click’ in my last post where I would like to define how to measure this but have never been able to figure out a way to do so.

Getting back to my description of how measuring and tracking abandonment rate is better than a success rate – my assumption has been that good abandonment and bad success will always exist for your users. However, good abandonment is likely a much smaller percentage of sessions than bad success and, more importantly, it is much easier to “improve” your search by increasing bad success than by decreasing good abandonment.

Conclusion

There is my proposal for a measure to be used to assess search solutions for the quality of the user experience – abandonment rate.

It is not perfect and it is still just an indicator but I have found it incredibly useful to actually drive action for improvement. I’ll share more on this in my next post.

Standard Measures for enterprise search

Sunday, February 7th, 2021

In my last few posts, I have commented on the lack of standard measures to use for enterprise search (leading, among other things, to challenges in comparing various solutions to one another) and suggested some criteria for what standard measures to use.

In this post, I am going to propose a few basic measures that I think meet the criteria and that any enterprise search solution should be able to provide. The labels are not critical for these, but the meaning of them is, I think, very important.

Search

First, and most important, is a search. A search is a single action in which a user retrieves a set of results from the search engine. Different user experiences may “count” these events differently.

When a user starts the process (in my experience, typically with a search term typed into a box on a web page somewhere), that is a single search.

If that user navigates to a second page of results, that is another search. Navigating to a third page counts as yet another search, etc.

Applying a filter (if the user interface supports such) counts as yet another search.

Re-sorting results counts as yet another search.

In a browser-based experience, even a user simply doing a page refresh counts as another search (though I will also say that in this case, if the interface uses some kind of caching of results, this might not actually truly retrieve a new set of results from the search engine, so this one could be a bit “squishy”).

In a user experience with an infinite scroll, the act of a user scrolling to the bottom of one ‘chunk’ of results and thus triggering the interface to retrieve the next ‘chunk’ also counts as yet another search (this is effectively equivalent to paging through results except it doesn’t require an explicit action by the user).
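To make the counting rules above concrete, here is a rough sketch of how a logging layer might classify user-interface events; the event names are hypothetical and will differ in any real implementation:

```python
# Hypothetical UI event names – any real search interface will use its own.
SEARCH_EVENTS = {
    "new_query",        # a term submitted in the search box
    "next_page",        # paging to another page of results
    "apply_filter",     # refining with a filter/facet
    "sort_results",     # re-sorting the result list
    "page_refresh",     # browser refresh (can be "squishy" if results are cached)
    "infinite_scroll",  # the next chunk of results retrieved by scrolling
}

def count_searches(events):
    """Count how many logged UI events increment the 'search' measure."""
    return sum(1 for event in events if event in SEARCH_EVENTS)

print(count_searches(["new_query", "next_page", "result_click", "apply_filter"]))  # 3
```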

Click

The second basic measure is the click. A click is counted any time a user clicks on any result in the experience.

Depending on the implementation, differentiating the type of thing a user clicks on (an organic result or a ‘best bet’, etc.) can be useful – but I don’t consider that differentiation critical at the high level.

One thing to note here that I know is a gap – there are some scenarios where a user does not need to click on anything in the search results. The user might meet their information need simply by seeing the search results.

This could be because they just wanted to know if anything was returned at all. It could be because the information they need is visible right on the results screen (the classic example of this would be a search experience that shows people profiles where the display includes some pertinent piece of information like a phone number). In a sophisticated search experience that offers “answers” to questions, the answer might be displayed right on the results screen. I have been puzzled about how to measure this scenario for a while. Other than some mechanism on the interface that allows a user to take some action to acknowledge that they achieved their need (“Was this answer useful?”), I’m not sure what the answer is. I am very interested if others have solved this puzzle.

Search Session

A third important metric is the search session. This is closely related to the search metric, but I do think that it is important to differentiate.

A search session is a series of actions a user takes that, together, constitute an attempt to satisfy a specific information need.

This definition, though, is really not deterministically measurable. There is no meaningful way (unless you can read the user’s mind) to know when they are “done”.

One possibility is to equate a search session to a visit – I find a good definition for this on Wikipedia in the Web analytics article:

A visit or session is defined as a series of page requests or, in the case of tags, image requests from the same uniquely identified client.

In the current solution I am working with, however, we have defined a search session to be a series of actions taken in sequence where the user does not change their search term. The user might navigate through a series of pages of results, reorder them, apply multiple filters, click on one or more results, etc., but, none of these count as another search session.

The rationale for this is that, based on anecdotal discussions with users, users tend to think of an effort using a single search term as a notional “search”. If the user fails with that term, they try another, but that is a different “search”.

Obviously, this is not truly accurate in all situations – if we could meaningfully detect (at scale, meaning across all of our activity) when changing the search term is really a restatement of the same information need vs. a completely different information need, we could do something more accurate, but we are not there, yet.

First Click

The last basic measure I propose is the first click.

A first click is counted the first time a user clicks on a result within a search session. If a user clicks on multiple things within a search session, they are all still counted as clicks, but not as first clicks.

If the user starts a new search session (which, in the current solution I work with, means they have changed their search term), then, if they click on some result, that is another first click.
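Putting the four measures together, here is a minimal sessionization sketch under the definition used here (a new search session starts whenever the user or their search term changes); the log layout and field names are hypothetical:

```python
from collections import Counter

# Each log row: (user_id, timestamp, term, action), where action is "search" or "click".
# Rows are assumed to be sorted by user and then by time.
def basic_measures(rows):
    counts = Counter()
    current_session = None      # (user_id, term) of the session in progress
    clicked_in_session = False
    for user, ts, term, action in rows:
        if (user, term) != current_session:   # user or term changed: new session
            current_session = (user, term)
            clicked_in_session = False
            counts["search_sessions"] += 1
        if action == "search":
            counts["searches"] += 1
        elif action == "click":
            counts["clicks"] += 1
            if not clicked_in_session:
                counts["first_clicks"] += 1
                clicked_in_session = True
    return counts

log = [
    ("u1", 1, "benefits", "search"),
    ("u1", 2, "benefits", "click"),
    ("u1", 3, "benefits", "click"),
    ("u1", 4, "brand space", "search"),  # term changed: a new session with no click
]
print(basic_measures(log))  # searches=2, clicks=2, search_sessions=2, first_clicks=1
```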

Conclusion and what’s next

That is the set of basic measures that I think could be useful to establish as a standard.

Next steps – I hope to engage with others working in this domain to refine these and tighten them up (especially a search session). I hope to make some contacts through the Enterprise Search Engine Professionals group on LinkedIn and perhaps other communities for this. If you are interested, please let me know!

In my next post, I will share definitions of some important metrics that I use, derived from the basic measures above, and provide some examples of each.

Criteria for Standard Measures of Enterprise Search

Sunday, January 31st, 2021

In my last post, I wondered about the lack of meaningful standards for evaluating enterprise search implementations.

I did get some excellent comments on the post and also some very useful commentary from a LinkedIn discussion about this topic – I would recommend you read through that discussion. Udo Kruschwitz and Charlie Hull both provided links to some very good resources.

In this post, I thought I would describe what I think to be some important attributes of any standard measures that could be adopted. Here I will be addressing the specific actions to measure – in a subsequent post I will write about how these can be used to actually evaluate a solution.

Measurable

To state the obvious, we need to have metrics that are measurable and objective. Ideally, metrics that directly reflect user interaction with the search solution.

Measures that depend on subjective evaluation, or that gather feedback from users through means other than their direct use of the tool, can be very useful but introduce problems of interpretation differences and sustainability.

For example, a feedback function built into the interface (“Are these results useful?” or even a more specific, “Is this specific result useful for you here?”) can provide excellent insight but is used so little that the data is not useful overall.

Surveys of users inevitably fall into the problem of faulty or biased memory – in my experience, users have such a negative perception of enterprise search that individual negative experiences will overwhelm positive experiences with the search when you ask them to recall and assess their experience a day or week after their usage.

Common / Useful to compare implementations

Another important consideration is that a standard for evaluating enterprise search should include aspects of search that are common across the broad variety of solutions you might see.

In addition, they should lend themselves to comparing different solutions in a useful way.

Some implementations might be web-based (in my experience, this is by far the most common way to make enterprise search available). Some might be based on a desktop application or mobile app. Some implementations might depend only on users entering search terms to start a search session; some might support only searching based on search terms (no filtering or refining at all). Some implementations might provide “search as you type” (showing results immediately based on part of what the user has entered). There are many variations to consider here.

I would want to have measures that allow me to compare one solution to another – “Is this one better than that one?” “Are there specific user needs where this solution is better than that one?”

Likely to be insightful

Another obvious aspect is that we want to include measures that are likely to be useful.

Useful in what way, though?

My first thought is that it must measure whether the solution is useful for the users – does it meet the users’ needs? (With search, I would simplify this to “does it provide the information the user needs efficiently?” but there are likely a lot of other ways to define “useful” even within a search experience.)

Operationalizable

I would want all measures I use to be consistently available (no need to “take a measurement” at a given time) and to not depend on someone actively having to capture them.

As mentioned above, measures that directly reflect what happens in the user experience are what I would be looking for. In this case, I would add in that the measures should be taken directly from the user experience – data captured into a search log file somewhere or captured via some other means.

This provides a data set that can be reviewed and used at basically any time and which (other than maintaining the system capturing the measurements) doesn’t require any effort to capture and maintain – the users use the search solution and their activities are captured.
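As an illustration only (not a proposed standard), a search log record with just a handful of fields is enough to derive every measure discussed in these posts; a minimal, hypothetical schema might look like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SearchLogEvent:
    """One captured user action; the fields are illustrative, not a standard."""
    user_id: str          # anonymized or pseudonymous identifier
    timestamp: datetime   # when the action happened
    action: str           # e.g. "search" or "click"
    term: str             # the search term in effect for the action
    result_url: str = ""  # populated only for clicks
```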

Usable overall and when broken down by dimensions

Finally, I would expect that measures would support analysis at broad scales and also support the ability to drill into details using the same measures.

Examples of “broad scale” applicability: How good is this search solution overall? How good is my search solution in comparison to the overall industry average? How good are search solutions supporting the needs of users in the XYZ industry? How good are search solutions at supporting “known item” searching in comparison with “exploratory searching”?

Examples of drilling in: Within my user base, how successful are my users by department? How useful is the search solution in different topic areas of content? How good are results for individual, specific search criteria?

Others?

I’m sure I am missing a lot of potential criteria here – What would you add? Remove? Edit?

Evaluating enterprise search – standards?

Monday, January 18th, 2021

Over the past several years of working very closely with the enterprise search solution at Deloitte, I have tried to look “outside” as best I can in order to understand what others in the industry are doing to evaluate their solutions and where ours ‘fits’.

I’ve attended a number of conferences and webcasts and read papers (many of which, I’ll admit, were highlighted by Martin White on Twitter – I can’t recommend following Martin enough!)

One thing I have never found is any common way to evaluate or talk about enterprise search solutions. I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions.

If we could compare in a systematic way, we could start to understand how to do things like:

  • Identify common use cases that are visible in user behavior (via metrics)
  • Compare how ‘good’ different solutions are at meeting the core need (an employee needs to access some resource to do their job)
  • Compare different industries’ approaches to information seeking (again, as identified by user behavior via metrics) – for example, do users search differently in industrial companies vs. professional services companies vs. research companies?

Why do we not have a common set of definitions?

One possibility is certainly that I have still not read up enough on the topic – perhaps there is a common set of definitions – if so, feel free to share.

Another possibility is that this is a result of dependency on the metrics that are implemented within the search solutions enterprises are using. I have found that these are useful but they don’t come with a lot of detail or clarity of definition. And, more specifically, they don’t seem common across products. That said, I have relatively limited exposure to multiple search solutions – Again, I would be interested in insights from those who have (perhaps any consultants working in this space?)

And, one more possible driver behind a lack of commonality is the proprietary nature of most implementations. I try to speak externally as frequently as I can, but I am always hesitant to be too detailed about the implementation (and have been coached not to be).

I do plan to put up a small series here, though, with some of the more elemental components of our metrics implementation for comparison with anyone who cares to share.

More soon!

Language change over time in your search log

Monday, October 10th, 2011

This is a second post in a series I have planned about the language found throughout your search log – all the way into the “long tail” and how it might or might not be feasible to understand it all.

My previous post, “80-20: The lie in your search log?“, highlighted how the slope of the “short head” of your search terms may not be as steep as anecdotes would say.  That is, there can be a lot less commonality within a particular time range among even the most common terms in your search log than you might expect.

After writing that post, I began to wonder about the overall re-use of terms over periods of time.

In other words:

Even while commonality of re-using terms within a month is relatively low, how much commonality do we see in our users’ language (i.e., search terms) from month to month?

To answer this, I needed to take the entire set of terms for a month and compare them with the entire set from the next month and determine the overlap and then compare the second month’s set of terms to a third month’s, and so on.  Logically not a hard problem but quite a challenge in practice due to the volume of data I was manipulating (large only in the face of the tools I have to manipulate it).

So I pulled together every single term used over a period of about 18 months and broke them into the set used for each of those months and performed the comparison.
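For anyone who wants to run the same comparison against their own log, here is a rough sketch of the month-to-month calculation, assuming each month has been reduced to a mapping of search term to search count (the sample numbers are invented):

```python
def month_over_month_reuse(prev_month: dict, this_month: dict):
    """prev_month / this_month map a search term -> number of searches that month."""
    shared_terms = prev_month.keys() & this_month.keys()

    # Share of this month's *distinct* terms that were also used last month.
    term_overlap = len(shared_terms) / len(this_month)

    # Share of this month's *searches* that used a term from last month.
    total_searches = sum(this_month.values())
    reused_searches = sum(this_month[term] for term in shared_terms)
    search_overlap = reused_searches / total_searches

    return term_overlap, search_overlap

jan = {"benefits": 900, "brand space": 400, "timesheet": 250}
feb = {"benefits": 850, "w-2": 300, "brand space": 380, "holidays": 120}
print(month_over_month_reuse(jan, feb))  # (0.5, ~0.745)
```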

Before getting into the results, a few details to share for context about the search solution I’m writing about here:

  • The average number of searches performed each month was almost 123,000.
  • The average number of distinct terms during this period was just under 53,000.
  • This results in an average of about 2.3 searches for each distinct term.

My expectation was that comparing the entire set of terms from one month to the next would show a relatively high percentage of overlap.  What I found was not what I expected.

If you look at the unique terms and their overlap, the average overlap between months was a shockingly low 13.2%. In other words, over 86% of the terms in any given month were not used at all in the previous month.

Month to Month Re-Use of Search Terms

If you look at the total searches performed and the percent of searches performed with terms from the prior month, this goes up to an average of 36.2% – reflecting that the terms that are re-used in a subsequent month are among the most common terms overall.

Month to Month Re-Use of Search Terms

As you can see, the amount of commonality from month-to-month among the terms used is very low.

What can you draw from this observation?

In a brief discussion about this with noted search analytics expert Lou Rosenfeld, his reaction was that this represented a significant amount of change in the information needs of the users of the system – significant enough to be surprising.

Another conclusion I draw from this is that it provides another reason why it is very hard to meaningfully improve search across the language of your users.  Based on my previous post on the flatness of the curve of term use within a month, we know that we need to look at a pretty significant percentage of distinct terms each month to account for a decent percentage of all searches – 12% of distinct terms to account for only 50% of searches.  In our search solution, that 12% doesn’t seem that large until you realize it still represents about 6,000 distinct terms.

Coupling that with the observation from the analysis here means that even if you review those terms for a given month, you will likely need to review a significant percentage of brand new terms the next month, and so on.  Not an easy task.

Having established just how challenging this can be, my next few posts will provide some ideas for grappling with the challenges.

In the meantime, if you have any insight on similar statistics from your solution (or statistics about the shape of the search log curve I previously wrote about), please feel free to share here, on the SearchCoP on Yahoo! Groups or on the Enterprise Search Engine Professionals group on LinkedIn – I would very much like to compare numbers to see if we can identify meaningful generalizations from different solutions.

The Findability Gap by Lou Rosenfeld

Friday, September 23rd, 2011

Lou Rosenfeld has just published a great presentation I would highly recommend for anyone working in the search space:  The Findability Gap.

It provides a great picture of the overall landscape of the problem (it’s not just search, after all!).

I especially liked slide 4 – a very telling illustration of the challenge we face in intelligently making information available to our users.

Re: Slide 24 – As I’ve written about before, I would say that the 80/20 rule is more than just “not quite accurate”.  But that’s mincing words.

Overall, a highly recommended read.

KMers.org Chat on the Importance of Search in your KM Solution

Tuesday, June 14th, 2011

Last week, I moderated a discussion for the weekly KMers.org Twitter chat about “The Importance of Search in your KM Solution”.

My intent was to try to get an understanding of how important search is relative to other components of a KM solution (connecting people, collecting and managing content, etc.).

It was a good discussion with about a dozen or so people taking part (that I could tell).

You can read through the transcript of the session here.   Let me know what you think on the topic!

During the discussion, a great question came up about measuring the success of your search solution (thanks to Ed Dale) which I thought deserved its own discussion, so I have submitted a suggestion for a new topic for an upcoming KMers.org chat.

Please visit the suggestion here and vote for it!

80-20: The lie in your search log?

Saturday, November 13th, 2010

Recently, I have been trying to better understand the language in use by our users in our search solution, and in order to do that, I have been trying to determine what tools and techniques one might use. This is the first post in a planned series about this effort.

I have many goals in pursuing this.  The primary goal has been to be able to identify trends from the whole set of language in use by users (and not just the short head).  This goal supports the underlying business desire of identifying content gaps or (more generally) places where the variety of content available in certain categories does not match the variety expected by users (i.e., how do we know when we need to target the creation and publication of specific content?).

Many approaches to this do focus on the short head – typically the top N terms, where N might be 50 or 100 or even 500 (some number that’s manageable).  I am interested in identifying ways to understand the language through the whole long tail as well.

As I have dug into this, I realized an important aspect of this problem is to understand how much commonality there is to the language in use by users and also how much the language in use by users changes over time – and this question leads directly to the topic at hand here.

Chart 1: Search Term Usage

There is an anecdote I have heard many times about the short head of your search log that “80 percent of your searches are accounted for by the top 20% most commonly-used terms“.  I now question this and wonder what others have seen.

I have worked closely with several different search solutions in my career and the three I have worked most closely with (and have most detailed insight on) do not come even close to the above assertion.  Chart 1 shows the usage curve for one of these.  The X axis is the percent of distinct terms (ordered by use) and the Y axis shows the percent of all searches accounted for by all terms up to X.

From this chart, you can see that it takes approximately 55% of distinct terms to account for 80% of all searches – that is a lot of terms!
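If you want to produce the same kind of curve (or just the single summary number) from your own log, a minimal sketch, assuming a mapping of term to search count for one month, might look like this:

```python
def share_of_terms_needed(term_counts: dict, target_share: float = 0.8) -> float:
    """Fraction of distinct terms (most-used first) needed to account for
    `target_share` of all searches in the month."""
    counts = sorted(term_counts.values(), reverse=True)
    total_searches = sum(counts)
    running = 0
    for i, count in enumerate(counts, start=1):
        running += count
        if running / total_searches >= target_share:
            return i / len(counts)
    return 1.0

# Tiny invented example; a real log would have tens of thousands of distinct terms.
example = {"benefits": 50, "timesheet": 20, "brand space": 10, "w-2": 10, "holidays": 10}
print(share_of_terms_needed(example, 0.8))  # 0.6 -> 3 of 5 terms cover 80% of searches
```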

This curve shows the usage for one month – I wondered about how similar this would be for other months and found (for this particular search solution) that the curves for every month were basically the exact same!

Wondering if this was an anomaly, I looked at a second search solution I have close access to, to see if it might show signs of the “80/20” rule.  Chart 2 adds the curve for this second solution (it’s the blue curve – the higher of the two).

Chart 2

In this case, you will find that the curve is “higher” – it reaches 80% of searches at about 37% of distinct terms.  However, it is still pretty far from the “80/20” rule!

After looking at this data in more detail, I have realized why I have always been troubled at the idea of paying close attention to only the so-called “short head” – doing so leaves out an incredible amount of data!

In trying to understand the details of why, even though neither is close to adhering to the “80/20” rule, the usage curves are so different, I realize that there are some important distinctions between the two search solutions:

  1. The first solution is from a knowledge repository – a place where users primarily go in order to do research; the second is for a firm intranet – much more focused on news and HR type of information.
  2. The first solution provides “search as you type” functionality (showing a drop-down of actual search results as the user types), while the second provides auto-complete (showing a drop-down of possible terms to use).  The auto-complete may be encouraging users to adopt more commonality.

I’m not sure how (or really if) these factor into the shape of these curves.

In understanding this a bit better, I hypothesize two things:  1) the shape of this curve is stable over time for any given search solution, and 2) the shape of this curve tells you something important about how you can manage your search solution.  I am planning to dig more to answer hypothesis #1.

Questions for you:

  • Have you looked at term usage in your search solution?
  • Can you share your own usage charts like the above for your search solution and describe some important aspects of your solution?  Insight on more solutions might help answer my hypothesis #2.
  • Any ideas on what the shape of the curve might tell you?

I will be writing more on these search term usage curves in my next post as I dig more into the time-stability of these curves.

Best Bet Governance

Monday, February 22nd, 2010

My first post back after too-long a period of time off.  I wanted to jump back in and share some concrete thoughts on best bet governance.

I’ve previously written about best bets and how I thought, while not perfect, they were an important part of a search solution.  In that post, I also described the process we had adopted for managing best bets, which was a relatively indirect means supported by the search engine we used for the search solution.

Since moving employers, I now have responsibility for a local search solution as well as input on an enterprise search solution where neither of the search engines supports a similar model.  Instead, both support the (more typical?) model where you identify particular search terms that you feel need to have a best bet and you then need to identify a specific target (perhaps multiple targets) for those search terms.

This model offers some advantages such as specificity in the results and the ability to actively determine what search terms have a best bet that will show.

This model also offers some disadvantages, the primary one (in my mind) being that they must be managed – you must have a means to identify which terms should have best bets and which targets those terms should show as a best bet.  This implies some kind of manual management, which, in resource-constrained environments, can be a challenge.  As noted in my previous article, others have provided insight about how they have implemented and how they manage best bets.

Now having responsibility for a search solution requiring manual management of best bets, we’ve faced the same questions of governance and management and I thought I would share the governance model we’ve adopted.  I did review many of the previous writings on this to help shape these, so thanks to those who have written before on the topic!

Our governance model is largely based on trying to provide a framework for consistency and usability of our best bets.  We need some way to ensure we do not spend inordinate time on managing requests while also ensuring that we can identify new, valuable search terms and targets for best bets.

Without further ado, here is an overview of the governance we are using:

  • We will accept best bet requests from all users, though most requests come from site publishers on our portal.  Most of our best bets have web sites as targets, though about 30% have individual pieces of published content (documents) as targets.  As managers of the search solution, my team will also identify best bets when appropriate.
  • When we receive a request for a new best bet, we review the request against the following criteria (a minimal sketch of the numeric limit checks appears after the full list below):
    • No more than five targets can be identified for any one search term, though we prefer to keep it to one or two targets.
      • Any request for a best bet that would result in more than 2 targets for the search term forces a review of usage of the targets (usage is measured by our web analytics solution for both sites and published content).
      • The overall usage of the targets will identify if one or more targets should be dropped.
    • For a given target, no more than 20 individual search terms can be identified.  Typically, we try to keep this to fewer than 5 when possible.
    • If a target is identified as a best bet target that has not had a best bet search term associated with it previously, we confirm that it is either a highly used piece of content or that it is a significant new piece that is highly known or publicized (or may soon be by way of some type of marketing).
    • We also review the search terms identified for the best bet.  We will not use search terms with little to no usage during the previous 3 months.
    • We will not set up a best bet search term that matches the title of the target.  The relevancy algorithm for our search engine heavily weights titles, so this is not necessary.
    • We prefer that the best bet search terms have a logical connection to the title or summary of the target.  This ensures that a user will understand the connection between their search terms and a resulting best bet.  This is not a hard requirement, but a preference.  We do allow for spelling variants, synonyms, pluralized forms, etc.
    • We prefer terms that use words from our global taxonomy.
  • Our governance (management process, really) for managing best bets includes:
    • Our search analyst reviews the usage of each best bet term.
      • If usage over an extended time is too low to warrant the best bet term, it is removed.
    • We also plan to use path analysis (pending some enhancements needed as this is written) to determine if, for specific terms, the best bet selections are used preferentially.  If that is found to not be the case, our intent is that the best bet target is removed.
    • We have integrated the best bet management into both our site life cycle process and our content life cycle process:
      • With the first, when we are retiring a site or changing the URL of a site we know to remove or update the best bet target
      • With the second, as content is retired, the best bets are removed
      • In each of these cases, we also evaluate the terms to see if there could be other good targets to use.

The one interesting experience we’ve had so far with this governance model is that we get a lot of push back from site publishers who want to provide a lengthy laundry list of terms for their site, even when 75% of that list is never used (at least not within the twelve-month periods we sometimes check).  They seem convinced that there is value in setting up best bets for terms even when you can show that there is none.  We are currently making changes in the way we manage best bets and also in how we can use these desired terms to enhance the organic results directly.  More on that later.

There you have our current governance model.  Not too fancy or complicated and still not ideal, but it’s working for us and we recognize that it’s a work in progress.

Now that I have the “monkey off my back” in terms of getting a new post published, I plan to re-start regular writing.  Check back soon for more on search, content management and taxonomy!

Enterprise Search Best Bets – a good enough practice?

Tuesday, February 3rd, 2009

Last summer, I read the article by Kas Thomas from CMS Watch titled “Best Bets – a Worst Practice” with some interest. I found his thesis to be provocative and posted a note to the SearchCoP community asking for others’ insights on the use of Best Bets. I received a number of responses taking some issue with Kas’ concept of what best bets are, and also some responses describing different means to manage best bets (hopefully without requiring the “serious amounts of human intervention” described by Kas).

In this post, I’ll provide a summary of sorts, describe some of the approaches others shared for managing best bets, and describe the way we have managed them.

Kas’ thesis is that best bets are not a good practice because they are largely a hack layered on top of a search engine and require significant manual intervention. Further, if your search engine isn’t already providing access to appropriate “best bets” for queries, you should get yourself a new search engine.

Are Best Bets Worth the Investment?

Some of the most interesting comments from the thread of discussion on the SearchCoP include the following (I’ll try to provide as cohesive a picture of sentiment as I can but will only quote parts of the discussion – if I have portrayed intent incorrectly, that’s my fault and not the original author’s):

From Tim W:

“Search analytics are not used to determine BB … BB are links commonly used, enterprise resources that the search engine may not always rank highly because for a number of reasons. For example, lack of metadata, lack of links to the resource and content that does not reflect how people might look for the document. Perhaps it is an application and not a document at all.”

From Walter U:

“…manual Best Bets are expensive and error-prone. I consider them a last resort.”

From Jon T:

“Best Bets are not just about pushing certain results to the top. It is also about providing confidence in the results to users.

If you separate out Best Bets from the automatic results, it will show a user that these have been manually singled out as great content – a sign that some quality review has been applied.”

From Avi R:

“Best Bets can be hard to manage, because they require resources.

If no one keeps checking on them, they become stale, full of old content and bad links.

Best Bets are also incredibly useful.

They’re good for linking to content that can’t be indexed, and may even be on another site entirely. They’re good for dealing with … all the sorts of things that are obvious to humans but don’t fit the search paradigm.”

So, lots of differing opinions on best bets and their utility, I guess.

A few more pieces of background for you to consider: Walter U has posted on his blog (Most Casual Observer) a great piece titled “Good to Great Search” that discusses best bets (among other things); and, Dennis Deacon posted an article titled, “Enterprise Search Engine Best Bets – Pros & Cons” (which was also referenced in Kas Thomas’ post). Good reading on both – go take a look at them!

My own opinion – I believe that best bets are an important piece of search and agree with Jon T’s comment above that their presence (and, hopefully, quality!) gives users some confidence that there is some human intelligence going into the presentation of the search results as a whole. I also have to agree with Kas’ argument that search engines should be able to consistently place the “right” item at the top of results, but I do not believe any search engine is really able to today – there are still many issues to deal with (see details in my posts on coverage, identity, and relevance for my own insights on some of the major issues).

That being said, I also agree that you need to manage best bets in a way that does not cost your organization more than their value – or to manage them in a way that the value is realized in multiple ways.

Contrary to what Tim W says, and as I have written about in my posts on search analytics (especially in the use of search results usage), I do believe you can use search analytics to inform your best bets but they do not provide a complete solution by any means.

Managing Best Bets

From here on out, I’ll describe some of the ways best bets can be managed – the first few will be a summary of what people shared on the SearchCoP community and then I’ll provide some more detail on how we have managed them. The emphasis (bolding) is my own to highlight some of what I think are important points of differentiation.

From Tim W:

“We have a company Intranet index; kind of a phone book for web sites (A B C D…Z). It’s been around for a long time. If you want your web site listed in the company index, it must be registered in our “Content Tracker” application. Basically, the Content Tracker allows content owners to register their web site name, URL, add a description, metadata and an expiration date. This simple database table drives the Intranet index. The content owner must update their record once per year or it expires out of the index.

This database was never intended for Enterprise Search but it has proven to be a great source for Best Bets. We point our ODBC Database Fetch (Autonomy crawler) at the SQL database for the Content Tracker and we got instant, user-driven, high quality Best Bets.

Instead of managing 150+ Best Bets myself, we now have around 800 user-managed Best Bets. They expire out of the search engine if the content owner doesn’t update their record once per year. It has proven very effective for web content. In effect, we’ve turned over management of Best Bets to the collective wisdom of the employees.”

From Jim S:

“We have added an enterprise/business group best bet key word/phrase meta data.

All documents that are best bet are hosted through our WCM and have a keyword meta tag added to indicate they are a best bet. This list is limited and managed through a steering team and search administrator. We primarily only do best bets for popular searches. Employee can suggest a best bet – both the term and the associated link(s). It is collaborative/wiki like but still moderated and in the end approved or rejected by a team. There is probably less than 1 best bet suggestion a month.

If a document is removed or deleted the meta data tag also is removed and the best bet disappears automatically.

Our WCM also has a required review date for all content. The date is adjustable so that content will be deactivated at a specific date if the date is not extended. This is great for posting information that has a short life as well as requiring content owners to interact with the content at least every 30 Months (maximum) to verify that the content is still relevant to the audience. The Content is not removed from the system, rather it’s deactivated (unpublished) so it no longer accessible and the dynamic links and search index automatically remove the invalid references. The content owner can reactivate it by setting the review date into the future.

If an external link (not one in our WCM) is classified as a best bet then a WCM redirect page is created that stores the best bet meta tag. Of course it has a review/expiration so the link doesn’t go on forever and our link testing can flag if the link is no longer responding. If the document is in the DMS it would rarely be deleted. In normal cases it would be archived and a archive note would be placed to indicate the change. Thus no broken links.

Good content engineering on the front end will help automate the maintenance on the back end to keep the quality in search high.”

The first process is external to the content and doesn’t require modifying the content (assuming I’m understanding Tim’s description correctly). There are obvious pros and cons to this approach.

By contrast, the second process embeds the “best bet” attribution in the content (perhaps more accurately in the content management system around the content) and also embeds the content in a larger management process – again, some obvious pros and cons to the approach.

Managing Best Bets at Novell

Now for a description of our process – the process and tools in place in our solution are similar to the description provided by Tim W. I spoke about this topic at the Enterprise Search Summit West in November 2007, so you might be able to find the presentation for it there (though I could not just now in a few minutes of searching).

With the search engine we use, the results displayed in best bets are actually just a secondary search performed when a user performs any search – the engine searches the standard corpus (whatever context the user has chosen, which would normally default to “everything”) and separately searches a specific index that includes all content that is a potential best bet.

The top 5 (a number that’s configurable) results that match the user’s search from the best bets index are displayed above the regular results and are designated “best bets”.

How do items get into the best bets index, then? Similar to what Tim W describes, on our intranet, we have an “A-Z index” – in our case, it’s a web page that provides a list of all of the resources that have been identified as “important” at some point in the past by a user. (The A-Z index does provide category pages that provide subsets of links, but the main A-Z index includes all items so the sub-pages are not really relevant here.)

So the simple answer to, “How do items get into the best bets index?” is, “They are added to the A-Z index!” The longer answer is that users (any user) can request an item be added to the A-Z index and there is then a simple review process to get it into the A-Z index. We have defined some specific criteria for entries added to the A-Z, several of which are related to ensuring quality search results for the new item, so when a request is submitted, it is reviewed against these criteria and only added if it meets all of the criteria. Typically, findability is not something considered by the submitter, so there will be a cycle with the submitter to improve the findability of the item being added (normally, this would include improving the title of the item, adding keywords and a good description).

Once an item is added to the A-Z index, it is a potential best bet. The search engine indexes the items in the A-Z through a web crawler that is configured to start with the A-Z index page and goes just one link away from that (i.e., it only indexes items directly linked to from the A-Z index).

In this process, there is no way to directly map specific searches (keywords) to specific results showing up in best bets. The best bets will show up in the results for a given search based on normally calculated relevance for the search. However, the best bet population numbers only about 800 items instead of the roughly half million items that might show up in the regular results – as long as the targets in the A-Z index have good titles and are tagged with the proper keywords and description, they will normally show up in best bets results for those words.
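To illustrate the “secondary search” pattern described above, here is a rough sketch using a hypothetical search client; the function and index names are stand-ins, not the actual API of the engine we used:

```python
def search_with_best_bets(client, query: str, best_bet_limit: int = 5):
    """Run the user's query against the best-bets index and the full corpus,
    then present the best-bet hits above the organic results."""
    # 'client.query' is an assumed interface: query a named index, get hits back.
    best_bets = client.query(index="best_bets", q=query, limit=best_bet_limit)
    organic = client.query(index="everything", q=query, limit=50)

    # Drop organic hits that already appear as best bets, to avoid duplicates.
    best_bet_urls = {hit["url"] for hit in best_bets}
    organic = [hit for hit in organic if hit["url"] not in best_bet_urls]
    return {"best_bets": best_bets, "results": organic}
```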

Some advantages of this approach:

  • This approach works with our search engine and takes advantage of a long-standing “solution” our users are used to (the A-Z index has long been part of our intranet and many users turn to the A-Z index whenever they need to find anything, so its importance is well-ingrained in the company).
  • Given that the items in the A-Z index have been identified at some point in the past as “important”, we can arguably say that everything that should possibly be a best bet is included.
  • We have a point in a process to enforce some findability requirements (when a new item is added).
  • The items included can be any web resource, regardless of where it is (no need to be on our web site or in our CM system)
  • This approach provides a somewhat automated way to keep the A-Z index cleaned up – the search engine identifies broken links as it indexes content and, by monitoring those for the best bets index, we know when content included in the A-Z has been removed.
  • Because this approach depends on the “organic” results from the engine (just on a specially-selected subset of content), we do not have to directly manage keyword-to-result mapping – we delegate that to the content owner (by way of assigning appropriate keywords in the content).

Some disadvantages of this approach

  • The tool we use to manage the A-Z index content is a database, but it is not integrated with our content management system. Most specifically, it does not take advantage of automated expiration (or notification about expiration).
  • As a follow-on from the above point, there is no systematically enforced review cycle on individual items to ensure they are still relevant.
  • Because this approach depends on the organic results from the engine, we cannot directly map keywords to specific results. (Both a good and bad thing, I guess!)
  • Because the index is generated using a web crawl (and not by indexing a database directly, for example), some targets (especially web applications) still end up not showing up particularly well, because it might not be possible to have the home page of the application modified to include better keywords or descriptions, or (in the face of our single sign-on solution) a complex set of redirects sometimes results in the crawler not indexing the “right” target.