Lee Romero

On Content, Collaboration and Findability

Standard Measures for Enterprise Search – A proposal for a universal KPI

Sunday, February 28th, 2021

Having introduced some basic, standard definitions in my previous post, in this one I am going to propose some standard measures derived from them that enable comparisons across solutions. These are also extremely useful within an individual solution, where you, as an enterprise search manager, might want to have tools at hand to proactively improve your users’ experience.

A quick recap of what I defined before:

  • Search: A single action a user takes that retrieves a set of results. Initiating a search, paginating through results, applying a sort, and applying filters would all typically increment this metric.
  • Click: A user clicking on a result presented to them.
  • Search Session: A sequence of actions (clicks or searches) that are taken in order without changing the search term (more generally, the criteria of the search).
  • First Click: The first click within a search session.

Lost Clicks

The first derived measure is one I call “lost clicks”. This measures the raw number of search sessions that resulted in no click:

    \[\mbox{lost clicks} = \mbox{search sessions} - \mbox{first clicks}\]

This is a useful measure that tells you how many times, in total, users initiated a session but found nothing of interest to click on.

You can also think of this as an indicator that measures the number of total failed search sessions.

One more point I’ll make on this is that, because it is a raw number (not a ratio or percentage), it is not useful as a key performance indicator (KPI).

Abandonment rate

Now, finally, to my proposal for a standard measure of the quality of a search solution – a measure that, I think, can be usefully applied to all enterprise search solutions, can be used to drive improvement within a solution, and can be used to compare across such solutions.

That measure is “abandonment rate”, which I define as the percent of sessions that are ‘failed sessions’:

    \[\mbox{abandonment rate} = {\mbox{lost clicks} \over \mbox{search sessions}}\]

which, after substituting the definition of lost clicks and simplifying, I normally write as:

    \[\mbox{abandonment rate} = 1 - ({\mbox{first clicks} \over \mbox{search sessions}})\]

This measure has some important advantages over a simpler click-rate model (e.g., [success rate] = [click] / [search]). For one thing, it avoids some simple problems that can be caused by a few anomalous users; for another, it avoids the ‘trap’ of assuming a click is a success.

Anomalous usage patterns

There are two anomalous patterns I see every once in a while:

  1. A single dedicated user (or a small number of such users) might page through dozens or hundreds of pages of results (I actually have seen this before!) – generating a LOT of search actions – and yet click on nothing or just a result or two.
    • If every other user found something interesting to click on and did so on the first page of results, the click rate is still artificially lowered by these “extra” searches.
  2. Conversely, users who are in a ‘research mode’ of usage (not a known item search) will click on a lot of results (I have also seen instances where a single user clicks on 100s of results all in the same search session).
    • Even if no other user found anything interesting to click on, the click rate is still artificially raised by these “extra” clicks.

By using the first click as the numerator and the search session as the denominator, these scenarios don’t come into play (note that because I am recommending still capturing the simpler ‘search’ and ‘click’ metrics, you can still do some interesting analyses with those!).
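
To make the difference concrete, here is a small, made-up illustration (the numbers are hypothetical and the snippet is just a sketch, not part of any particular analytics implementation). Suppose 99 users each run one search and click on one result, while a single anomalous user pages through 100 pages of results and clicks on nothing:

searches = 99 + 100          # every page view by the anomalous user counts as another search
clicks = 99
search_sessions = 99 + 1     # the anomalous user's whole effort is still just one session
first_clicks = 99

click_rate = clicks / searches                           # roughly 0.50, dragged down by one user
abandonment_rate = 1 - (first_clicks / search_sessions)  # 0.01: one failed session out of 100
print(round(click_rate, 2), round(abandonment_rate, 2))

The simple click-per-search rate suggests the experience is failing half the time, while the session-based abandonment rate correctly reports that only one session out of one hundred ended without a click.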

Bad Success and Good Abandonment

The second advantage I mentioned above is more of a philosophical one – a success rate, as defined above, builds in the assumption that a click measures user success. That is a strong assumption.

By focusing on abandonment, I find it a more honest view – your metrics don’t build in the assumption that a click is likely a success; instead, they treat the failure to find anything of interest to click on as the clearer indication of likely failure.

What do I mean?

When I consider the ideas of “success” and “failure” in a search solution, I always have to remind myself of the good and bad sides of both.

  • Good success – Good success is a click on a result that was actually useful and what the user needs to do their job. This, ultimately, is what you want to get to – however, because there is no way for a search solution to (at scale) know if any given result is “good” or “useful”, this is impossible to really measure.
  • Bad abandonment – This is the flip side – this is how I think of the experience where a user has a search session where they find nothing useful at all. Again, this is the clear definition of failure.

However, there are other possibilities to consider!

  • Bad success – This is when a user finds something that appears to be useful or what they need and they click on it, but it turns out to be something entirely different and not useful at all.
    • A classic example of bad success I have seen involves my firm’s branding library (named ‘Brand Space’). For whatever reason, many intranet managers like to create image libraries in their sites and name them ‘Brand Space’ (I think because they think of the image library as their own instance of ‘Brand Space’). They then leave that image library exposed in search (we train them not to, but sometimes they don’t listen). If an end user initiates a search session looking for Brand Space, they find the image library in the results, click on it, and are likely disappointed (I imagine such a user thinking, “What is this useless web page?”).
    • A different way to think of this is in regard to the perspective of someone who is responsible for a particular type of content (let’s say benefits information for your company) – they may think they know what users *should* access when they search in particular ways and clicking on anything else is an instance of ‘bad success’. I get this but, as the manager of the search solution, I am not in the position of defining what users *should* click on – I cannot read their minds to understand intent.
  • Good abandonment – This is when a user finds the information they need right on the search results screen. Technically, such a session would count as ‘abandoned’ even though the user got what they needed.
    • This is exactly the scenario I mentioned in the definition of a ‘click’ in my last post where I would like to define how to measure this but have never been able to figure out a way to do so.

Getting back to my description of how measuring and tracking abandonment rate is better than a success rate – my assumption has been that good abandonment and bad success will always exist for your users. However, good abandonment is likely a much smaller percentage of sessions than bad success and, more importantly, it is much easier to artificially “improve” your search by increasing bad success than by decreasing good abandonment.

Conclusion

There is my proposal for a measure to be used to assess search solutions for the quality of the user experience – abandonment rate.

It is not perfect and it is still just an indicator but I have found it incredibly useful to actually drive action for improvement. I’ll share more on this in my next post.

Standard Measures for enterprise search

Sunday, February 7th, 2021

In my last few posts, I have commented on the lack of standard measures to use for enterprise search (leading to challenges of comparing various solutions to others among other things) and suggested some criteria for what standard measures to use.

In this post, I am going to propose a few basic measures that I think meet the criteria and that any enterprise search solution should be able to provide. The labels are not critical for these, but the meaning of them is, I think, very important.

Search

First, and most important, is a search. A search is a single action in which a user retrieves a set of results from the search engine. Different user experiences may “count” these events differently.

When a user starts the process (in my experience, typically with a search term typed into a box on a web page somewhere), that is a single search.

If that user navigates to a second page of results, that is another search. Navigating to a third page counts as yet another search, etc.

Applying a filter (if the user interface supports such) counts as yet another search.

Re-sorting results counts as yet another search.

In a browser-based experience, even a user simply doing a page refresh counts as another search (though I will also say that in this case, if the interface uses some kind of caching of results, this might not actually truly retrieve a new set of results from the search engine, so this one could be a bit “squishy”).

In a user experience with an infinite scroll, the act of a user scrolling to the bottom of one ‘chunk’ of results and thus triggering the interface to retrieve the next ‘chunk’ also counts as yet another search (this is effectively equivalent to paging through results except it doesn’t require any explicit action by the user).

Click

The second basic measure is the click. A click is counted any time a user clicks on any result in the experience.

Depending on the implementation, differentiating the type of thing a user clicks on (an organic result or a ‘best bet’, etc.) can be useful – but I don’t consider that differentiation critical at the high level.

One thing to note here that I know is a gap – there are some scenarios where a user does not need to click on anything in the search results. The user might meet their information need simply by seeing the search results.

This could be because they just wanted to know if anything was returned at all. It could be because the information they need is visible right on the results screen (the classic example would be a search experience that shows people profiles, where the display includes some pertinent piece of information like a phone number). In a sophisticated search experience that offers “answers” to questions, the answer might be displayed right on the results screen. I have been puzzled about how to measure this scenario for a while. Other than some mechanism in the interface that allows a user to take some action to acknowledge that they achieved their need (“Was this answer useful?”), I’m not sure what that would be. I’m very interested to hear if others have solved this puzzle.
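
One way to at least capture the explicit acknowledgment mentioned above would be to log it as just another event in the search log, so that sessions with no click but with an acknowledgment could later be separated from truly failed sessions. The sketch below is purely hypothetical (the event and field names are mine, not from any particular product):

import json, time

def log_event(log_file, user, action, term, detail=None):
    """Append one search-log event; 'action' might be 'search', 'click', or 'acknowledge'."""
    event = {"ts": time.time(), "user": user, "action": action, "term": term, "detail": detail}
    log_file.write(json.dumps(event) + "\n")

with open("search_events.log", "a") as log:
    log_event(log, "alice", "search", "help desk phone number")
    # the user reads the number straight off the results page and confirms it was useful,
    # so this otherwise click-less session could be counted as "good abandonment"
    log_event(log, "alice", "acknowledge", "help desk phone number", "answer useful")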

Search Session

A third important metric is the search session. This is closely related to the search metric, but I do think that it is important to differentiate.

A search session is a series of actions a user takes that, together, constitute an attempt to satisfy a specific information need.

This definition, though, is really not deterministically measurable. There is no meaningful way (unless you can read the user’s mind) to know when they are “done”.

One possibility is to equate a search session to a visit – I find a good definition for this on Wikipedia in the Web analytics article:

A visit or session is defined as a series of page requests or, in the case of tags, image requests from the same uniquely identified client.

In the current solution I am working with, however, we have defined a search session to be a series of actions taken in sequence where the user does not change their search term. The user might navigate through a series of pages of results, reorder them, apply multiple filters, click on one or more results, etc., but, none of these count as another search session.

The rationale for this is that, based on anecdotal discussions with users, users tend to think of an effort using a single search term as a notional “search”. If the user fails with that term, they try another, but that is a different “search”.

Obviously, this is not truly accurate in all situations – if we could meaningfully detect (at scale, meaning across all of our activity) when changing the search term is really a restatement of the same information need vs. a completely different information need, we could do something more accurate, but we are not there, yet.

First Click

The last basic measure I propose is the first click.

A first click is counted the first time a user clicks on a result within a search session. If a user clicks on multiple things within a search session, they are all still counted as clicks, but not as first clicks.

If the user starts a new search session (which, in the current solution I work with, means they have changed their search term), then, if they click on some result, that is another first click.
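
To make these definitions concrete, here is a minimal sketch (my own illustration, not a prescribed implementation) of deriving the four measures from an ordered event log, assuming each event records a user, an action type, and the search term in effect:

# Illustrative only: each event is (user, action, term), already in time order.
events = [
    ("alice", "search", "benefits"),    # new session for alice
    ("alice", "search", "benefits"),    # page 2 / re-sort / filter: same session
    ("alice", "click",  "benefits"),    # first click for this session
    ("alice", "click",  "benefits"),    # a click, but not a first click
    ("alice", "search", "401k match"),  # term changed: a new session
    ("bob",   "search", "brand space"), # another user's session, no clicks
]

searches = clicks = sessions = first_clicks = 0
current_term = {}   # user -> term of their current session
clicked = {}        # user -> whether the current session already has a click

for user, action, term in events:
    if current_term.get(user) != term:  # a changed term starts a new search session
        current_term[user] = term
        clicked[user] = False
        sessions += 1
    if action == "search":
        searches += 1
    else:
        clicks += 1
        if not clicked[user]:
            first_clicks += 1
            clicked[user] = True

print(searches, clicks, sessions, first_clicks)  # 4 searches, 2 clicks, 3 sessions, 1 first click

In a real implementation the events would come from the search log and carry timestamps, but the grouping logic is the essential part: repeated searches on the same term stay within one session, and only the first click in each session increments the first click count.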

Conclusion and what’s next

That is the set of basic measures that I think could be useful to establish as a standard.

Next steps – I hope to engage with others working in this domain to refine these and tighten them up (especially a search session). I hope to make some contacts through the Enterprise Search Engine Professionals group on LinkedIn and perhaps other communities for this. If you are interested, please let me know!

In my next post, I will be sharing definitions of some important metrics derived from the basic measures above that I use and provide some examples of each.

Criteria for Standard Measures of Enterprise Search

Sunday, January 31st, 2021

In my last post, I wondered about the lack of meaningful standards for evaluating enterprise search implementations.

I did get some excellent comments on the post and also some very useful commentary from a LinkedIn discussion about this topic – I would recommend you read through that discussion. Udo Kruschwitz and Charlie Hull both provided links to some very good resources.

In this post, I thought I would describe what I think to be some important attributes of any standard measures that could be adopted. Here I will be addressing the specific actions to measure – in a subsequent post I will write about how these can be used to actually evaluate a solution.

Measurable

To state the obvious, we need to have metrics that are measurable and objective. Ideally, metrics that directly reflect user interaction with the search solution.

Measures that depend on subjective evaluation or get feedback from users through means other than their direct use of the tool can be very useful but introduce problems in terms of interpretation differences and sustainability.

For example, a feedback function built into the interface (“Are these results useful?” or even a more specific “Is this specific result useful for you here?”) can provide excellent insight, but such functions are used so little that the data is not useful overall.

Surveys of users inevitably fall into the problem of faulty or biased memory – in my experience, users have such a negative perception of enterprise search that individual negative experiences will overwhelm positive experiences with the search when you ask them to recall and assess their experience a day or week after their usage.

Common / Useful to compare implementations

Another important consideration is that a standard for evaluating enterprise search should include aspects of search that are common across the broad variety of solutions you might see.

In addition, they should lend themselves to comparing different solutions in a useful way.

Some implementations might be web-based (in my experience, this is by far the most common way to make enterprise search available). Some might be based on a desktop application or mobile app. Some implementations might depend only on users entering search terms to start a search session; some might support searching based only on search terms (no filtering or refining at all). Some implementations might provide “search as you type” (showing results immediately based on part of what the user has entered). There are many variations to consider here.

I would want to have measures that allow me to compare one solution to another – “Is this one better than that one?” “Are there specific user needs where this solution is better than that one?”

Likely to be insightful

Another obvious aspect is that we want to include measures that are likely to be useful.

Useful in what way, though?

My first thought is that it must measure whether the solution is useful for the users – does it meet the users’ needs? (With search, I would simplify this to “does it provide the information the user needs efficiently?”, but there are likely a lot of other ways to define “useful” even within a search experience.)

Operationalizable

I would want all measures I use to be consistently available and not to depend on someone actively having to “take a measurement” at a given time.

As mentioned above, measures that directly reflect what happens in the user experience are what I would be looking for. In this case, I would add in that the measures should be taken directly from the user experience – data captured into a search log file somewhere or captured via some other means.

This provides a data set that can be reviewed and used at basically any time and which (other than maintaining the system capturing the measurements) doesn’t require any effort to capture and maintain – the users use the search solution and their activities are captured.

Usable for overall and when broken down by dimensions

Finally, I would expect that measures support analysis at broad scales and also support the ability to drill into details using the same measures.

Examples of “broad scale” applicability: How good is this search solution overall? How good is my search solution in comparison to the overall industry average? How good are search solutions supporting the needs of users in the XYZ industry? How good are search solutions at supporting “known item” searching in comparison with “exploratory searching”?

Examples of drilling in: Within my user base, how successful are my users by department? How useful is the search solution in different topic areas of content? How good are results for individual, specific search criteria?

Others?

I’m sure I am missing a lot of potential criteria here – What would you add? Remove? Edit?

Evaluating enterprise search – standards?

Monday, January 18th, 2021

Over the past several years of working very closely with the enterprise search solution at Deloitte, I have tried to look “outside” as best as I can in order to understand what others in the industry are doing to evaluate their solutions in order to understand where ours ‘fits’.

I’ve attended a number of conferences and webcasts and read papers (many, I’ll admit, that are highlighted by Martin White on Twitter. I can’t recommend a follow of Martin enough!)

One thing I have never found is any common way to evaluate or talk about enterprise search solutions. I have seen several people (including Martin) comment on the relatively little research on enterprise search (as opposed to internet search, which has a lot of research behind it), and I am sure a significant reason for that is that there is no common way to evaluate the solutions.

If we could compare in a systematic way, we could start to understand how to do things like:

  • Identify common use cases that are visible in user behavior (via metrics)
  • Compare how ‘good’ different solutions are at meeting the core need (an employee needs to access some resource to do their job)
  • Compare different industries’ approaches to information seeking (again, as identified by user behavior via metrics) – for example, do users search differently in industrial companies vs. professional services companies vs. research companies?

Why do we not have a common set of definitions?

One possibility is certainly that I have still not read up enough on the topic – perhaps there is a common set of definitions – if so, feel free to share.

Another possibility is that this is a result of dependency on the metrics that are implemented within the search solutions enterprises are using. I have found that these are useful, but they don’t come with a lot of detail or clarity of definition. And, more specifically, they don’t seem common across products. That said, I have relatively limited exposure to multiple search solutions – again, I would be interested in insights from those who do have such exposure (perhaps any consultants working in this space?).

And, one more possible driver behind a lack of commonality is the proprietary nature of most implementations. I try to speak externally as frequently as I can, but I am always hesitant (and have been coached) not to go into too much detail about the implementation.

I do plan to put up a small series here, though, with some of the more elemental components of our metrics implementation for comparison with anyone who cares to share.

More soon!

Enterprise Search and Third-Party Applications

Tuesday, October 28th, 2008

Or, in other words, “How do you apply the application standards to improve findability to applications built by third-party providers who do not follow your standards?”

I’ve previously written about the standards I’ve put together for (web-based) applications that help ensure good findability for content / data within that application. These standards are generally relatively easy to apply to custom applications (though it can still be challenging to get involved with the design and development of those applications at the right time to keep the time investment minimal, as I’ve also previously written about).

However, it can be particularly challenging to apply these standards to third-party applications – For example, your CRM application, your learning management system, or your HR system, etc. Applying the existing standards could take a couple of different forms:

  1. Ideally, when your organization goes through the selection process for such an application, your application standards are explicitly included in the selection criteria and used to ensure you select a solution that will conform to your standards.
  2. More commonly, you will assess compliance with the standards (perhaps during selection, but perhaps later, during implementation) and you might need to implement some type of customization within the application to achieve compliance.
  3. Hopefully you identify the compliance gaps during selection (or at least later), but you find you cannot customize the application and you need a different solution.

The rest of this post will discuss a solution for option #3 above – how you can implement a different solution. Note that some search engines will provide pre-built functionality to enable search within many of the more common third party solutions – those are great and useful, but what I will present here is a solution that can be implemented independent of the search engine (as long as the search engine has a crawler-based indexing function) and which is relatively minimal in investment.

Solving the third-party application conundrum for Enterprise Search

So, you have a third-party application and, for whatever reason, it does not adhere to your application standards for findability. Perhaps it fails the coverage principle and it’s not possible to adequately find the useful content without getting many, many useless items; or perhaps it’s the identity principle and, while you can find all of the desirable targets, they have redundant titles; or it might even be that the application fails the relevance principle and, while you can index the high value targets and they show up with good names in results, they do not show up as relevant for keywords you would expect. Likely, it’s a combination of all three of these issues.

The core idea in this solution is that you will need a helper application that creates what I call “shadow pages” of the high value targets you want to include in your enterprise search.

Note: I adopted the use of the term “shadow page” based on some informal discussions with co-workers on this topic – I am aware that others use this term in similar ways (though I don’t think it means the exact same thing) and also am aware that some search engines address what they call shadow domains and discourage their inclusion in their search results. If there is a preferred term for the idea described here – please let me know!

What is a shadow page? For my purposes here, I define a shadow page as:

  • A page which uniquely corresponds to a single desirable search target;
  • A page that has a distinct, unique URL;
  • A page that has a <title> and description that reflects the search target of which it is a shadow, and that title is distinct and provides a searcher who sees it in a search results page with insight about what the item is;
  • A page that has good metadata (keywords or other fields) that describe the target using terminology a searcher would use;
  • A page which contains text (likely hidden) that also reflects all of the above, to enhance relevance for the words in the title, keywords, etc.;
  • A page which, when accessed, will automatically redirect a user to the page of which the page is a shadow.

To make this solution work, there are a couple of minimal assumptions of the application. A caveat: I recognize that, while I consider these as relatively simple assumptions, it is very likely that some applications will still not be able to meet these and so not be able to be exposed via your enterprise search with this type of solution.

  1. Each desirable search target must be addressable by a unique URL;
  2. It should be possible to define a query which will give you a list of the desirable targets in the application; this query could be an SQL query run against a database or possibly a web service method call that returns a result in XML (or probably other formats, but these are the most common in my experience);
  3. Given the identity (say, a primary key if you’re using a SQL database of some type) of a desirable search target, you must be able to also query the application for additional information about the search target.

Building a Shadow Page

Given the description of a shadow page and the assumptions about what is necessary to support it, it is probably obvious how they are used and how they are constructed, but here’s a description:

First – you would use the query that gives you a list of targets (item #2 from the assumptions) from your source application to generate an index page which you can give your indexer as a starting point.  This index page would have one link on it for each desirable target’s shadow page.  This index page would also have “robots” <meta> tags of “noindex,follow” to ensure that the index page itself is not included as a potential target.

Second – The shadow page for each target (which the crawler reaches thanks to the index page) is dynamically built from the query of the application given the identity of the desirable search target (item #3 from the assumptions).  The business rules defining how the desirable target should behave in search help define the necessary query, but the query would need to contain at minimum some of the following data: the name of the target, a description or summary of the target, some keywords that describe the target, a value which will help define the true URL of the actual target (per assumption #1, there must be a way to directly address each target).

The shadow page would be built something like the following:

  • The <title> tag would be the name of the target from the query (perhaps plus an application name to provide context)
  • The “description” <meta> tag would be the description or summary of the target from the query, perhaps plus a few static keywords that help ensure the presence of additional insight about the target.   For example, if the target represents a learning activity, the additional static text might indicate that.
  • The “keywords” <meta> tag would include the keywords from the query, plus some static keywords to ensure good coverage.  To follow the previous example, it might be appropriate to include words like “learning”, “training”, “class”, etc. in a target that is a learning activity to ensure that, if the keywords for the specific target do not include those words, searchers can still find the shadow page target in search.
  • The <body> of the page can be built to include all of the above text – from my experience, wrapping the body in a CSS style that visually hides the text keeps the text from actually appearing in a browser.
  • Lastly, the shadow page has a bit of JavaScript in it that redirects a browser to the actual target – this is why you need to have the target addressable via a URL and also why the query needs to provide the information necessary to create that URL.  Most engines will not execute the JavaScript (I know of none that do), so they will not know that the page is really a redirect to the desired target.

The overall effect of this is that the search engine will index the shadow page, which has been constructed to ensure good adherence to the principles of enterprise search, and to a searcher, it will behave like a good search target but when the user clicks on it from a search result, the user ends up looking at the actual desired target.  The only clue the user might have is that the URL of the target in the search results is not what they end up looking at in their browser’s address bar.

The following provides a simple example of the source (in HTML – sorry for those who might not be able to read it) for a shadow page (the parts that change from page to page are the placeholders such as “title of target”):

<html>
<head>
<TITLE>title of target</TITLE>
<meta name="robots" content="index, nofollow">
<meta name="keywords" content="keywords for target">
<meta name="description" content="description of target">
<script type="text/javascript">
document.location.href="URL of actual target";
</script>
</head>
<body>
<div style="display:none;">
<h1>title of target</h1>
description of target and keywords of target
</div>
</body>
</html>
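
To make the mechanics a bit more tangible, here is a minimal sketch of a generator for the index page and the shadow pages. It is not the implementation described above (which built the pages dynamically); it simply writes static files, which a crawler treats the same way. The database file, table and column names, and the deep-link URL pattern are all hypothetical stand-ins for whatever query and addressing scheme your application actually supports:

import sqlite3
from html import escape

def shadow_page(name, description, keywords, target_url):
    """Render one shadow page following the structure shown above."""
    return f"""<html>
<head>
<title>{escape(name)}</title>
<meta name="robots" content="index, nofollow">
<meta name="keywords" content="{escape(keywords)}">
<meta name="description" content="{escape(description)}">
<script type="text/javascript">
document.location.href="{target_url}";
</script>
</head>
<body>
<div style="display:none;">
<h1>{escape(name)}</h1>
{escape(description)} {escape(keywords)}
</div>
</body>
</html>"""

# Hypothetical read-only view over the application's data (assumption #2 from above)
conn = sqlite3.connect("application.db")
rows = conn.execute("SELECT id, name, description, keywords FROM desirable_targets")

links = []
for target_id, name, description, keywords in rows:
    target_url = f"https://app.example.com/item?id={target_id}"  # assumption #1: a deep link per item
    filename = f"shadow_{target_id}.html"
    with open(filename, "w") as f:
        f.write(shadow_page(name, description, keywords, target_url))
    links.append(f'<a href="{filename}">{escape(name)}</a>')

# The index page is the crawler's starting point: excluded from results but its links are followed.
with open("index.html", "w") as f:
    f.write('<html><head><meta name="robots" content="noindex,follow"></head><body>'
            + "<br>".join(links) + "</body></html>")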

Advantages of this Solution

A few things that are immediately obvious advantages of this approach:

  1. First and foremost, with this approach, you can provide searchers with the ability to find content which otherwise would be locked away and not available via your enterprise search!
  2. You can easily control the targets that are available via your enterprise search within the application (potentially much easier than trying to figure out the right combination of robots tags or inclusion / exclusion settings for your indexer).
  3. You can very tightly control how a target looks to the search engine (including integration with your taxonomy to provide elaborated keywords, synonyms, etc.).

Problems with this Solution

There are also a number of issues that I need to highlight with this approach – unfortunately, it’s not perfect!

  1. The most obvious issue is that this depends on the ability to query for a set of targets against a database or web service of some sort.
    1. Most applications will be technically able to support this, but in many organizations, this could present too great a risk from a data security perspective (the judicious use of database views and proper management of read rights on the database should solve this, however!)
    2. This potentially creates too high a level of dependence between your search solution and the inner workings of the application – an upgrade of the application could change the data schema enough to break this approach.  Again, I think that the use of database views can solve this (by abstracting away the details of the implementation into a single view which can be changed as necessary through any upgrade).
  2. Some applications may simply not offer a “deep linking” ability into high value content – there is no way to uniquely address a content item without the context of the application.  This solution can not be applied to such applications.  (Though my opinion is that such applications are poorly designed, but that’s another matter entirely!)
  3. This solution depends on JavaScript to forward the user from the shadow page to the actual target.  If your user population has a large percentage of people who do not use JavaScript, this solution fails them utterly.
  4. This solution depends on your search engine not following the JavaScript or somehow otherwise determining that the shadow page is a very low quality target (perhaps by examining the styles on the text and determining the text is not visible).  If you have a search engine that is this smart, hopefully you have a way to configure it to ignore this for at least some areas or page types.
  5. Another major issue is that this solution largely circumvents a search engine’s built in ability to do item-by-item security as the target to the search engine is the shadow page.  I think the key here is to not use this solution for content that requires this level of security.

Conclusion

There you have it – a solution to the exposure of your high value targets from your enterprise applications that is independent of your search engine and can provide you (the search administrator) with a good level of control over how content appears to your search engine, while ensuring that what is included highly adheres to my principles of enterprise search.

People Search and Enterprise Search, Part 3 – The Fourth Generation

Monday, October 20th, 2008

So we get to the exciting conclusion of my essays on the inclusion of employees in enterprise search. If you’ve read this far, you know how I have characterized the first and second generation solutions and also provided a description of a third generation solution (which included some details on how we implemented it).

Here I will describe what I think of as a fourth generation solution to people finding within the enterprise. As I mentioned in the description of the third generation solution, one major omission at this point is that the only way you can find people is through administrative information – things like their name, address, phone number, user ID, email, etc.

This is useful when you have an idea of the person you’re looking for, or at least the organization in which they might work. What do you do when you don’t know the person and may not even know the organization in which they work? You might know the particular skills or competencies they have, but that may be it. This is particularly problematic in larger organizations or organizations that are physically very distributed.

The core idea with this type of solution is to provide the ability to find and work with people based on aspects beyond the administrative – the skills of the people, their interests, perhaps the network of people with which they interact, and more. While this might be a simplification, I think of this as expertise location, though that, perhaps, most cleanly fits into the first use case described below.

Some common use cases for this type of capability include:

  • Peer-to-peer connections – an employee is trying to solve a particular problem and they suspect someone in the company may have some skills that would enable them to solve the problem more quickly. Searching using those skills as keywords would enable them to directly contact relevant employees.
  • Resource planning – a consulting organization needs to staff a particular project and needs to find specific people with a particular skill set.
  • Skill assessment – an organization needs to be able to ascertain the overall competency of their employees in particular skill sets to identify potential training programs to make available.

This capability is something that has often been discussed and requested at my current employer, but which no one has really been willing to sponsor. That being said, I know there are several vendors with solutions in this space, including (at least – please share if you know of others):

  • Connectbeam – A company I first found out about at KM World 2007. They had some interesting technology on display that combines expertise location with the ability to visualize and explore social networks based on that expertise. Their product could digest content from a number of systems to automatically discern expertise.
  • ActiveNet – A product from Tacit Software, which (at a high level) is similar to Connectbeam. An interesting twist to this product is that it leaves the individuals whose expertise are managed in the system in control of how visible they are to others. In the discussions I’ve had with this company about the product, I’ve always had the impression that, in part, this provides a kind of virtual mailing list functionality where you can contact others (those with the necessary expertise) by sending an email without knowing who it’s going to. Those who receive it can either act on it or not and, as the sender, you only know who replies.
  • Another product about which I only know a bit is from a company named Trampoline Systems. I heard about them as I was doing some research on how to tune a prototype system of my own and understand that their Sonar platform provides similar functionality.
  • [Edit: Added this on 03 November, 2008] I have also found that Recommind provides expertise location functionality – you can read more about it here.
  • [Edit: Added this on 03 November, 2008] I also understand that the Inquira search product provides expertise location, though it’s not entirely clear to me from what I can find about this tool how it does this.

A common aspect of these is that they attempt to (and perhaps succeed in) automating the process of expertise discovery. I’ve seen systems where an employee has to maintain their own skill set, and the problem with these is that the business process to maintain the data does not seem to really embed itself into a company – inevitably, the data gets out of date and ill-maintained, and so the system does not work.

I cannot vouch for the accuracy of these systems, but I firmly believe that if people search in the enterprise is going to meet the promise of enabling people to find each other and connect based on of-the-moment needs (skills, interests, areas of work, etc.), it will be based on this type of capability – automatically discovering those aspects of a worker based on their work products, their project teams, their work assignments, etc.

I imagine that in the not too distant future, as we see more merging of “web 2.0” functionality into the enterprise, this type of capability will become expected and welcome – it will be exciting to see how people will work together then.

This brings to a close my discussion of the various types of people search within the enterprise. I hope you’ve found this of interest. Please feel free to let me know if you think I have any omissions or misstatements in here – I’m happy to correct and/or fill in.

I plan another few posts that discuss a proof of concept I have put together based around the ideas of this fourth generation solution – look for those soon!

People Search and Enterprise Search, Part 2 – A third generation solution

Wednesday, October 15th, 2008

In my last post, I wrote about what I termed the first generation and second generation solutions to people search in the enterprise. This time, I will describe what I call a “third generation” solution to the problem, one that integrates people search with your enterprise search solution.

This is the stage of people search in use within my current employer’s enterprise.

What is the third generation?

What I refer to as a third generation solution for people search is one where an employee’s profile (their directory entry, i.e., the set of information about a particular employee) becomes a viable and useful target within your enterprise search solution. That is, when a user performs a search using the pervasive “search box” (you do have one, right?), they should be able to expect to find their fellow workers in the results (obviously, depending on the particular terms used to do the search) along with any content that matches that.

You remove the need for a searcher to know they need to look in another place (another application, i.e., the company’s yellow pages) and, instead, reinforce the primacy of that single search experience that brings everything together that a worker needs to do their job.

You also offer the full power of your enterprise search engine:

  • Full text search – no need to specifically search within a field, though most engines will offer a way to support that as well if you want to offer that as an option;
  • The power of the search engine to work on multi-word searches to boost relevancy – so a search on just a last name might include a worker’s profile in the search results, but one that includes both a first and last name (or user ID or location or other keywords that might appear in the worker’s profile) likely ensures that the person shows up in the first page of results amidst other content that matches;
  • The power of synonyms – so you can define synonyms for names in your engine and get matches for “Rob Smith” when a user searches on “Robert Smith” or “Bob Smith”;
  • Spelling corrections – Your engine likely has this functionality, so it can automatically offer up corrections if someone misspells a name, even.

Below, you will find a discussion of the implementation process we used and the problems we encountered. It might be of use to you if you attempt this type of thing.

Before getting to that, though, I would like to discuss what I believe to be the remaining issue with a third generation solution, in order to set up my follow-up post on this topic, which will describe additional ideas for solving the “people finder” problem within an enterprise.

The primary issue with the current solution we have (or any similar solution based strictly on information from a corporate directory) is that the profile of a worker consists only of administrative information. That is, you can find someone based on their name, title, department, address, email, etc., but you cannot find someone based on much more useful attributes – what they actually do, what their skills or competencies are, or what their interests might be. More on this topic in my next post!

The implementation of our third generation solution (read on for the gory details)

Read on from here for some insights on the challenges we faced in our implementation of this solution. It gets pretty detailed from here on out, so you’ve been warned!


What is Enterprise Search?

Thursday, October 9th, 2008

Having written previously about my own principles of enterprise search and then some ideas on how to select a search engine, I thought it might be time to back up a bit and write about what I think of as “enterprise search”. Perhaps a bit basic or unnecessary but it gives some context to future posts.

The Enterprise in Enterprise Search

For me, the factors of a search solution that make it an enterprise solution include the following:

The user interface to access the solution is available to all employees of the company.

This has the following implications:

  • Given today’s technologies, this probably means that it’s a web-based interface to access the search.
    • More generally, the interface needs to be easily made available across the enterprise. In any somewhat-large organization, that means something either available online or easily installed or accessed from a user’s workspace.
  • I would also suggest that the search interface should be easily accessible from an employee’s standard workspace or a common starting point for employees.
    • One easy way to achieve this is to make access to an enterprise search solution part of the general intranet experience – especially on an intranet that shares a standard look-and-feel (and so, hopefully, a standard template). This is the ubiquitous “search box”.
    • Alternately, if users commonly use a specific application (say a CRM application or a collaboration tool), integrating the enterprise search into that is a better solution.
    • Lastly, it might be necessary to make access to the search solution “many-headed”. Meaning, it might be best to make it available through a number of means, including through a standard intranet search, a specialized client-based application and embedded in other, user-specific tools.
  • Given the likely broad range of users who will use it, the search interface should be subject to very thorough usability design and testing.
  • Adopting some of the standard conventions of a search experience are a good idea.

The content available through the solution covers all (relevant) content available to employees

This has the following implications:

  • If your enterprise has a significant volume of web content, your enterprise search should index all of those web pages – either via a web crawling approach or via indexing the file system containing the files (if it’s all static).
  • If your enterprise has a significant volume of content (data) in enterprise applications (CRM solution, HR system, etc.), you should have a strategy to determine which (if any) of the content from those systems would be included, how it will be included and how it will be presented in search results (potentially combined with content from many other systems in the same results page)
  • If your enterprise has custom web applications (and what organization does not), you should expect to provide a set of standards for design and development of web applications to ensure good findability from them and also expect to have to monitor compliance with those.
  • If your enterprise has significant content in collaboration tools (and who doesn’t – at least email!), you should have a strategy for including or not including that content. This could be very broad-ranging – email, SharePoint (and similar applications from companies like Interwoven, Open Text, Vignette, Novell, etc.), shared file systems, IM logs, and so on. At the very least, you need to consider the cost and value of including these types of content.
  • If you have content repositories available to employees (a document management system (or systems!) or a records management system), again, you should consider the cost and value of including content from these in your enterprise search.
  • While it is very useful to have a separate search for finding employees in a corporate directory, I believe that an enterprise search solution should include employees as a distinct “content type” and include them in standard search results page as well when relevant (e.g., searching on employee names, etc)
  • Another major question regarding the content of your enterprise search is security. If you include all of that content in your search, how will you manage the security of the items? The two major options are early binding (building ACLs into the search index) or late binding (checking security at search time); a minimal sketch of the late-binding approach follows this list. If you are not familiar with these, I would recommend you do a bit of internet searching on the topics, as it’s very important to your solution. I’ve found some interesting articles on this topic.
    • In my mind, it’s also feasible to “punt” on security in a sense and work to ensure that your enterprise search solution includes everything that is generally accessible to your employee population but does not include anything with specific access control on it.
    • Achieving the effect of getting a user “close to” the content (ensuring some level of “information scent” shows up) while leaving it to the user to make the final step (through any application-specific access control) seems to work well.
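
As mentioned above, here is a minimal sketch of the late-binding style of security trimming. The query_engine and user_can_access functions are placeholders (every engine and directory has its own API); the point is simply that each candidate result is checked against the current user’s permissions at search time:

def secure_results(user, query, query_engine, user_can_access, wanted=10):
    """Return up to 'wanted' results the user is allowed to see (late binding)."""
    allowed = []
    for hit in query_engine(query):        # candidate results in relevance order
        if user_can_access(user, hit):     # the per-item, query-time permission check
            allowed.append(hit)
            if len(allowed) == wanted:
                break
    return allowed

The usual tradeoff is that late binding keeps permissions current but adds per-query cost, while early binding (storing ACLs in the index) is faster at query time but can lag behind permission changes.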

The Search in Enterprise Search

The other half of your enterprise search solution will be the search engine itself. There are plenty of options available (many!) with a variety of strengths and weaknesses. I think if you plan to implement a true enterprise search, the above list of content-based considerations should get you thinking of all of the places where you may have content “hiding” in your organization.

From that list, you should have a good sense of the volume of content and the complexity of sources your search will need to deal with.

Combining that with a careful requirements definition process and evaluation of alternatives should lead to a successful selection of a tool.

Once you have a tool, you “just” need to apply the proper amount of elbow grease to get it to index all of the content you wish and present it in a sensible way to your users! No big deal, right?