Lee Romero

On Content, Collaboration and Findability

Language change over time in your search log

Monday, October 10th, 2011

This is the second post in a series I have planned about the language found throughout your search log – all the way into the “long tail” – and how feasible it is to understand it all.

My previous post, “80-20: The lie in your search log?”, highlighted how the slope of the “short head” of your search terms may not be as steep as anecdotes would say.  That is, there can be a lot less commonality within a particular time range among even the most common terms in your search log than you might expect.

After writing that post, I began to wonder about the overall re-use of terms over periods of time.

In other words:

Even while commonality of re-using terms within a month is relatively low, how much commonality do we see in our users’ language (i.e., search terms) from month to month?

To answer this, I needed to take the entire set of terms for a month, compare them with the entire set from the next month, and determine the overlap; then compare the second month’s set of terms to a third month’s, and so on.  Logically not a hard problem, but quite a challenge in practice due to the volume of data I was manipulating (large only relative to the tools I had to manipulate it with).

So I pulled together every single term used over a period of about 18 months and broke them into the set used for each of those months and performed the comparison.
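The month-to-month comparison itself is simple set arithmetic once each month’s log is reduced to per-term counts. A minimal sketch of the idea (the `Counter` representation and the sample terms are my own illustration, not the actual data):

```python
from collections import Counter

def month_over_month_reuse(prev_month, curr_month):
    """Compare two months of search terms.

    Each argument is a Counter mapping a distinct search term to the
    number of times it was searched that month.
    """
    shared = set(prev_month) & set(curr_month)
    # Percent of this month's distinct terms that were also used last month.
    term_overlap = 100.0 * len(shared) / len(curr_month)
    # Percent of this month's total searches that used last month's terms.
    search_overlap = 100.0 * sum(curr_month[t] for t in shared) / sum(curr_month.values())
    return term_overlap, search_overlap

# Toy example months.
jan = Counter({"benefits": 40, "payroll": 25, "vpn": 10, "timesheet": 5})
feb = Counter({"benefits": 50, "w2": 30, "vpn": 15, "holidays": 5})
print(month_over_month_reuse(jan, feb))
```

Run over 18 consecutive months, this gives 17 pairs of overlap figures to average.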

Before getting into the details, a few details to share for context about the search solution I’m writing about here:

  • The average number of searches performed each month was almost 123,000.
  • The average number of distinct terms during this period was just under 53,000.
  • This results in an average of about 2.3 searches for each distinct term.

My expectation was that comparing the entire set of terms from one month to the next would show a relatively high percentage of overlap.  What I found was not what I expected.

If you look at the unique terms and their overlap, the average overlap between months was a shockingly low 13.2%.  In other words, over 86% of the terms in any given month were not used at all in the previous month.

Month to Month Re-Use of Search Terms

If you look at the total searches performed and the percent of searches performed with terms from the prior month, this goes up to an average of 36.2% – reflecting that the terms re-used in a subsequent month are among the most common terms overall.

Month to Month Re-Use of Search Terms

As you can see, the amount of commonality from month-to-month among the terms used is very low.

What can you draw from this observation?

In a brief discussion about this with noted search analytics expert Lou Rosenfeld, his reaction was that this represented a significant amount of change in the information needs of the users of the system – significant enough to be surprising.

Another conclusion I draw from this is that it provides another reason why it is very hard to meaningfully improve search across the language of your users.  Based on my previous post on the flatness of the curve of term use within a month, we know that we need to look at a pretty significant percentage of distinct terms each month to account for a decent percentage of all searches – 12% of distinct terms to account for only 50% of searches.  In our search solution, that 12% doesn’t seem that large until you realize it still represents about 6,000 distinct terms.

Coupling that with the observation from the analysis here means that even if you review those terms for a given month, you will likely need to review a significant percentage of brand new terms the next month, and so on.  Not an easy task.
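To make the 12%-for-50% figure concrete, here is a sketch of the coverage computation – given per-term search counts, what fraction of distinct terms (most common first) you would have to review to cover a target share of searches (the toy counts are illustrative):

```python
def terms_needed_for_coverage(counts, target=0.5):
    """Return the fraction of distinct terms (most common first)
    needed to account for `target` fraction of all searches."""
    counts = sorted(counts, reverse=True)
    total = sum(counts)
    covered = 0
    for i, c in enumerate(counts, start=1):
        covered += c
        if covered >= target * total:
            return i / len(counts)
    return 1.0

# Toy distribution: a few popular terms plus a long tail of singletons.
counts = [30, 20, 10] + [1] * 40
print(terms_needed_for_coverage(counts, 0.5))
```

The flatter the distribution, the closer this fraction creeps toward 1.0 – which is exactly the review burden described above.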

Having established just how challenging this can be, my next few posts will provide some ideas for grappling with the challenges.

In the meantime, if you have any insight on similar statistics from your solution (or statistics about the shape of the search log curve I previously wrote about), please feel free to share here, on the SearchCoP on Yahoo! groups or on the Enterprise Search Engine Professionals group on LinkedIn – I would very much like to compare numbers to see if we can identify meaningful generalizations across different solutions.

80-20: The lie in your search log?

Saturday, November 13th, 2010

Recently, I have been trying to better understand the language in use by users of our search solution, and to do so, I have been exploring the tools and techniques one might use. This is the first post in a planned series about this effort.

I have many goals in pursuing this.  The primary goal has been to be able to identify trends from the whole set of language in use by users (and not just the short head).  This goal supports the underlying business desire of identifying content gaps or (more generally) places where the variety of content available in certain categories does not match the variety expected by users (i.e., how do we know when we need to target the creation and publication of specific content?)

Many approaches to this do focus on the short head – typically the top N terms, where N might be 50 or 100 or even 500 (some number that’s manageable).  I am interested in identifying ways to understand the language through the whole long tail as well.

As I have dug into this, I realized an important aspect of this problem is to understand how much commonality there is to the language in use by users and also how much the language in use by users changes over time – and this question leads directly to the topic at hand here.

Search Term Usage

Chart 1

There is an anecdote I have heard many times about the short head of your search log: that “80 percent of your searches are accounted for by the top 20% most commonly-used terms”.  I now question this and wonder what others have seen.

I have worked closely with several different search solutions in my career, and the three I have worked most closely with (and have the most detailed insight into) do not come even close to the above assertion.  Chart 1 shows the usage curve for one of these.  The X axis is the percent of distinct terms (ordered by use) and the Y axis shows the percent of all searches accounted for by all terms up to X.

From this chart, you can see that it takes approximately 55% of distinct terms to account for 80% of all searches – that is a lot of terms!
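For anyone wanting to reproduce this kind of chart from their own log, a sketch of how the curve is computed (the input counts are illustrative):

```python
def usage_curve(counts):
    """Given per-term search counts, return (x, y) points where
    x = fraction of distinct terms (ordered most-used first) and
    y = cumulative fraction of all searches those terms account for."""
    counts = sorted(counts, reverse=True)
    total = sum(counts)
    n = len(counts)
    x, y, running = [], [], 0
    for i, c in enumerate(counts, start=1):
        running += c
        x.append(i / n)
        y.append(running / total)
    return x, y

x, y = usage_curve([50, 30, 10, 5, 5])
print(list(zip(x, y)))
```

Plotting x against y gives the curve shown in Chart 1; an “80/20” log would cross y = 0.8 at x = 0.2.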

This curve shows the usage for one month – I wondered how similar this would be for other months and found (for this particular search solution) that the curves for every month were basically identical!

Wondering if this was an anomaly, I looked at a second search solution I have close access to, to see if it might show signs of the “80/20” rule.  Chart 2 adds the curve for this second solution (it’s the blue curve – the higher of the two).

Chart 2

In this case, you can see that the curve is “higher” – it reaches 80% of searches at about 37% of distinct terms.  However, it is still pretty far from the “80/20” rule!

After looking at this data in more detail, I have realized why I have always been troubled at the idea of paying close attention to only the so-called “short head” – doing so leaves out an incredible amount of data!

In trying to understand why the usage curves are so different, even though neither comes close to adhering to the “80/20” rule, I realized that there are some important distinctions between the two search solutions:

  1. The first solution is from a knowledge repository – a place where users primarily go in order to do research; the second is for a firm intranet – much more focused on news and HR type of information.
  2. The first solution provides “search as you type” functionality (showing a drop-down of actual search results as the user types), while the second provides auto-complete (showing a drop-down of possible terms to use).  The auto-complete may be encouraging users to adopt more commonality.

I’m not sure how (or really if) these factor into the shape of these curves.

In understanding this a bit better, I hypothesize two things:  1) the shape of this curve is stable over time for any given search solution, and 2) the shape of this curve tells you something important about how you can manage your search solution.  I am planning to dig more to answer hypothesis #1.

Questions for you:

  • Have you looked at term usage in your search solution?
  • Can you share your own usage charts like the above for your search solution and describe some important aspects of your solution?  Insight on more solutions might help answer my hypothesis #2.
  • Any ideas on what the shape of the curve might tell you?

I will be writing more on these search term usage curves in my next post as I dig more into the time-stability of these curves.

Embedding Knowledge Sharing in Performance Management

Tuesday, February 10th, 2009

In my last post, I wrote about a particular process for capturing “knowledge nuggets” from a community’s on-going discussions and toward the end of that write up, I described some ideas for the motivation for members to be involved in this knowledge capture process and how it might translate to an enterprise. All of the ideas I wrote about were pretty general and as I considered it, it occurred to me that another topic is – what are the kinds of specific objectives an employee could be given that would (hopefully) increase knowledge sharing in an enterprise? What can a manager (or, more generally, a company) do to give employees an incentive to share knowledge?

Instead of approaching this from the perspective of what motivates participants, I am going to write about some concrete ideas that can be used to measure how much knowledge sharing is going on in your organization. Ultimately, a company needs to build into its culture and values an expectation of knowledge sharing and management in order to have a long-lasting impact. I would think of the more tactical and concrete ideas here as a way to bootstrap an organization into the mindset of knowledge sharing.

A few caveats: First – Given that these are concrete and measurable, they can be “gamed” like anything else that can be measured. I’ve always thought measures like this need to be part of an overall discussion between a manager and an employee about what the employee is doing to share knowledge and not (necessarily) used as absolute truth.

Second – A knowledge sharing culture is much more than numbers – it’s a set of expectations that employees hold of themselves and others; it’s a set of norms that people follow. That being said, I do believe that it is possible to use some aspects of concrete numbers to understand impacts of knowledge management initiatives and to understand how much the expectations and norms are “taking hold” in the culture of your organization. Said another way – measurement is not the goal, but if you cannot measure something, how do you know its value?

Third – I, again, need to reference the excellent guide, “How to use KPIs in Knowledge Management” by Patrick Lambe. He provides a very exhaustive list of things to measure, but his guide is primarily written as ways to measure the KM program. Here I am trying to personalize it down to an individual employee and setting that employee’s objectives related to knowledge sharing.

In the rest of this post, I’ll make the assumption that your organization has a performance management program and that that program includes the definition for employees of objectives they need to complete during a specific time period. The ideas below are applicable in that context.

  • Community membership – Assuming your community program has a way to track community membership, being a member of relative communities can be a simple objective to accomplish.
  • Community activity – Assuming you have tools to track activity by members of communities, this can give you a way to set objectives related to being active within a community (which I think is much more valuable than simply being a member). It’s hard to set specific objectives for this type of thing, but the objective could simply be: “Be an active member of relevant communities”. Some examples:
    • If your communities use mailing lists, you can measure posts to community mailing lists.
    • If your communities use a collaboration tool, such as a wiki, blog, or shared spaces, measure contributions to those tools.
    • If your communities manage community-based projects, measure involvement in those projects – tasks, deliverables, etc.
    • Assuming your communities hold events (in-person meetings, webcasts, etc.), measure participation in those events.
  • Contribution in a corporate knowledge base – An obvious suggestion. Assuming your organization has a knowledge base (perhaps multiples?), you can set expectations for your employee’s contributions to these.
    • Measure contributions to a document management system. More specifically, measure usage of contributions as well.
    • If your organization provides product support of any sort, measure contributions to your product support knowledge base
    • If you have a corporate wiki, measure contributions to the corporate wiki
    • If you have a corporate blog, measure posts and comments on the corporate blog
    • Measure publications to the corporate intranet
    • In your services organization (if you have one), measure contributions of deliverables to your clients. Especially ones of high re-use value.
    • Measure relevance or currency of previously contributed content – Does an employee keep their contributions up to date?
  • A much different aspect of a knowledge sharing culture is to also capture when employees look for knowledge contributed by others – that is, the focus cannot simply be on how much output an employee generates but also on how effective an employee is at re-using the knowledge of others.
    • This one is harder for me to get my head around because, as hard as it can be to assign any credible value to the measurements listed above, it’s harder to measure the value someone gets out of received knowledge.
    • Some ideas…
      • Include a specific objective related to receiving formalized training – while a KM program might focus on less formal ways to share knowledge, there’s nothing wrong with this simple idea.
      • If your knowledge management tools support it, measure usage by each employee of knowledge assets – do they download relevant documents? Read relevant wiki articles or blog posts?
      • Measure individual usage of search tools – at least get an indication of whether an employee looks for existing assets first instead of re-inventing the wheel.

Not all of these will apply to all employees and some employees may not have any specific, measurable knowledge sharing objectives (though that seems hard to imagine regardless of the job). An organization should look at what they want to accomplish, what their tool set will support (or what they’re willing to enhance to get their tool set to support what they want) and then be specific with each employee. This is meant only as a set of ideas or suggestions to consider in making knowledge sharing an explicit, concrete and measurable activity for your employees.

Rolling Up Objectives

Given some concrete objectives to measure employees against, it seems relatively simple to roll those objectives up to management to measure (and set expectations for, up front) knowledge sharing by a team of employees, not just individual employees. On the other hand, a forward-thinking organization will define group-level objectives which can be cascaded down to individual employees.

Given either of these approaches, a manager (or director, VP, etc.) may then have both an organizational level objective and their own individual objectives related to knowledge sharing.

Knowledge Sharing Index

Lastly – while I’ve never explored this, several years ago a vice president at my company asked for a single index of knowledge sharing. I would make an analogy to something like a stock index – a mathematical combination of measurements of different aspects of knowledge sharing within the company. A single number that somehow denotes how much knowledge sharing is going on.

I don’t seriously think this could be meaningful but it’s an interesting idea to explore. Here are some definitions I’ll use to do so:

  • You would need to identify your set of knowledge sharing activities to measure – Call these A1, … , An. Note that these measurements do not need to really measure “activity”. Some might measure, say, the number of members in your communities at a particular time or the number of users of a particular knowledge base during a time period.
  • Define how you measure knowledge sharing for A1, … , An – for a given time t, the measurement of activity Ai is Mt,i
  • You then need to define a starting point for measurement – perhaps a specific date (or week or month or whatever is appropriate) whose level of activity represents the baseline for measurement. Call these B1, …, Bn – basically, Bi is M0,i
  • Assuming you have multiple types of activity to measure, you need to assign a weight to each type of activity that is measured – how much impact does change in each type of activity have on the overall measurement? Call these W1, …, Wn.

Given the above, you could imagine the “knowledge sharing index” at any moment in time could be computed as (forgive the notation – I don’t know how to make this look like a “real” formula!):

Knowledge index at time t = Sum (i=1…N) of Wi * ( Mt,i / Bi )

A specific example:

  1. Let’s say you have three sources of “knowledge sharing” – a corporate wiki, a mailing list server and a corporate knowledge base
  2. For the wiki, you’ll measure total edits every week, for the list server, you’ll measure total posts to all mailing lists on it and for the knowledge base, you’ll measure contributions and downloads (as two measures).
  3. In terms of weights, you want to give the mailing lists the least weight, the wiki an intermediate weight and the combined knowledge base the most weight. Let’s say the weights are 15 for the mailing lists, 25 for the wiki, 25 for the downloads from the knowledge base and 35 for contributions to the knowledge base. (So the weights total to 100!)
  4. Your baseline for future measurement is 200 edits in the wiki, 150 posts to the list server, 25 contributions to the knowledge base and downloads of 2,000 from the knowledge base
  5. At some week after the start, you take a measurement and find 180 wiki edits, 160 posts to the list server, 22 knowledge base contributions and 2200 downloads from the knowledge base.
  6. The knowledge sharing index for that week would be 96.8. This is “down” from the baseline value of 100 even though half of the measures are up (which simply reflects the relative importance of knowledge base contributions, which are down).
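The worked example can be checked with a short script (the dictionary keys are names I made up for illustration; note that because the weights sum to 100, the baseline week itself would score exactly 100):

```python
def knowledge_index(weights, baseline, current):
    """Weighted knowledge sharing index: sum of Wi * (Mt,i / Bi)."""
    return sum(w * current[k] / baseline[k] for k, w in weights.items())

weights  = {"wiki_edits": 25, "list_posts": 15, "kb_contrib": 35, "kb_downloads": 25}
baseline = {"wiki_edits": 200, "list_posts": 150, "kb_contrib": 25, "kb_downloads": 2000}
current  = {"wiki_edits": 180, "list_posts": 160, "kb_contrib": 22, "kb_downloads": 2200}

print(round(knowledge_index(weights, baseline, current), 1))
```

Working through the terms: 25 × 0.9 + 15 × (160/150) + 35 × 0.88 + 25 × 1.1 = 22.5 + 16.0 + 30.8 + 27.5 = 96.8.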

If I were to actually try something like this, I would pick the values of Wi so that the baseline measurement (when t = 0) comes to a nice round value – 100 or something. You can then imagine reporting something like, “Well, knowledge sharing for this month is at 110!” Or, “Knowledge sharing for this month has fallen from 108 to 92”. If nothing else, I find it amusing to think so concretely in terms of “how much” knowledge sharing is going on in an organization.

There are some obvious complexities in this idea that I don’t have good answers for:

  1. How to manage a new means to measure activity becoming available? For example, your company implements a new collaboration solution. Do you add it in as a new factor with its weight and just have to know that at some point there’s a step function of change in the measure that doesn’t mean anything except for this new addition? Do you try to retroactively adjust weights of sources already included to keep the metrics “smooth”?
  2. How to handle retiring a source of activity? For example, you retire that aging (but maybe still used extensively) mailing list server. Same question as above, though perhaps simpler – you could just retroactively remove measurements from the now-retired source to keep a smooth picture.
  3. How to handle (or do you care to handle?) a growing or shrinking population of knowledge workers? Do you care if your metric goes up because you acquired a new company (for example) or do you need to normalize it to be independent of the number of workers involved?

In any event – I think this is an interesting, if academic, discussion and would be interested in others’ thoughts on either individual performance management or the idea of a knowledge sharing index.

Search Analytics – Search Results Usage

Monday, January 26th, 2009

In my previous two posts, I’ve written about some basic search analytics and then some more advanced analysis you can also apply. In this post, I’ll write about the types of analysis you can and should be doing on data captured about the usage of search results from your search solution. This could largely fall under the “advanced” analytics topic, but for our search solution it is not built into the engine and was implemented only in the last year through some custom work, so it feels different enough (to me) – and has enough details of its own – that I decided to break it out.

Background

When I first started working on our search solution and dug into the reports and data we had available about search behavior, I found we had things like:

  • Top searches per reporting period
  • Top indexes used and the top templates used
  • Searches per hour (or day) for the reporting period (primarily useful to know how much hardware your solution needs)
  • Breakdowns of searches by “type”: “successful” searches, “not found” searches, “error” searches, “redirected” searches, etc.
  • A breakdown of the page of results on which a user (allegedly) found the desired item

and much more. However, I was frustrated by this because it did not give me a very complete picture. We could see the searches people were using – at least the top searches – but we could not get any indication of “success” or even of what people found useful in search. The closest we got from the reports was the last item listed above, which in a typical report might look something like:

Search Results Pages

  • 95% of hits found on results page 1
  • 4% of hits found on results page 2
  • 1% of hits found on results page 3
  • 0% of hits found on results page 4
  • Users performed searches up to results page 21

However, all this really reflects is the percentage of each page number visited by a searcher – so 95% of users never go beyond page 1 and the engine assumes that means they found what they wanted there. That’s a very bad assumption, obviously.

A Solution to Capture Search Results Usage

I wanted to be able to understand what people were actually clicking on (if anything) when they performed a search! I ended up solving this with a very simple solution (simple once I thought of it), which I believe emulates what Google (and probably many other search engines) does. I built a simple servlet that takes a number of parameters, including the (encoded) target URL and the various pieces of data about a search result; it stores an event built from those parameters in a database and then forwards the user to the desired URL. The search results page was then updated to link to that servlet instead of directly to the target. That’s been in place for a while now and the data is extremely useful!
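As a rough illustration of the idea (the actual implementation was a Java servlet; the parameter names and URL shape here are my assumptions, not the real ones), the click-logging redirect might look like:

```python
from urllib.parse import parse_qs, urlparse
from datetime import datetime, timezone

click_log = []  # stand-in for the database table the servlet writes to

def handle_click(request_url):
    """Log a search-result click, then return the target URL the
    user should be redirected to."""
    params = parse_qs(urlparse(request_url).query)
    click_log.append({
        "target": params["url"][0],              # decoded target URL
        "query": params.get("q", [""])[0],       # search criteria used
        "page": int(params.get("page", ["1"])[0]),   # results page number
        "rank": int(params.get("rank", ["1"])[0]),   # position on that page
        "clicked_at": datetime.now(timezone.utc).isoformat(),
    })
    return params["url"][0]

target = handle_click("/track?url=http%3A%2F%2Fintranet%2Fbenefits.pdf&q=benefits&page=1&rank=2")
print(target)
```

In the real thing the handler would also record relevance, source index, and best-bets status, and would issue an HTTP redirect rather than returning the URL.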

By way of explanation, the following are the data elements being captured for each “click” on a search result:

  • URL of the target
  • search criteria used for the search
  • Location of the result (which page of results, which result number)
  • The relevance of the result
  • The index that contained the result and whether it was in the ‘best bets’ section
  • The date / time of the click

This data provides a lot of insight on behavior. You can guess what someone might be looking for based on the searches they perform, but you can come a lot closer to understanding what they’re really looking for by seeing what they actually accessed. Of course, it’s important to remember that a click does not necessarily mean the user found what they were looking for – it may only indicate which result looked most attractive to them – so there is still some uncertainty in understanding this.

While I ended up having to do some custom development to achieve this, some search engines will capture this type of data, so you might have access to all of this without any special effort on your part!

Also – I assume it would be possible to capture a lot of this using a standard web analytics tool as well. I had several discussions with our web analytics vendor about this, but resource constraints kept it from getting implemented; it also seemed it would depend in part on the target of the click being instrumented in the right way (having JavaScript in it to capture the event). So any page that did not have that (say, a web application whose template could not be modified) or any document (something like a PDF, etc.) would likely not be captured correctly.

Understanding Search Usage

Given the type of data described above, here are some of the questions and actions you can take as a search analyst:

  • You know the most common searches being performed (reported by your search engine) – what are the most common searches for search result clicks?
    • If you do not end up with basically the same list, that would indicate a problem, for sure!
    • Action: Understanding any significant differences, though, would be very useful – perhaps there is key content missing in your search (so users don’t have anything useful to click on).
  • For common searches (really, for whatever subset you want to examine but I’m assuming you have a limited amount of time so I would generally recommend focusing on the most common searches), what are the most commonly clicked on results (by URL)?
    • Do these match your expectations? Are there URLs you would expect to see but don’t?
    • Action: As mentioned in the basic analytics article, you can identify items that perhaps are not showing properly in search that should and work on getting them included (or improved if your content is having an identity issue).
  • Independent of the search terms used, what are the most commonly accessed URLs from search?
    • For each of the most commonly used URLs, what keywords do users use to find them?
    • Does the most common URL clicked on change over time? Seasonally? As mentioned in the basic analytics article, you can use this insight to more proactively help users through updates to your navigation.
    • Action: Items that are common targets from search might present navigation challenges for your users. Investigate that.
    • Action: Items that are common targets but which have a very broad spectrum of keywords that lead a user to it might indicate a landing page that could be split out into more refined targets. That being said, it is very possible that users prefer the common landing page and following the navigation from there instead of diving deeper into the site directly from search. Some usability testing would be appropriate for this type of change.
  • A very important metric: what is the percentage of “fall outs” (my own term – is there a common one?)? Meaning, what percentage of searches performed do not result in the user selecting any result? For me, this statistic provides one of the best pieces of insight you can automatically gather on the quality of results.
    • More specifically, measure the percentage fall out for specific searches and monitor that. Focus on the most common searches or searches that show up as common over longer durations of time.
    • Action: Searches that have high fall out would definitely indicate poor-performing searches and you should work to identify the content that should be showing and why it doesn’t. Is the content missing? Does it show poorly?
  • What percentage of results come from best bets?
    • Looking at this both as an overall average and also for individual searches or URLs can be useful to track over time.
    • Action: At the high level (overall average) a move down in this percentage over time would indicate that the Best Bets are likely not being maintained.
      • Look for items that are commonly clicked on that are not coming from Best Bets and consider if they should be added!
      • Are the keywords associated with the best bets items kept up to date?
    • Action: Review the best bets and confirm if there are items that should be added. Also, does your search results UI present the best bets in an obvious way?
  • What is the percentage of search results usage that comes from each page of results (how many people really click on an item on page 2, page 3, etc.)?
    • Are there search terms or search targets that show up most commonly not on page 1 of the results?
    • Action: If there are searches where the percentage of results clicked is higher on pages after page 1, you should review what is showing up on the first page.  It would seem that the desired target is not showing up on the first page (at least at a higher rate than for other searches).
    • Action: If there are URLs where the percentage of times they are clicked on in pages beyond the first page of results is higher than for other URLs, look at those URLs – why are they not showing up higher in the results?
  • Depending on the structure of the URLs in use within your content, it might also be possible to do some aggregation across URLs to provide insight on search results usage across larger pieces of your site. For example, if you use paths in your URLs, you could aggregate this data on URL patterns – how many search result clicks go to an item whose URL looks like “http://site.domain.com/path1/path2”?
    • Assuming you can do this with your data, you can then analyze common keywords used to access a whole area instead of focusing on specific URLs
    • If your site is dynamic (using query strings) it might be possible to do some aggregation based on the patterns in the query strings of the URLs instead to achieve the same results.
    • This type of analysis can actually be very useful to find cases where a user is “getting close” to a desired item but they’re not getting the most desirable target because the most desirable target does not show up well in search. (So a user might make their way to the benefits area but might not be directly accessing the particular PDF describing a particular benefit.)
      • Action: You can then identify items for improvement.
    • All of the above detailed questions about URLs can be asked about aggregations of URLs, so keep that in mind.
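Of the metrics above, the fall-out rate is particularly easy to automate once you have both logs. A sketch (the input dictionaries are illustrative – in practice you would build them from the search log and the click data described earlier):

```python
def fall_out(search_counts, click_counts):
    """Percent of searches for each term that ended with no result click.

    search_counts: term -> number of searches performed
    click_counts:  term -> number of those searches with at least one click
    """
    return {
        term: 100.0 * (searches - click_counts.get(term, 0)) / searches
        for term, searches in search_counts.items()
    }

searches = {"benefits": 200, "vpn": 50, "org chart": 40}
clicks   = {"benefits": 150, "vpn": 45}
print(fall_out(searches, clicks))
```

Terms with a high fall-out percentage (here, “org chart” at 100%) are exactly the poor-performing searches flagged for investigation above.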

You can also combine data from this source with data from your web analytics solution to do some additional analysis. If you capture the search usage data in your web analytics tool (as I mention above should be possible), doing this type of analysis should be much easier, too!

  • For URLs commonly clicked on from search results, what percentage of their access is through search?
    • Action: If a page has a high percentage of its access via search, this identifies a navigation issue to address.
    • One case I have not yet worked out is a page that is very commonly accessed from search results (high compared to other results) but for which those accesses represent a low percentage of use of that page – do you care? What action (if any) might be driven from this? It seems like from the perspective of search, it’s important but there does not seem to be a navigational issue (users are getting to the target OK for the most part). Any thoughts?
  • Turning around the above, for commonly accessed pages (as reported by your web analytics tool), what percentage of their access comes via search? In my experience, it’s likely that the percentage via search would be low if the pages themselves are highly used already, but this is good to validate for those pages.
    • Action: As above, a high percentage of accesses via search would seem to indicate a navigation issue.
  • You can also use your web analytics package to get a sense of the “fall outs” mentioned above at a high level – using the path functionality of your web analytics package, what percentage of accesses to your search results page have a “next page” where the user leaves the site? What percentage leads to a page that is known not to be a relevant target? (In our data, I see a large percentage of users return to the home page, for example – it is possible the user clicked on a result that is the home page, but it seems unlikely.)
    • However, you will likely not have any insight about what the searches were that led to this and not know what the variance is across different searches.
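
A minimal sketch of the “percentage of access via search” calculation: the two dicts below are hypothetical exports (URL to count) from the search log and the web analytics tool respectively.

```python
def search_access_share(search_clicks, page_views):
    """Fraction of each page's total views that arrived via search results."""
    return {
        url: (search_clicks.get(url, 0) / views if views else 0.0)
        for url, views in page_views.items()
    }

# Hypothetical counts for one reporting period
search_clicks = {"/policies/holidays.html": 450, "/index.html": 30}
page_views = {"/policies/holidays.html": 500, "/index.html": 12000}

share = search_access_share(search_clicks, page_views)
# A high share (the holidays page here sits at 90%) flags a likely navigation gap
```

The home page, by contrast, shows a tiny share, matching the expectation that already highly-used pages get little of their traffic from search.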

Summing Up

Here’s a wrap (for now) on the types of actionable metrics you might consider for your search program. I’ve covered some basic metrics that just about any search engine should be able to support; then some more complex metrics (requiring combining data from other sources or some kind of processing on the data used for the basic metrics) and in this post, I’ve covered some data and analysis that provides a more comprehensive picture of the overall flow of a user through your search solution.

There are a lot more interesting questions I’ve come up with in the time I’ve had access to the data described above and also with the data that I discussed in my previous two posts, but many of them seem a bit academic and I have not been able to identify possible actions to take based on the insights from them.

Please share your thoughts or, if you would, point me to any other resources you might know of in this area!

Search Analytics – Advanced Metrics

Friday, January 23rd, 2009

In my last post, I provided a description of some basic metrics you might want to look into using for your search solution (assuming you’re not already). In this post, I’ll describe a few more metrics that may take a bit more effort to pull together (depending on your search engine).

Combining Search Analytics and Web Analytics

First up – there is quite a lot of insight to be gained from combining your search analytics data with your web analytics data. It is even possible to capture almost all of your search analytics in your web analytics solution which makes this combination easier, though that can take work. For your external site, it’s also very likely that your web analytics solution will provide insight on the searches that lead people to your site.

A first useful piece of analysis you can perform is to review your top N searches, perform the same searches yourself and review the resulting top target’s usage as reported in your web analytics tool.

  • Are the top targets the most used content for that topic?
  • Assuming you can manipulate relevancy at an individual target level, you might bump up the relevancy for items that are commonly used but which show below other items in the search results (or you might at least review the titles and tags for the more-commonly-used items and see if they can be improved).
  • Are there targets you would expect to see for those top searches that your web analytics tool reports as highly utilized but which don’t even show in the search results for the searches? Perhaps you have a coverage issue and those targets are not even being indexed.
  • It might be possible to integrate data from your web analytics solution reflecting usage directly into your search to provide a boost in relevance for items in search that reflects usage.
  • [Update 26 Jan 2009] One item I forgot to include here originally is to use your web analytics tool to track the page someone is on when they perform a search (assuming you provide persistently available access to your search tool – say in a persistently available search box on your site). Knowing this can help tune your navigational experience. Pages that commonly lead users to use search would seem like pages that do not provide good access to the information users expect and they fall back to using search. (Of course, it might be that leading the user to search is part of the point of the page so keep that in mind.)
  • [Update 26 Jan 2009] Another metric to monitor – measure the ratio of searches performed each reporting period (week) to the number of visits for that same time period. This will give you a sense of how much search is used (in relation to navigation). I find that the absolute number is not as useful as tracking this over time: monitoring changes in this value can give you indicators of general issues with navigation (if the ratio goes up) or search (if the ratio goes down). Does anyone know of any benchmarks in this area? I do not, but am interested in understanding if there’s a generally-accepted range for this that is judged “acceptable”. In the case of our solution, when I first started tracking this, it was just under 0.2 and it has seen a pretty steady increase over the years to a pretty steady value of about 0.33 now.
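
The searches-per-visit ratio just described is simple to compute once you have both weekly series; the weekly counts here are made up for illustration.

```python
def searches_per_visit(weekly_searches, weekly_visits):
    """Ratio of searches to site visits for each reporting period (week)."""
    return [s / v for s, v in zip(weekly_searches, weekly_visits)]

# Hypothetical weekly totals from the search log and the analytics tool
searches = [24000, 25500, 26100]
visits = [120000, 121000, 118000]

ratios = searches_per_visit(searches, visits)
# Watch the trend: a rising ratio hints at navigation issues,
# a falling one at search issues
```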

A second step would be to review your web analytics report for the most highly used content on your site. For the most highly utilized targets, determine what are the obvious searches that should expose those targets and then try those searches out and see where the highly used targets fall in the results.

  • Do they show as good results? If not, ensure that the targets are actually included in your search and review the content, titles and tags. You might need to also tweak synonyms to ensure good coverage.
  • You should also review the most highly used content as reported by your web analytics tool against your “best bets” (if you use them). Does the most popularly accessed content show up in best bets?
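
Checking where highly-used targets fall in the results, as described above, can be scripted once you can pull a ranked result list for a query; the result list and page URLs below are hypothetical.

```python
def rank_of_targets(results, top_pages):
    """Where each highly-used page (per web analytics) lands in a ranked
    search results list (1-based), or None if it does not appear at all."""
    return {
        page: (results.index(page) + 1 if page in results else None)
        for page in top_pages
    }

# Hypothetical ranked results for one top query, plus the pages your
# analytics tool reports as most used for that topic
results = ["/benefits/overview.html", "/benefits/faq.html", "/news/2009.html"]
top_pages = ["/benefits/faq.html", "/benefits/enroll.html"]

ranks = rank_of_targets(results, top_pages)
# A None here is the coverage red flag: a popular page missing from the index
```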

Another fruitful area to explore is to consider what people actually use from search results after they’ve done a search (do they click on the first item, second? what is the most common target for a given keyword? Etc.). I’ll post about this separately.

I’m sure there are other areas that could be explored here – please share if you have some ideas.

Categorizing your searches

When I first got involved in supporting a search solution, I spent some time understanding the reports I got from my search engine. We had our engine configured to provide reports on a weekly basis and the reports provided the top 100 searches for the week. All very interesting and as we started out, we tried to understand (given limited time to invest) how best to use the insight from just these 100 searches each week.

  • Should we review the results from each of those 100 searches and try to make sure they looked good? That seemed like a very time intensive process.
  • Should we define a cut off (say the top 20)? Should we define a cutoff in terms of usage (any search that was performed more than N times)?
  • What if one of these top searches was repeated? How often should we re-review those?
  • How to recognize when a new search has appeared that’s worth paying attention to?

We quickly realized that there was no really good, sustainable answer, and this was compounded by the fact that the engine reported two searches as different if there was *any* difference between them (even something as simple as a case difference, even though the engine itself does not consider case when doing a search – go figure).

In order to see the forest for the trees, we decided what would be desirable is to categorize the searches – associate individual searches with a larger grouping that allows us to focus at a higher level. The question was how best to do this?

Soon after trying to work out how to do this, I attended Enterprise Search Summit West 2007 and attended a session titled “Taxonomize Your Search Logs” by Marilyn Chartrand from Kaiser Permanente. She spoke about exactly this topic, and, more specifically, the value of doing this as a way to understand search behavior better, to be able to talk to stakeholders in ways that make more sense to them, and more.

Marilyn’s approach was to have a database (she showed it to me and I think it was actually in a taxonomy tool but I don’t recall the details – sorry!) where she maintained a mapping from individual search terms to the taxonomy values.

After that, I started working on the same type of structure and have made good headway. Further, I’ve also managed to capture every single search (not just the top N) into a SQL database so that it’s possible to view the “long tail” and categorize that as well. I still don’t have a good automated solution to anything like auto-categorizing the terms, but the level of re-use from one reporting period to the next is high enough that dumping in a new period’s data requires categorization of only part of the new data. [Updated 26 Jan 2009 to add the following] Part of the challenge is that you will likely want to apply many of the same textual conversions to your database of captured searches that are applied by your search engine – synonyms, stemming, lemmatization, etc. These conversions can help simplify the categorization of the captured searches.
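
A minimal sketch of that kind of textual conversion: lower-casing, whitespace collapsing, and a synonym substitution applied before the term is looked up in the category mapping. The synonym map here is hypothetical, standing in for whatever your engine applies.

```python
def normalize(term, synonyms=None):
    """Collapse trivially different searches (case, extra whitespace, synonyms)
    so one category mapping can cover all their variants."""
    words = term.lower().split()
    if synonyms:
        words = [synonyms.get(w, w) for w in words]
    return " ".join(words)

# Hypothetical synonym map mirroring the engine's own conversions
synonyms = {"e-mail": "email", "gw": "groupwise"}

canonical = normalize("  GW   E-mail Setup ", synonyms)
# "GW E-mail Setup" and "groupwise email setup" now map to the same key
```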

Anyway – the types of questions this enables you to answer and why it can be useful include:

  • What are the most-used categories of content for your search users?
    • How does this correlate with usage (as reported in your web analytics solution) for that same category?
    • If they don’t correlate well, you may have a navigational issue to address (perhaps raising the prominence of a category that’s overly visible in navigation or lowering it).
    • Review the freshness of content in those categories and work with content owners to ensure that content is kept up to date. I’ve found it very useful to be able to talk with content owners in terms like “Did you know that searches for your content constitute 20% of all searches?” If nothing else, it helps them understand the value of their content and why they should care about how well it shows up in search results! Motivate them to keep it up to date!
  • Assuming you categorize your searches based on your taxonomy, this can also feed back into your taxonomy management process as well! Perhaps you can identify taxonomic terms that should be retired or collapsed or split using insights from predominance of use in search.
  • Within the categorization of search terms, you can correlate the words used to identify the most common “secondary” words in the searches. An example – GroupWise is a product made and sold by my employer. It is also a common search target, so a lot of searches include the word groupwise (I use the presence of that single keyword as a way to pseudo-automatically assign searches to a category). Most of those searches, though, include other words. What are the most common words (other than groupwise) among searches that are assigned to the GroupWise category?
    • This insight can help you tune your navigation – common secondary words represent content that a user should have access to when they are looking at a main page (assuming one exists) for that particular category. If the most common secondary word for GroupWise were documentation, say, providing direct access to product documentation would be appropriate.
    • You can also use that insight to feed back into your taxonomy (specifically, you might be able to find ways to identify new sub-terms in your taxonomy).
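
The single-keyword categorization and secondary-word counting described above might be sketched like this; the searches and the category-to-keyword mapping are made-up examples.

```python
from collections import Counter

def categorize_and_secondary(searches, category_keywords):
    """Assign each search to categories by keyword presence and count the
    other ('secondary') words used alongside each category keyword."""
    secondary = {cat: Counter() for cat in category_keywords}
    for term in searches:
        words = term.lower().split()
        for cat, keyword in category_keywords.items():
            if keyword in words:
                secondary[cat].update(w for w in words if w != keyword)
    return secondary

# Hypothetical search log sample
searches = [
    "groupwise documentation",
    "GroupWise install guide",
    "groupwise documentation download",
    "holiday policy",
]

result = categorize_and_secondary(searches, {"GroupWise": "groupwise"})
# result["GroupWise"].most_common() surfaces the top secondary words
```

If “documentation” tops the secondary list, that is the signal that direct access to product documentation belongs on the category’s main page.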

Analytics on the search terms / words

Another useful type of analysis you can perform on search data is to look at simple metrics of the searches. Louis Rosenfeld identified several of these – I’m including those here and a few additional thoughts.

  • How many words, on average, are in a search? What is the standard deviation? This insight can help you understand how complex your users’ searches are. I don’t know what a benchmark is, but I find in our search solution it averages just over 2 words per search. This indicates to me that the average search is very simple, so expectations are high on the search engine’s ability to take those 2 words and provide a good result.
    • You can also monitor this over time and try to understand if it changes much and, if so, analyze what has changed.
  • While not directly actionable, another good view of this data is to build a chart of the number of searches performed for each count of words. The chart below shows this for a long period of use on our engine. You can see that searches with more than 10 words are vanishingly rare. After the jump from 1 word to 2 words, it’s almost a steady decline, though there are some anomalies in the data where certain longer lengths jump up from the previous count (for example, 25-word searches are more than twice as common as 24-word searches). The absolute numbers of these are very small, though, so I don’t think it indicates much about those particular lengths.
Chart of Searches per Word Count

  • You can also look at the absolute length of the search terms (effectively, the number of characters). This is useful to review against your search UI (primarily, the ever-present search box you have on your site, right?). Your search box should be large enough to ensure that a high percentage (90+%) of searches will be visible in the box without scrolling.
    • I did this analysis and found that our search UI did exactly that.
    • I also generated a chart like the one above where the X axis was the length of the search and found some obvious anomalies in our search – you can see them in the chart below.
    • I tried to understand the unexpected spike in searches of length 3 and 4 compared to the more regular curve and found that it was caused by a high level of usage of (corporate-specific) acronyms in our search! This insight led me to realize that we needed to expand our synonyms in search to provide more coverage for those acronyms, which were commonly the acronyms for internal application names.
Chart of Search Length to number of searches
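
Both distributions charted above (searches per word count and per character length) come from the same simple pass over the log; the sample searches here are hypothetical.

```python
from collections import Counter

def search_distributions(searches):
    """Histograms of searches by word count and by character length,
    the raw data behind charts like the ones shown above."""
    by_words = Counter(len(term.split()) for term in searches)
    by_chars = Counter(len(term) for term in searches)
    return by_words, by_chars

# Hypothetical search log sample
searches = ["hr", "holiday policy", "groupwise", "take your kids to work"]

by_words, by_chars = search_distributions(searches)
# Spikes at short character lengths (like the 2-character "hr" here)
# can point to heavy acronym use worth covering with synonyms
```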

Network Analysis of Search Words

Another interesting view of your search data is hinted at by the discussion above of “secondary” search words – words that are used in conjunction with other words. I have not yet managed to complete this view (lack of time and, frankly, the volume of data is a bit daunting with the tools I’ve tried).

The idea is to parse your searches into their constituent words and then build a network between the words, where each word is a node and the links between the words represent the strength of the connection between them – where “strength” is the number of times the two words appear in the same searches.

Having this available as a visual tool to explore words in search seems like it would be valuable as a way to understand their relationships and could give good insight on the overall information needs of your searchers.
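
Computing the edge weights for such a network is straightforward even if visualizing it is not; this is a sketch over a made-up handful of searches.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(searches):
    """Edge weights for a word network: each pair of distinct words appearing
    in the same search strengthens the link between them."""
    edges = Counter()
    for term in searches:
        words = sorted(set(term.lower().split()))
        for a, b in combinations(words, 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical search log sample
searches = [
    "groupwise documentation",
    "groupwise install",
    "groupwise documentation pdf",
]

edges = cooccurrence_edges(searches)
# edges.most_common() gives the strongest word pairs; the (word, word) -> weight
# mapping can be exported to a graph tool for visual exploration
```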

The cost (in my own time if nothing else) of taking the data and manipulating it into a format that could then be explored this way, however, has been high enough to keep me from doing it without some more concrete ideas for what actionable steps I could take from the insight gained. I’m just not confident that it would expose much more than “the most common words tend to be used together most commonly”.

Closing thoughts

I’m missing a lot of interesting additional types of analyses above – feel free to share your thoughts and ideas.

In my next post, I’ll explore in some more detail the insights to be gained from analyzing what people are using in search results (not just what people are searching for).

Search Analytics – Basic Metrics

Tuesday, January 20th, 2009

In my first few posts (about a year ago now), I covered what I call the three principles of enterprise search – coverage, identity, and relevance. I have posted on enterprise search topics a few times in the meantime and wanted to return to the topic with some thoughts to share on search analytics and provide some ideas for actionable metrics related to search.

I’m planning 3 posts in this series – this first one will cover some of what I think of as the “basic” metrics, a second post on some more advanced ideas and a third post focusing more on metrics related to the usage of search results (instead of just the searching behavior itself).

Before getting into the details, I also wanted to say that I’ve found a lot of inspiration from the writings and speaking of Louis Rosenfeld and also Avi Rappoport and strongly recommend you look into their writings. A specific webinar to share with you, provided by Louis, is “Site Search Analytics for a Better User Experience“, which Louis presented in a Search CoP webcast last spring. Good stuff!

Now onto some basic metrics I’ve found useful. Most of these are pretty obvious, but I guess it’s good to start at the start.

  • Total searches for a given time period – This is the most basic measure – how much is search even used? This can be useful to help you understand if people are using the search more or less over time.
    • In terms of actionable steps, if you pay attention to this metric over time, it can tell you, at a high level, whether users are finding navigation useful. Increasing search usage can point to the need to improve navigation – perhaps indicating the need for a better navigational taxonomy – so look at whether highly-sought content has clear navigation and labeling.
  • Total distinct search terms for a given time period – Of all of the searches you are measuring with the first metric, how many are unique combinations of search criteria (note: criteria may include both user-entered keywords and also something like categories or taxonomy values selected from pick lists if your search supports that)? If you take the ratio of total searches to distinct searches, you can determine the average number of times any one search term is used.
    • In terms of taking action on this, there is not much new to this metric compared to total searches, but the value I find is that it seems to be a bit more stable from period to period.
    • Monitoring the ratio over time is interesting (in my experience, ours tends to run about 1.87 searches / distinct search and variations seem small over time). Not sure what a benchmark should be. Anyone? Understanding and comparing to benchmarks would probably suggest some more concrete actions.
  • Total distinct words for a given time period and average words per search – take the previous metric and pull apart individual search terms (or user-selected taxonomic values) and get down to the individual words.
    • This view of the data helps you understand the variety of words in use throughout search. Often, I find that understanding the most common individual words is more useful than the top searches.
    • In terms of action, again, not much new here other than comparing to the total searches to find ways to understand search usage.
    • I’m also interested in whatever benchmarks anyone else knows of in this area – again, I think comparing to benchmarks could be very useful. Just to share from my end, here are what I see (looking at these values week by week over a fairly long period):
      • Average words per search: 2.02. Maximum (of weekly averages) was 2.16 and minimum (of weekly averages) was 1.84. So pretty stable. So, on average, most searches use two words.
      • Average uses of each word (during any given week): 4.95. Maximum (of weekly averages) was 5.69 and minimum (of weekly averages) was 2.93. So a much wider variance than we see in words per search.
  • (The most obvious?) Top N searches for a given time period – I typically look at weekly data and, for this metric, I most commonly look at the top 100 searches and focus on about the top 20. Actions to take:
    • Ensure that common searches return decent results. If a search does not show good results, what’s causing it to show up as a common search (it would seem that users are unlikely to find what they need)? If it does show what appear to be good results, does this expose specific issues with navigation (as opposed to the general issues observable from the metrics listed above)?
    • If a search shows up that hasn’t been in the top of the list, does that represent something new in your users’ work that they need access to? Perhaps some type of seasonal (annual or maybe monthly) change?
  • Trending of all of the above – More useful than any of the above metrics as single snapshots for a given time period (which is what it seems like many engines will provide out of the box) is the ability to view trends over longer periods. Not just the ability to view the above metrics over longer periods but the ability to see what the metrics were, say, last week and compare those to the week before, and the week before that, etc.
    • I’ve mentioned a few of these, but comparing how the trend is changing of how many searches are performed each week (or month or quarter) is much more useful than just knowing that data point during any given time period.
    • One of the challenges I’ve had with any of the “Top N” type metrics (searches, words, etc.) is the ability to easily compare and contrast the top searches week to week – being able to compare in an easily-comprehended manner what searches have been popular each week (or month) over, say, a few month (quarter) period helps you know if any particular common search is likely a single spike (and likely not worth spending time on improving results for) or an indication of a real trend (and thus very worthwhile to act on). I have ended up doing a good bit of manual work with data to get this insight – anyone know of tools that make it easier?
  • Top Searches over time – another type of metric I’ve spent time trying to tweak is to understand what makes a “top search over an extended period of time”. This is similar to understanding and reviewing trends over time but with a twist.
    • Let’s say that you gather weekly reports and you have access to the data week by week over a longer period of time (let’s say a year).
    • The question is – over a longer time period, what are the searches you should pay attention to and actively work to improve? What is a “top search”?
    • A first answer is to simply count the total searches over that year and whichever searches were most commonly used are the ones to pay attention to.
    • What I’ve found is that using that definition can lead to anomalous situations like a search that is very popular for one week (but otherwise perhaps doesn’t appear at all) could appear to be a “top search” simply because it was so popular that one week.
      • To address this, what I do is impose a minimum threshold on the number of reporting periods (weeks in my case) in which a search needs to be a top search in order to count as a top search for the longer time period. The ratio I use is normally 25% – so a term needs to appear as a top search in at least 25% of the weeks under review to be considered at all. Within that subset of popular searches, you can then count the total searches.
      • Alternately, if you can, massage your data to include the total searches (over the longer time period) and total reporting periods in which the search occurs as two distinct columns and you can sort / filter the data as you wish.
      • The important thing is to recognize that if you’re looking to actively work on improving specific searches, you need to focus your (limited, I’m sure!) time on those searches that warrant your time, not find yourself spending time on a search that only appears as a popular search in one reporting period.
    • On the other hand, a search that might not be a top N search any given week could, if you look at usage over time, be stable enough in its use that over the course of a longer period it would be a top search.
      • This is the inverse of the first issue. In this case, the key issue is that you will need access over longer periods of time to all of the search terms for each reporting period – not just the top searches. Depending on your engine, this data may or may not be available.
  • Another important dimension you should pay attention to when interpreting behavior is seasonality. You should compare your data to the same period a year ago (or quarter ago or maybe month ago, depending on your situation) to see if there are terms that are popular only at particular times.
    • An example on our intranet is that each year you can see the week before and of the “Take your Kids to Work” program, searches on ‘kids to work’ goes through the roof and then disappears again for another year. Also, at the end of each year, you see searches on “holidays” go way up (users looking for information on what dates are company holidays and also about holiday policy).
    • This insight can help you anticipate information needs that are cyclical, which could mean ensuring that new content for the new cycle (say we had a new site for the Kids to Work program each year, though I’m not sure if we do) shows well for searches that users will use to find it.
    • It also helps you understand what might be useful temporary navigation to provide to users for this type of situation. Having a link from your intranet home page to your holiday policies might not be useful all of the time but if you know that people are looking for that in late November and December, placing a link to the policies for that period can help your users find the information they need.
  • Another area of metrics you need to pay attention to is “not found” searches and error searches.
    • What percentage of searches return no results in your reporting periods? How is that changing? If it’s going up, you seem to have a problem. If it’s stable, is it higher than it should be?
    • What are the searches that users are most commonly doing that are resulting in no results being found? Focus on those and work to ensure whether it’s a content issue (not having the right content) or perhaps a tagging issue (the users are not using expected words to find the content).
    • The action you take will depend on the percentage of not found results and also on the value of losing users on those not found.
      • On an e-commerce site, each potential customer you lose because they couldn’t find what they were looking for represents hard dollars lost.
      • On an intranet, it is harder to directly tie a cost to the not found search but if your percentage is high, you need to address it (improving coverage or tagging or whatever is necessary).
      • A relatively low “not found” percentage might not indicate a good situation – it might also simply reflect a very large corpus of content in which just about any words a user might use will get some kind of result, even if it’s not a useful result. More about that in my next post.
        • I’m not sure what a benchmark is for high or low percentage of not found, exactly. Does anyone know of any resource that might provide that?
        • On our intranet search, this metric has been very stable at around 7-8% over a fairly extended time period. That is not high enough to warrant general concern, though I do look for whether there are any common searches in this and there actually does not seem to be – individual “not found” results are almost always related to obvious misspellings and our engine provides spelling correction suggestions so it’s likely that when a user gets this, they click on the (automatically provided) link to see results with the corrected spelling and they (likely) no longer get the “no results” result.
    • Customizing your search results page for not-found searches can be useful, and providing alternate searches (based on the user’s search criteria) there is very useful, though it might be a very challenging effort.
    • What might trigger an “error search” will depend on your engine (some engines may be very good at handling errors and controlling resources so as to effectively never return an error unless the engine is totally offline – in which case, it’s not too likely you’ll capture metrics on searches at all). Also, whether these are reported in a way that you can act on will depend on your engine. If they are, I think of them as very similar to “not found” searches. You should understand their percentage (and whether it’s going up, down or is stable), what keywords trigger errors, etc. Modify your engine configuration, content or results display as possible to deal with this.
      • An example: With the engine we use, the engine tries to ensure that single searches do not cause performance issues so if a search would return too many results (what is considered “too many” is configurable but it is ultimately limited), it triggers an “error” result being returned to the user. I was able to find the searches that trigger this response and ensure that (hand-picked) items show up in the search results page for any common search that triggers an error.
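
The basic metrics above (total searches, distinct terms, uses per distinct term, average words per search) all fall out of one pass over a period’s search log; the tiny log here is hypothetical.

```python
from collections import Counter

def basic_metrics(searches):
    """Core per-period search metrics from a raw list of search terms."""
    terms = Counter(searches)
    total = len(searches)
    distinct = len(terms)
    word_counts = [len(t.split()) for t in searches]
    return {
        "total_searches": total,
        "distinct_terms": distinct,
        "searches_per_distinct_term": total / distinct,
        "avg_words_per_search": sum(word_counts) / total,
    }

# Hypothetical one-week search log
log = ["holiday policy", "holiday policy", "groupwise", "expense report form"]

m = basic_metrics(log)
# Computing this weekly and charting the series gives the trending view
# discussed above
```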
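
The 25%-of-periods threshold described above for separating genuinely persistent top searches from one-week spikes might be sketched as follows; the eight weeks of counts are made up, with “flash sale” playing the one-week spike.

```python
from collections import Counter

def top_searches_over_time(weekly_counts, min_period_ratio=0.25):
    """Total use of each search across all periods, keeping only searches
    that appear in at least `min_period_ratio` of the reporting periods."""
    periods = len(weekly_counts)
    totals, appearances = Counter(), Counter()
    for week in weekly_counts:
        totals.update(week)
        appearances.update(week.keys())
    threshold = periods * min_period_ratio
    return {t: totals[t] for t in totals if appearances[t] >= threshold}

# Hypothetical weekly term -> count reports over eight weeks
weeks = [
    {"holidays": 40, "flash sale": 500},
    {"holidays": 35},
    {"holidays": 50},
    {"holidays": 45},
    {"holidays": 38},
    {"holidays": 42},
    {"holidays": 47},
    {"holidays": 41},
]

persistent = top_searches_over_time(weeks)
# "flash sale" is dropped despite its huge one-week total; "holidays" survives
```

This mirrors the trade-off above: raw totals alone would rank the spike first, while the period threshold focuses your limited tuning time on searches that recur.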

That’s all of the topics I have for “basic metrics”. Next up, some ideas (along with actions to take from them) on more complex search metrics. Hopefully, you find my recommendations for specific actions you can take on each metric useful (as they do tend to make the posts longer, I realize!).

Additional Community Metrics

Tuesday, November 25th, 2008

My last several posts have been focused on various aspects of community metrics – primarily those derived from the use of a particular tool (mailing lists) used within our communities. While quite fruitful from an analysis perspective, these are not the only metrics we’ve looked at or reported on. In this post, I’ll provide some insights on other metrics we’ve used in case they might be of interest.

Before going on, though, I also wanted to highlight what I’ve found to be an extremely thorough and useful guide covering KPIs for knowledge management from a far more general perspective than just communities – How to Use KPIs in Knowledge Management by Patrick Lambe. I would highly recommend that anyone interested in measuring and evaluating a knowledge management program (or a community of practice initiative specifically) read this document for an excellent overview for a variety of areas. Go ahead… I’ll wait.

OK – Now that you’ve read a very thorough list, I will also direct you to the blog of Miguel Cornejo Castro, who has published on community metrics. I know I’ve seen his paper on this before, but in digging just now I could not come up with a link to it. Hopefully, someone can provide a pointer.

UPDATE:  Miguel was kind enough to provide the link to the paper I was recalling in my mention above: The Macuarium Set of CoP Measurements.  Thanks, Miguel!

If you can provide pointers to additional papers or writings on metrics, please comment here or on the com-prac list.

With that aside, here are some of the additional metrics we’ve used in the past (when we were reporting regularly on the entire program, it was generally done quarterly to give you an idea of the span we looked at each time we assembled this):

  • Usage of intranet-based web sites – specifically, site visits and hits on a community’s site as tracked by our web analytics solution;
  • Intellectual assets produced – specifically, tracking those produced (or significantly updated) and published via one of our repositories;
  • Number of “anecdotes” captured for community members – that is, the one-off “pats on the back” that community members receive – this attempted to capture some of the softer aspects of community value;
  • Number of knowledge share events held – many communities commonly host virtual events (using one of several different webcasting tools) and we tracked those as well as any in-person events;
  • Attendance at community knowledge share events and playback of recordings of webcasts – an attempt to capture how impactful the events were on members;
  • White papers produced – a specific drill into the intellectual assets;
  • For most of these, we also provided insights on quarter-to-quarter change within communities and for the community of practice program overall to give community sponsors / leaders insight on which direction things were moving;
  • We also looked at our corporate wiki for some insights on a couple levels:
    • Using our community member lists, we knew who was a member of a community, so we could analyze content authoring within the wiki by that same group; this provided insight on how much community members contributed to this knowledge base;
    • Within our corporate wiki, authors have the ability to assign articles to categories; one set of such categories was the communities, so we reported on authoring activity and usage of wiki articles that were assigned a category corresponding to one of the communities of practice; this provided insight on the utility of and interest in knowledge associated with the communities.
  • And, finally, we also reported another “softer” piece of data, which was to allow the communities themselves to highlight specific events, results, or issues for the communities.

This is my last planned post on community metrics for now. I will likely return to the topic in the future. I hope the posts have been interesting and also have provided food for thought for your own community programs or efforts.

Visualizing Knowledge Flow in a Community

Friday, November 21st, 2008

In my last post, I described some ideas about how to get a sense of knowledge flow within a community using some basic metrics data you can collect. I thought it might be useful to provide a more active visualization of the data from a sample community. As always, data has been obfuscated a bit here but the underlying numbers are mostly accurate – I believe it provides a more compelling “story” of sorts to see data that at least approximates reality.

I knew that Google had provided its own visualization API which provides quite a lot of ways to visualize data, including a “Motion Chart” – which I’d seen in action before and found a fascinating way to present data. So I set about trying to determine a way to use that type of visualization with the metrics I’ve written about here.

The following is the outcome of a first cut at this (requires Flash):

This visualization shows each of the lists associated with a particular community as a circle (if you hover over a circle, you’ll see a pop-up showing that list’s name – you can click on it to have that persist and play with the “Trails” option as well to see the path persist).

The default options should have “Cumulative Usage” on the Y axis, Members on the X axis, “Active Members” as the color and “Usage” as the size.

An interpretation of what you’re seeing – once you push play, lists will move up the Y axis as their total “knowledge flow” grows over time. They’ll move right and left as their membership grows / shrinks. The size of a circle reflects the “flow” at that time – so a large circle also means the circle will move up the Y axis.
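To make the playback concrete, here is a sketch (in Python, with entirely hypothetical list names and numbers) of how the per-month data rows behind such a chart might be assembled – each list contributes one row per month, with cumulative usage driving the Y axis:

```python
from itertools import accumulate

# Hypothetical per-month observations for one list:
# (month, members, active_members, usage)
observations = [
    ("2005-09", 40, 8, 120),
    ("2005-10", 55, 12, 300),
    ("2005-11", 60, 10, 180),
]

# Running total of usage: the "Cumulative Usage" plotted on the Y axis.
usage_by_month = [u for _, _, _, u in observations]
cumulative = list(accumulate(usage_by_month))

# One row per month: (month, members, active, usage, cumulative usage).
rows = [
    (month, members, active, usage, cum)
    for (month, members, active, usage), cum in zip(observations, cumulative)
]
print(rows[-1])  # ('2005-11', 60, 10, 180, 600)
```

Each row becomes one point on the timeline for that list’s circle: X from members, size from usage, Y from the cumulative column.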

It’s interesting to see how a list’s impact changes over time – if you watch the list titled “List 9” (which appears about Sept 05 in the playback), you’ll see it has an initial surge and then its impact just sort of pulsates over the next few years. Its final position is higher up than “List 7” (which is present since the start) but you can see that List 7 does see some impact later in the playback.

You can also modify which values show in which part of this visualization – if you try some other options and can produce something more insightful, please let me know!

I may spend some time looking at the other visualization tools available in the Google Visualization API and see if they might provide value in visualizing other types of metrics we’ve gathered over time. If I find something interesting, I’ll post back here.



Measuring Knowledge Flow within a Community of Practice

Thursday, November 20th, 2008

In my series on metrics about communities of practice, I’ve covered a pretty broad range of topics around measuring, understanding and acting on community membership and activity.

In this post, I’ll slightly change gears and present some thoughts on a more research-like use of this data. First, an introduction to what drove this thinking.

“Why do we need to provide navigation to communities? There’s nothing going in them anyway!”

A few years back, as we were considering some changes in the navigational architecture on our intranet, I heard the above statement and it made me scratch my head. What did this person mean – there is nothing going on in communities? There sure seemed to be a lot of activity that I could see!

A quick bit of background: Though I have not discussed much about our community program outside of the mailing lists, every community had other resources that they utilized – one of the most common being a site on our intranet. On top of that, at the time of the discussion mentioned above, communities actually had a top spot in the global navigation on our intranet – which provided the typical menu-style navigation to top resources employees needed. One of the top-level menus was labeled “communities” and, as sub-menu items, it included a subset of the most strategic / active communities. This was a very nice and direct way to guide employees to these sites (and through them to the other resources available to community members like the mailing lists I’ve discussed).

Back to the discussion at hand – As we were revisiting the navigational architecture, one of the inputs was usage of the various destinations that made up the global navigation. We have a good web analytics solution in place on our intranet (the same we use on our public site) so we had some good insight on usage and I could not argue the point – the intranet sites for the communities simply did not get much traffic.

As I considered this, a thought occurred to me – what we were missing is that we had two distinct ways of viewing “usage” or “activity” (web site usage and mailing list membership / activity) and we were unable to merge them. An immediate question occurred to me – what if, instead of a mailing list tool, we used an online forum tool of some sort (say, phpBB or something similar)? Wouldn’t that merge these two factors? The acts of posting to a forum and reading it would immediately become distinct web-based activities that we could measure, right?

Given the history of mailing list usage within the company, I was not ready to seriously propose that kind of change, but I did set out to try to answer the question – Can we somehow compare mailing list activity to web site usage to be able to merge together this data?

The rest of this post will discuss how I went about this and present some of the details behind what I found.

The Basic Components

The starting point for my thinking was that the rough analogy to make between web sites and mailing lists is that a single post to a mailing list could be thought of as equivalent to a web page. The argument I would make is that (of course, depending on the software used), for a visitor to read a single post using an online forum tool, they would have to visit the page displaying that post. So our first component is

Pc = the number of posts during a given time period for a community

In reality, many tools will combine a thread’s posts into a single page (or, at least, display them on fewer than one page per post). If you make an assumption that within a community there’s likely an average number of posts per thread, we could define a constant representing that ratio. So, define:

Rc = the ratio of posts per thread within a community for a given time period

Note that while I did not discuss it in the context of the review of activity metrics, it’s possible with the activity data we are gathering to identify threads, and so we can compute Rc.

Tc = total threads within a community for a given time period

Rc = Pc / Tc
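As a sketch of how Rc falls out of the activity data (assuming each post record carries a thread identifier; the record layout here is hypothetical, not our actual schema):

```python
# Hypothetical post records for one community in one period:
# each tuple is (post_id, thread_id).
posts = [
    (1, "t1"), (2, "t1"), (3, "t2"),
    (4, "t2"), (5, "t2"), (6, "t3"),
]

p_c = len(posts)                            # Pc: total posts in the period
t_c = len({thread for _, thread in posts})  # Tc: distinct threads
r_c = p_c / t_c                             # Rc: posts per thread

print(p_c, t_c, r_c)  # 6 3 2.0
```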

Now, how do we make an estimate of how many page views members would generate if they visited the forum instead of having posts show up in their mailbox? The first (rough, and quite poor) guess would be that every member would read every post. This is not realistic and to get an accurate answer would likely require some analysis directly with community members. That being said, I think, within a constant factor, the number of readers can be approximated by the number of active members within the community (it’s true that any active member can be assumed to have read at least some of the posts – their own). A couple more definitions, then:

Mc = the number of members of a community at a given time

Ac = the number of active members within a community for a given time period

In addition to assuming that active members represent a high percentage of readers, I wanted to reflect the readership (which is likely lower) among non-active members (AKA “lurkers”). We know the number of lurkers for a given time period is:

Lc = the number of lurkers within a community over a given time period = (Mc – Ac)

So we can define a factor representing the readership of these lurkers

PRc = the percent of lurkers who would read posts during a given time period (PR means “passive reader”)

Can we approximate PRc for a community from data we are already capturing? At the (fuzzy) level of this argument, I would think that the ratio of active to total members is probably echoed within the lurker community, so we can use it to estimate the proportion of lurkers who will read any given post in detail:

PRc ~= Ac / Mc

The Formula

So, with the basic components defined above, the formula that I have worked out for computing a proxy for web site traffic from mailing lists becomes:

Uc = the “usage” of a community as reflected through its mailing list

= Pc * (Ac + PRc * Lc) / Rc

= Pc * (Ac + Ac / Mc * Lc) / Rc

= Pc * (Ac + Ac / Mc * (Mc – Ac)) / Rc

= (2 * Pc * Ac – Pc * Ac² / Mc ) / (Pc / Tc)

= (2 * Ac * Tc – Ac² * Tc / Mc)
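A minimal Python sketch of the final formula, assuming you already have Tc, Ac and Mc for the period (the function name and sample numbers are mine, not part of the original analysis):

```python
def usage(t_c: int, a_c: int, m_c: int) -> float:
    """Proxy for web-site "usage" of a community's mailing list.

    t_c: threads in the period, a_c: active members, m_c: total members.
    Implements Uc = 2 * Ac * Tc - Ac^2 * Tc / Mc from the derivation above.
    """
    if m_c == 0:
        return 0.0
    return 2 * a_c * t_c - (a_c ** 2) * t_c / m_c

# Example: 50 threads, 10 active members out of 100 total.
print(usage(50, 10, 100))  # 950.0
```

Note that since Ac <= Mc, the subtracted term never exceeds Ac * Tc, so the value stays positive whenever there is any activity.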

So with that, we have a formula which can help us relate mailing list activity to web site usage (up to some perhaps over-reaching simplifications, I’ll admit!). All of these factors are measurable from the data we are collecting and so I’ll provide a couple of sample charts in the next section.

Some Samples

Here are a few samples of measuring this “usage” over a series of quarters in various communities.

As you will see in the samples, this metric shows a wide variance in values between communities, but relative stability of values within a community.

Small Community Usage Metric

Small Community Usage Metric

The first sample shows data for a small community. As before, I have obfuscated the data a bit, but you can see a big jump early in the lifecycle and then an extended period of low-level usage. The spike represents the formal “launch” of the community, when a first communication went out to potential members and many people joined. The drop-off to low-level usage shown here represents, I believe, a challenge for the community to address and to make the community more vital (of course, it could also be that other ways of observing “usage” of the community might expose that it actually is very vital).

The second sample shows data for a large, stable community – you’ll note that the computed value for “usage” is significantly higher here than in the above sample (in the range of around 30,000-40,000, as opposed to the range of 500-1,000 that the small community stabilized around).

Large Community

Large Community

How does this relate to the title of this post?

Well, after putting the above together, I realized that if you ignore the Rc factor (which converts the measurement of these “member-posts” into a figure purportedly comparable to web page views), you get a number that represents how much of an impact the flow of content through a mailing list has on its members – indirectly, a measure of how much information or knowledge could be passing through a community’s members.

The end result calculation would look something like:

Kc = the knowledge flow within a community for a given period

= (2 * Pc * Ac – Pc * Ac² / Mc )

This concept depends on making the (giant) leap that the “knowledge content” of a post is equivalent across all posts, which is obviously not true. For the intellectual argument, though, one could introduce a factor that could be measured for each post and replace Pc (which has the effect of treating the knowledge content of a post as “1”) with the sum of that evaluation of each post across a community (where each post is scored 0-1 on a scale representing that post’s “knowledge content”).

I have not done that analysis, however (it would be a very subjective and manually intensive task!), and, within an approximation that’s probably no less accurate than all of the assumptions above (said with appropriate tongue-in-cheek), I would say that one could argue that you could multiply Kc by a constant factor (representing the average knowledge content of a community) and have the same effect.

Further, if you use this calculation primarily to compare a community with itself over time, you will likely find that the constant factor does not change over time and you can simply remove it from the calculation (again, with the qualifier that you can then only compare a community to itself!) and you are left with the above definition of Kc.
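Kc is just as easy to compute as Uc; a sketch, again with hypothetical numbers, that tracks a single community across successive quarters (the only comparison the constant-factor argument above supports):

```python
def knowledge_flow(p_c: int, a_c: int, m_c: int) -> float:
    """Kc = 2 * Pc * Ac - Pc * Ac^2 / Mc: "member-posts" flow for one period.

    p_c: posts in the period, a_c: active members, m_c: total members.
    """
    if m_c == 0:
        return 0.0
    return 2 * p_c * a_c - p_c * (a_c ** 2) / m_c

# Hypothetical (Pc, Ac, Mc) for three consecutive quarters of one community.
quarters = [(100, 10, 100), (120, 15, 110), (90, 12, 115)]
flows = [knowledge_flow(p, a, m) for p, a, m in quarters]
print(flows[0])  # 1900.0
```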

Validating this Analysis

So far, I’ve provided a fairly complicated description of this compound metric and a couple of sample charts that show this metric for a couple of sample communities. Some obvious questions you might be asking:

  • What’s the value in this metric? Is it actionable?
  • How valid is this metric in the sense of really reflecting “usage” (much less any sense of “knowledge flow”)?

To be honest, so far, I have not been very successful in answering these questions. In terms of being actionable – using this data might lend itself to the types of actions you take based on web analytics; however, there is not an obvious (to me) analog to the conversion that is a fundamental component of web analytics. It seems more like an after-the-fact measure of what happened than a forward-looking tool that can help a community manager or community leader focus the community.

In terms of validity, I’m not sure how to go about measuring whether this metric is “valid”. Some ideas that come to my mind at least to compare this to include:

  • Comparing this metric to the actual usage of a community’s web site (via our web analytics tool); do they correlate in some way?
  • Comparing this compound metric to the simpler metric of posts to the community’s mailing lists – how do these compare and why does (or does not) this compound metric provide any better insight?
  • Taking a different approach to this formula – I think understanding how this metric changes as you hold some parts constant and change others would help understand what it “means”.
    • For example, if membership and posts remain the same, but the # of different posters changes, what happens?
    • If posts and active members change but total membership stays constant, what happens?
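For the first of those ideas – correlating the computed usage against actual web analytics – a plain Pearson correlation over the quarterly series would be a reasonable starting point. A sketch with entirely made-up numbers:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarterly series: computed Uc vs. actual site visits
# from the web analytics tool.
uc_series = [950, 1200, 1100, 1400]
site_visits = [400, 520, 480, 600]
print(round(pearson(uc_series, site_visits), 3))  # ≈ 0.997
```

A consistently high correlation across many communities would suggest the compound metric is at least tracking the same underlying behavior as site usage; a low one would be a hole worth poking at.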

I’d be very happy to hear from someone who might have some thoughts on how to validate this metric or (perhaps even better) poke holes in it and point out its failings.

Summing Up

Whew! If you’re still with me, you are a brave or stubborn soul! A few thoughts on all of this to summarize:

  • I do believe that this type of analysis could be useful to understand the flow through a community over time; I think it needs significantly more research to get to a better formula, though the outline above could be a starting point;
  • I have not been able to really validate the ideas expressed here in any way except intuitively, so take with an appropriate grain of salt;
  • I think this type of analysis could also be applied in a variety of other contexts – use of a community Wiki, use of a community blog, attendance at “physical space” meetings, attending virtual knowledge share events, use of community workspaces, etc.; I have not tried this, yet, though;
  • With that last comment in mind, I believe that a key idea here is that this type of compound metric provides an avenue to combine the measurement of knowledge sharing across all of a community’s avenues – raising the possibility of providing something like a “Dow Jones Index” for a community’s knowledge sharing – perhaps collapsing down to a single, measurable quantity that you can track over time.
    • And, yes, I do recognize that such a metric is, at best, on shaky ground and likely not really supportable. I raise this idea because I was once asked to generate a single “knowledge sharing index” that would cover the corporation and this type of analysis could lead in that direction. (For the record, when faced with that question, we resisted spending time on it.)

Community of Practice Metrics and Membership, Part 5 – Performance Management

Friday, November 14th, 2008

My recent posts have been quite long and detailed with examples in terms of how we have been able to understand and analyze community membership and activity for our community of practice initiative. This post is less focused on numbers and more focused on a particular use of this data in a more strategic manner.

Performance Management

Within my employer, we have a (probably pretty typical) performance management program intended to address both career development (a long term view – “what do you want to be when you grow up?”) and also performance (the shorter term view – “what have you done for me lately?”)

We also have an employee management portal (embedded in the larger intranet) where an employee could manage details about their job, work, etc., including recording their development goals (and efforts) and performance (objectives and work to achieve those).  Managers have a view of this that allows them to see their employees’ data.

Communities and Performance Management

As we worked to drive the communities initiative and adoption of communities of practice as a part of the corporate culture, one of the questions that commonly came up was, “How do these communities contribute to my performance? How can I communicate that to my manager?” That could be asked from the perspective of career development (how can my involvement in communities help me grow?) and also for performance (if I am involved in a community, how does it help me achieve my objectives that are used to measure my performance?)

These are all pretty easily answered in the abstract, but in an objective sense, we found that managers had a challenge talking with their employees about their involvement in communities, and that part of that challenge was that managers did not necessarily “see” their employees’ community involvement (if they were not part of the same community).

Given that we now had our definition of what a community member is and also what an active community member is, it seemed like we could provide some insight to managers from this data and embed that in the employee management portal.

As we were working through this, we found that there was going to be a new component added to the employee management portal labeled “My involvement”, which was intended to capture and display information about how the employee has been involved in the company at large – things like formal recognition they’ve received or recognition they’ve given to others (as part of our employee recognition program) or other ways in which they’ve been “involved”.

This seemed like a perfectly natural place in which we can expose insights to employees and their managers about an employee’s involvement in communities of practice!

So we had a place and the data – it became a simple matter of getting an enhancement into the queue for the employee management portal to expose the data there. It took a few months, but we managed to do that and now employees can view their own involvement and managers can view their employees’ involvement in our communities. The screenshot below shows the part of the employee management portal where an employee or manager can see this view (as with other images, I’ve obscured some of the details a bit here):

Community Involvement in Employee Management Portal

Community Involvement in Employee Management Portal

The Value?

So, what has been the value of this exposure? How has it been used?

While this helps to make some of the conversations between manager and employee about community involvement a bit more concrete, we do recognize that this is still a very partial picture of that involvement. There are many ways in which an employee can be involved in and add value to and learn from a community that goes beyond this simplistic data. (I’ll write more about this “partial picture” issue in a future post.)

That being said, providing this insight to managers has proved very valuable in engendering discussions between a manager and an employee about the employee’s community involvement – what they have learned (how it has affected their career development) and also how it might have contributed to their performance. This discussion, by itself, has helped employees demonstrate their growth and value in ways that otherwise could have been a challenge.

For managers, this gives them insight into value their employees provide that otherwise would have been difficult to “see”.

For the community of practice program, this type of visibility has had an ancillary effect of encouraging more people to join communities, as I suspect (though cannot quantify) that some managers will ask employees about the communities of which they are a member and (more importantly in this regard) the ones in which they are not a member (but which they might be, either by work focus or interest).

Overall, simply including this insight builds an organizational expectation of involvement.