I was recently asked by a former co-worker (Ray Sims) for some suggestions around requirements that he might use as the basis for an evaluation of search engines. Having just gone through such an evaluation myself, and also having posted here about the general methodology I used for the evaluation, I thought I’d follow that up with some thoughts on requirements.
If you find yourself needing to evaluate a search engine, these might be of value – at least in giving you some areas to further detail.
I normally think of requirements for search in two very broad categories – those that are primarily about helping the user performing the search (End User Search Requirements) and those that are directed at the person or people responsible for administering / maintaining the search experience (Administrator Requirements).
End User Search Requirements
Administrator Requirements
A few months back, I was asked to evaluate my company’s current search solution against another search engine to try to determine if it would be worthwhile to implement a new solution. I’ve done package / tool evaluations in the past, but I felt that there was something a bit different about this one: I needed to somehow integrate a fairly standard requirements-based evaluation with a measure of the quality of the search results themselves, which is not easily expressed as concrete requirements.
So I set about the task and asked the SearchCoP for suggestions about how to evaluate the search results in a meaningful and supportable way. I received several useful responses, including some suggestions from Avi Rappaport about a methodology for identifying a good representative set of search terms to use in an evaluation.
With my own experiences and those of the SearchCoP in hand, I came up with a process that I thought I would share here.
I split the assessment into two distinct parts. The first was a traditional “requirements-based” assessment, which let me capture each engine’s support for a number of functional and architectural needs I could identify. Some examples of such requirements were:
The second part of the assessment was to measure the quality of the search results.
I’ll provide more details below for each part of the assessment, but the key thing was to have a (somewhat) quantitative way to measure the overall effectiveness and power of the search engines. It might even be possible to combine the measures of these two components quantitatively, though I did not do so in this case.
For the first part, I used a simplified quality function deployment (QFD) matrix – I identified the various requirements to consider and assigned each a weight (level of importance); based on some previous experiences, I forced the weights to be either a 10 (very important – probably “mandatory” in a semantic sense), a 5 (desirable but not absolutely necessary) or a 1 (nice to have) – this provides a better spread in the final outcome, I believe.
Then I reviewed the search engines against those requirements and assigned each search engine a “score” which, again, was either a 10 (met out of the box), a 5 (met with some level of configuration), a 1 (met with some customization – i.e., probably some type of scripting or similar, but not configuration through an admin UI) or a 0 (does not meet and cannot meet).
The overall “score” for an engine was then computed as the sum of the products of weight and score across all requirements.
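To make that arithmetic concrete, here is a minimal sketch of the weighted-sum calculation; the requirement names and values are illustrative only, not from the actual evaluation:

```python
# Weights: 10 = very important, 5 = desirable, 1 = nice to have
# Scores:  10 = met out of the box, 5 = met via configuration,
#          1 = met via customization, 0 = not met
requirements = [
    # (requirement, weight, engine score) -- illustrative values only
    ("Indexes PDF documents",           10, 10),
    ("Admin UI for managing synonyms",   5,  5),
    ("Spelling suggestions",             1, 10),
    ("Faceted navigation",               5,  0),
]

# Overall score = sum of (weight * score) across all requirements
overall = sum(weight * score for _, weight, score in requirements)
print(overall)  # 135 for the sample values above
```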
This simplistic approach can give too much weight to certain areas of requirements. Because each requirement carries its own weight, an area you have detailed heavily can dominate the overall score simply because of the number of requirements it contains. For example, if you have a total of, say, 50 requirements and 30 of them fall into one area (say you have specified 30 different file formats you need to support, each as a separate requirement), then a significant percentage of your overall score will be contingent on that area. In some cases that is OK, but in many it is not.
To work around this, I took the following approach: I normalized the scores within each category of requirements before combining them. This approach gives you a score for each engine between 0 and 100 and also gives each category a roughly equal effect on the total score.
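The exact normalization steps aren’t spelled out above, but one way to achieve the effect described – each category contributing roughly equally and a final score between 0 and 100 – is to express each category’s weighted score as a percentage of its maximum possible score and then average those percentages. A minimal sketch, with categories and values made up for illustration:

```python
from collections import defaultdict

# Each requirement: (category, weight, engine score) -- illustrative values only
requirements = [
    ("File formats",   10, 10),
    ("File formats",   10,  5),
    ("File formats",    5, 10),
    ("Security",       10,  5),
    ("Security",        5,  0),
    ("Administration",  5, 10),
]

# Weighted score and maximum possible score per category
actual = defaultdict(float)
maximum = defaultdict(float)
for category, weight, score in requirements:
    actual[category] += weight * score
    maximum[category] += weight * 10  # 10 is the best possible score per requirement

# Each category contributes equally, regardless of how many requirements it holds
category_pcts = [actual[c] / maximum[c] for c in maximum]
overall = 100 * sum(category_pcts) / len(category_pcts)
print(round(overall, 1))  # 71.1 for the sample values above
```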
If you are looking for some insights on categories of requirements you might want to include in your evaluation, I provide some of my thoughts in a subsequent post.
To measure the quality of search results, I took Avi’s insights from the SearchCoP and identified a set of specific searches that I wanted to measure. I identified the candidate searches by looking at the log files for the existing search solution on the site and pulling out a few searches that fell into each category Avi identified. The categories included:
Going into this, I assumed I did not necessarily know the “right” targets for these searches, so I enlisted some volunteers from a group of knowledgeable employees (content managers on the web site) to complete a survey I put together. The survey included a section where the participant had to execute each search against each search engine (the survey provided a link that ran the search, so participants did not have to go to a search screen somewhere, enter the terms and search themselves – this was important to keep it somewhat simpler). The participants were then asked to score the quality of the results for each search engine (on a scale of 1-5).
The survey also included some other questions about presentation of results, performance, etc. (even though we did not customize the search result templates or tweak anything in the searches, we wanted to get a general sense of usability), as well as a section where users could define and rate their own searches.
The results from the survey were then analyzed to get an overall measure of result quality across this candidate set of searches for each search engine – basically aggregating the ratings for the different searches into average scores or similar.
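As a rough illustration of that aggregation (with entirely hypothetical searches and ratings), the averaging might look something like this:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: (participant, search, engine, rating on 1-5 scale)
responses = [
    ("p1", "expense report form", "Engine A", 4),
    ("p1", "expense report form", "Engine B", 3),
    ("p2", "expense report form", "Engine A", 5),
    ("p2", "vacation policy",     "Engine B", 2),
]

# Collect ratings per engine across all searches and participants
by_engine = defaultdict(list)
for _participant, _search, engine, rating in responses:
    by_engine[engine].append(rating)

# Average rating per engine serves as the overall quality measure
for engine, ratings in sorted(by_engine.items()):
    print(engine, round(mean(ratings), 2))
```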
With the engines we were looking at, the results were that one was better on the administration / architectural requirements and the other was better on the search results – which makes for an interesting decision, I think.
The key takeaway for me from this process is that it is at least quantitative – one can argue over the set of requirements to include, the weight of any particular requirement, or the score of an engine on a particular requirement. However, the discussion can be held at that level instead of at a more qualitative level (AKA “gut feel”).
Additionally, for search engines, taking a two-part approach ensures that each of these very important factors is included and reflected in the final outcome.
In my own execution of this approach, I know there are some issues, though I believe the general methodology is sound. These include (in no particular order):
As for the survey / search results evaluation: