User:TJones (WMF)/Notes/Relevance Lab

October 2015 — See TJones_(WMF)/Notes for other projects. Questions and comments are welcome on the Talk Page!

Relevance Lab

The primary purpose of the Relevance Lab is to allow us[1] to experiment with proposed modifications to our search process and gauge their effectiveness[2] and impact[3] before releasing them into production, and even before doing any kind of user acceptance or A/B testing. Also, testing in the Relevance Lab gives an additional benefit over A/B tests (esp. in the case of very targeted changes): with A/B tests we aren't necessarily able to test the behavior of the same query with two different configurations.

  1. Appropriate values of "us" include the Discovery team, other WMF teams, and potentially the wider community of Wiki users and developers.
  2. "Does it do anything good?"
  3. "How many searches does it affect?"


Because there are so many moving parts to the search process, different use cases can have significantly different infrastructure needs, and the complexity of the interplay among the various use cases also needs to be handled.

At the highest level and for the simplest case of comparing a single change against a baseline, we need to be able to:

  • specify a set of baseline (A) queries to run
    • optionally specify a set of modified (B) queries to run
  • specify a baseline (A) search configuration
    • optionally specify a modified (B) search configuration
  • automatically compare the results of A and B and generate summary statistics
    • automatically identify and manually inspect a subset of the differences between A and B

Queries

Sometimes the proposed change would entail modifying queries (e.g., dropping stop words, dropping quotes, removing question components, eliminating inappropriate wildcards, dropping the word "quot", automatically translating queries into the target language of the wiki before searching, etc.), with the rest of the search configuration the same.

Rather than go to all the trouble of implementing and/or integrating a fully working version of a particular query modification algorithm into CirrusSearch, we can test the effect it would have by running the modified queries directly and comparing their results to those generated by their unmodified counterparts. (See Query Sets and Query Mungers below.)

Query sets

In the simplest case, we could have two sets of queries, in corresponding order (i.e., the first in Set A will be diffed against the first in Set B, etc.). For example:

Set A:

  • "first man on the moon"
  • what is the "house of representatives"
  • "laverne AND shirley"

Set B:

  • first man on the moon
  • what is the house of representatives
  • laverne AND shirley

We run A, we run B, and we compare the differences in the results.
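
A minimal sketch of how such paired sets might be consumed (the file handling is straightforward; run_query is a stand-in for whatever actually sends a query to the search backend, and is not an existing tool):

  # Sketch only: pair corresponding queries from set A and set B and
  # diff their results. run_query is a caller-supplied stand-in for the
  # real search call; file names are hypothetical.
  def compare_sets(path_a, path_b, run_query):
      def load(path):
          with open(path, encoding="utf-8") as f:
              return [line.rstrip("\n") for line in f if line.strip()]
      set_a, set_b = load(path_a), load(path_b)
      assert len(set_a) == len(set_b), "query sets must be in corresponding order"
      for query_a, query_b in zip(set_a, set_b):
          # the nth query of A is always diffed against the nth query of B
          if run_query(query_a) != run_query(query_b):
              print("DIFF:", repr(query_a), "vs", repr(query_b))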

Query mungers

Alternatively, we could have one set of queries and a query munger (say, a runnable script following a standard API—possibly as simple as read from STDIN, write to STDOUT) that modifies the query set, and compare against the munged queries. For example:

Set A:

  • "first man on the moon"
  • what is the "house of representatives"
  • "laverne AND shirley"

Query Munger:

  • perl -pe 's/"//g;'

This has the advantage of testing the actual proposed munging method and potentially accounting for the runtime of the query munging. The disadvantage is that setting a runnable script loose is a potential security concern.
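
A sketch of how a harness might drive such a munger, assuming the read-STDIN/write-STDOUT convention described above and using the quote-stripping one-liner from the example (the query file names are hypothetical):

  # Sketch: produce set B by feeding set A through a query munger that
  # reads queries from STDIN and writes munged queries to STDOUT, one
  # per line, in the same order.
  import subprocess
  munger = ["perl", "-pe", 's/"//g;']          # the example munger above
  with open("set_a.txt", encoding="utf-8") as f:
      set_a_text = f.read()
  munged = subprocess.run(munger, input=set_a_text, capture_output=True,
                          text=True, check=True)
  with open("set_b.txt", "w", encoding="utf-8") as f:
      f.write(munged.stdout)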

Targeted query set

During development, it may make sense to test a modification against a targeted query set so that more use cases are represented in a smaller, more manageable set that can be more quickly run many times.

For example, if testing the effects of removing quotes or modifying the slop parameter, a corpus of queries with quotes is better than a generic query corpus because most queries don't actually have any quotes in them. Such a corpus could be dozens to thousands of queries.

Tagged queries

In a more advanced setup, we could tag queries with labels like: ambiguous query, poor recall, questions, with quotes, with special syntax, etc. This would be particularly useful in a regression test set, which could also be used for exploration of proposed changes. The comparison summary report could also show which tags were most affected by a proposed change, with links to examples.

Regression test set

Conversely, it makes sense to have a generic, representative query set that can be run as a regression test, to make sure that changes don't have significant unexpected impact on queries outside their intended focus. Such a corpus could be hundreds to hundreds of thousands of queries.

Gold standard corpus

A gold standard corpus (i.e., one with known desired results) makes a good regression test set, but is expensive to create. Two (complementary) approaches have been suggested:

  • SME-created corpus: an SME (a Subject Matter Expert, e.g., a member of the Discovery team, a community volunteer, etc.) compares a query and the results it generates, and judges which results, if any, are, say, generally good matches (should be in the top-N), bad matches (should not be in the top-N), "meh" matches (can take it or leave it), or the one true preferred match. Difficulties include the level of effort required from SMEs, and some ambiguity, esp. for "difficult" queries. The process could be managed ad hoc, where only the current top-N results are reviewed by an SME (or, if we care most about top-N, SMEs could preemptively review the top 2N, on the assumption that radical movement is uncommon). Then, any new results brought up by a change in the search process would need review before they could be used to score the changes. Further options include using only one SME to review each item (to save effort), or having 2+ SMEs review each item and then more carefully reviewing disagreements (to improve accuracy); given the nature of our work (low impact from a small number of errors or inconsistencies), one review is probably sufficient in almost all cases.

Unfortunately, such a corpus would require annotation tools, and it would be a small corpus compared to the scale of our A/B testing; there's also a risk of the corpus getting stale unless you continually add to it (and optionally age out older queries). However, it does give a more consistent evaluation/scoring from test to test, and could allow for automatic parameter setting over a largish parameter space if the set is large enough to be divided into training and testing subsets.

Note that it might make sense to pre-filter likely bot traffic from the gold standard corpus. It might also be nice if the annotation tool allowed for dropping queries as either bot traffic or junk (e.g., noting that "hfhfdjkhfjsdkhfjdkshjkkkkkkkkkkkkkk" is not a real query).

  • User-created corpus: we track which results are clicked on by the user for a given query, and assume, say, that a click implies relevance, and average across multiple users. Difficulties include modelling user actions/intentions (do we track time on clicked-to page?, for example), and finding multiple users with the same queries, which limits query diversity. Should we highlight queries that get few clicks as ones without good answers?

We may be able to extract this information from the User Satisfaction logs without any additional front end work. Another advantage of this approach is that it allows non-binary answers (i.e., if for query A, 95% of users click the second result, that's the best result; if for query B, 35% click the first result and 40% click the second, there isn't really a clear best result, but there is possibly a preferred order, and scoring can take that into account).
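
A sketch of how per-query clicks might be aggregated into such non-binary relevance signals; the log format and the example values below are invented for illustration, not the actual User Satisfaction schema:

  # Sketch: turn per-query click logs into click shares, a graded
  # relevance signal for scoring.
  from collections import Counter, defaultdict
  clicks = [                                   # (query, clicked page) pairs
      ("first man on the moon", "Neil Armstrong"),
      ("first man on the moon", "Neil Armstrong"),
      ("first man on the moon", "Apollo 11"),
  ]
  per_query = defaultdict(Counter)
  for query, page in clicks:
      per_query[query][page] += 1
  for query, counter in per_query.items():
      total = sum(counter.values())
      shares = {page: count / total for page, count in counter.items()}
      # A dominant share suggests a clear best result; a flat distribution
      # suggests only a preferred order (or no good answer at all).
      print(query, shares)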

Use case—translating queries

As an example, suppose we wanted to test translating queries into the language of the current wiki (e.g., translating non-English queries into English on enwiki). (Disclaimer: this may or may not be feasible, but it's an easy-to-understand example.)

Rather than do all the work of finding/developing, testing, and integrating language detection and machine translation libraries into CirrusSearch, we could build a targeted set of a few hundred to a thousand manually collected non-English queries, manually identify the languages the queries are in, use high-quality online machine translation to translate them to English, and then compare the original untranslated queries to the translated ones. Some, like Росси́я, القذافي‎, and Ολυμπιακοί αγώνες will return the same results, while others will be junk queries in any language.

While this isn't a realistic test, it sets an upper limit on positive impact a system could have, given human-level language identification and state-of-the-art machine translation. If the upper limit is not very good, we can stop. If it seems promising, then we could start testing language detection software and license-compatible machine translation libraries.

Again, rather than do all the integration work, we could run the libraries outside CirrusSearch and see how they perform, either as a "manual" process, or with a query munging command line program (to test performance).

If we're happy with the results on our sample of non-English queries, we would then run a regression test against a sample of all queries, to make sure, for example, that the language detector doesn't have too many false positives.

Search configurations

This is a very broad umbrella that covers lots of different kinds of changes, which may have significantly different infrastructure requirements to run, and may have different analysis requirements as well. Four different use cases are presented below.

Parameter/configuration changes

There are simple configuration changes that don't require any additional code to run (e.g., adjusting the slop parameter, or enabling an option). These could be run against the same index, potentially at the same time if the cluster hosting the index is up to it. (This also applies to modified queries, above.)
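
In practice, such a config variant could be nothing more than a small JSON file of overrides; a minimal sketch, where the variable name is illustrative only and not necessarily the real CirrusSearch setting:

  # Sketch: a search config variant is just a JSON dictionary of
  # overrides. "wgCirrusSearchPhraseSlop" is a hypothetical example name.
  import json
  slop_variant = {"wgCirrusSearchPhraseSlop": 2}
  with open("runSearchConf.slop2.json", "w", encoding="utf-8") as f:
      json.dump(slop_variant, f, indent=2)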

Modified code

Sometimes a change requires modifying code, rather than just changing configuration. There are a few ways to handle this, including:

  • Modify the code being run, e.g., by specifying a Gerrit patch, which has to be downloaded, merged, and run against the appropriate index. We need to insulate such changes from each other, either by giving each change its own sandbox to work in (possibly on the tester's own laptop), or by scheduling jobs so that they are run in sequence.
  • Merge the modified code into, say, a branch specific to the Relevance Lab, control it with a parameter/config setting, and keep the default behavior the same, thereby reducing this to the previous case (a config change). This adds complexity in terms of maintaining the branch, and has the potential for some changes to leak into others. However, it may require fewer resources.

We don't want dealing with modified code to cause problems for simple query or config modifications, so we need sandboxing, job scheduling, or some other mechanism to keep different test scenarios separate.

Clearly there's lots to be learned here from Gerrit (good and bad), and I need help fleshing out the technical details (esp. limitations and hardware requirements). (?Erik)

Modified indexes

Sometimes we have changes that require modifying or adding indexes—like using a reverse index so we can catch typos in the first two letters of a word.

David detailed two variations:

  1. Same document model but a change in the mapping (e.g., a reverse field): an in-place reindex is needed. We should just need to adapt our maintenance scripts to allow the creation of multiple indices (today the script will fail if it detects two versions of the content index).
  2. A change in the document model or in the way we parse input data (a new field, a better parser for opening text extraction, ...): no clear plan here yet; it requires access to the production DB and a full rebuild (5 days for enwiki).

This still requires additional thinking through. (?Erik)

Scope of indexes

Because of hardware limitations, we may not be able to host all the wikis at once. We may also need to be able to swap out indexes (e.g., drop English and add Finnish to test a Finnish morphological analyzer).

We should be able to specify which index queries should be run against (perhaps with a default of enwiki), at the level of the comparison (i.e., for both A and B), for each query set (one for A, one for B—I've actually done this), or at the level of individual queries (possibly with a query-set specification acting as a default; e.g., based on language detection results). I think at the level of each query set with individual query overrides is a good option.
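
One possible (entirely hypothetical) query-file convention for that option: a file-level default index, with optional per-query overrides given as a tab-separated prefix:

  # Sketch of a hypothetical query-file format: "index<TAB>query" lines,
  # falling back to a file-level default index when no prefix is given.
  def parse_query_file(path, default_index="enwiki"):
      entries = []
      with open(path, encoding="utf-8") as f:
          for line in f:
              line = line.rstrip("\n")
              if not line:
                  continue
              if "\t" in line:
                  index, query = line.split("\t", 1)   # per-query override
              else:
                  index, query = default_index, line
              entries.append((index, query))
      return entries

  # parse_query_file("nonenglish_queries.txt", default_index="enwiki")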

Results munging, etc.

There may be future cases not covered by those above, especially if we change the kinds of results we're giving. Such changes and changes to the Relevance Lab need to go hand in hand.

As an example, consider a method of results munging that includes highlights from infoboxes of top results, so that the actors of a film might be listed after the film itself, particularly if the query includes a film title and the word actor. In this case, perhaps we'd want to give the actors the same data-serp-pos value (the rank within the search engine results), or a related one (e.g., 2.1, 2.2, 2.3, etc., for actors listed along with a film that ranked #2).
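
A sketch of what those fractional positions could look like as a data structure; the field names and example results are invented for illustration:

  # Sketch: flatten "film plus its actors" munged results into SERP
  # entries with fractional serp_pos values (1, 1.1, 1.2, 2, 2.1, ...).
  results = [
      {"title": "The Martian (film)", "actors": ["Matt Damon", "Jessica Chastain"]},
      {"title": "Apollo 13 (film)", "actors": ["Tom Hanks"]},
  ]
  serp = []
  for pos, result in enumerate(results, start=1):
      serp.append({"title": result["title"], "serp_pos": str(pos)})
      for sub, actor in enumerate(result["actors"], start=1):
          serp.append({"title": actor, "serp_pos": f"{pos}.{sub}"})
  # serp_pos values here: 1, 1.1, 1.2, 2, 2.1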

Generating diffs and stats

When we've run two different versions of our query/search config combination, we want to see what changed.

Effectiveness

If we have a gold standard corpus, we can automatically compute whether the net change is better or worse—more specifically, we can compute recall, precision, F1 (and/or maybe F2, since recall is probably more important), Normalized DCG, Okapi BM25, and/or whatever other metrics seem useful.

See also "Inspecting changes" below.

Impact

Even without gold standard results, we can measure many useful changes automatically:

  • the number of queries with zero results
  • the number of queries with changes in order in the top-N (5?, 10?, 20?) results
  • the number of queries with new results in the top-N results
  • the number of queries with changes in total results
  • a heatmap of the overall shift in ranks (e.g., how many #1s fell to #5, how many #37s rose to #2, etc.)
  • etc.

Obviously, changes that have almost no effect on a targeted set of queries probably aren't worth deploying without improvement, while changes that affect 94% of queries in a regression test set need to be very carefully vetted.
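
A sketch of how a few of these impact measures could be computed from two runs, each represented here as a dict mapping a query to its ordered list of result IDs (that representation is an assumption, roughly matching the JSON-per-line output described under the MVP below):

  # Sketch: impact stats for two runs; each run is a dict: query -> ordered result IDs.
  def impact_stats(run_a, run_b, n=5):
      stats = {"zero_to_some": 0, "some_to_zero": 0,
               "top_n_changed": 0, "total_changed": 0}
      for query, ids_a in run_a.items():
          ids_b = run_b.get(query, [])
          if not ids_a and ids_b:
              stats["zero_to_some"] += 1
          if ids_a and not ids_b:
              stats["some_to_zero"] += 1
          if ids_a[:n] != ids_b[:n]:
              stats["top_n_changed"] += 1      # order change or new results in top-N
          if len(ids_a) != len(ids_b):
              stats["total_changed"] += 1      # returned-result count as a stand-in for total hits
      return stats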

Performance

We can also get summary performance statistics for each run. For example:

  • A: 2871 queries ran in 3.5s, 0.00122s/query
  • B: 2871 queries ran in 350.0s, 0.12191s/query

Big jumps in performance would need to be taken with a grain of salt—competing jobs on the same cluster, non-production-like limitations of memory size or disk speed, or the phases of the moon could affect performance—but they serve as a useful warning flag.

Search config diffs

Search config diffs should be noted (e.g., which index a query file was run against, which Gerrit patches were used, and the names of the runSearchConf.json variants used) and, where possible, provided as actual diffs (e.g., diffs between runSearchConf.json variants).

Query string diffs

The number of query strings that differ between query runs should also be noted. For example, when testing quote stripping against a regression test set, it would be helpful to note that only about 1% of queries have quotes in them, so 99% of queries are unchanged—as expected. If 47% of the queries have changed, then maybe your quote-stripping code is a little overeager!
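
This is cheap to compute for paired query sets; a minimal sketch:

  # Sketch: fraction of query strings that differ between run A and run B.
  def changed_fraction(set_a, set_b):
      changed = sum(1 for a, b in zip(set_a, set_b) if a != b)
      return changed / len(set_a) if set_a else 0.0

  # changed_fraction(['"foo"', "bar"], ["foo", "bar"])  ->  0.5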

Inspecting changes

For any category of change, in whatever direction, it should be possible to inspect examples of the change. For example, if we see that 137 queries went from zero results to some results, 10 went from some results to zero results, and 17 had results move in or out of the top 5, we should be able to click on examples to get a side-by-side list of results from the A case and the B case with diffs highlighted.

It isn't always possible (or necessary) to inspect all changes. If 20 out of 20 random examples are terrible, it's probable that most of the changes are not good, and you should rethink what you are trying to do.

One query set / one config

It's also possible that our "A" and "B" query sets and configs are the same, so that no diff is needed.

One reason for this could be that we want to see how a particular set of queries behaves in the default case and gather statistics on that set; for example, we could run a bunch of non-English queries against enwiki, or a collection of queries with "quot" in them, just to see how many get any results.

Another case would be generating a standard baseline to compare other variations against (see below); we don't need diffs from this run on its own, but we do want it handily pre-computed so we can later compare some change against it.

The system should be smart enough to run one batch of queries and generate stats without requiring a second batch to compare against at that moment.

Running against an existing baseline

In addition to being able to specify a second set of queries/configs to run and compute diffs against, it's useful (and efficient) to be able to compare against an existing run.

For example, suppose we want to do a multiway comparison of various slop values, say 0 (the default), 1, 2, and 3 (reasonable cases), and 100 (a limiting case to gauge the maximum possible impact). We can run A=Slop0 against B=Slop1, then reuse A=Slop0 against B=Slop2, B=Slop3, and B=Slop100, all without re-running Slop0 (meaning five sets of queries run, rather than eight).

Later, when deciding between Slop1 and Slop2, we should be able to just run the diff process on Slop1 results and Slop2 results, which would be very quick, and would highlight the cases where the difference in slop value actually mattered.

In another example, we are testing the effects of removing quotation marks, so A=Baseline and B=WithoutQuotes. The results look good, but we realize we forgot to remove “smart quotes”, so we can run another diff, but with A=WithoutQuotes and B=WithoutQuotes2, so the only difference we see is the effect of removing smart quotes in B, since all the regular quotes will already have been removed in A.

So, we should be able to kick off a diff for some A and B, and be smart enough not to run A or B if it already exists, and use stored intermediate results to prepare a diff summary.
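
In other words, query runs are cached by name, and a comparison only triggers the runs it is missing. A minimal sketch of that logic, using the relevancelab/ layout proposed under the MVP below (the paths, file name, and function names are assumptions):

  # Sketch: only execute a query run if its results aren't already stored.
  import os
  RUN_DIR = "relevancelab/queryruns"          # layout proposed in the MVP section

  def ensure_query_run(name, execute_run):
      """execute_run(path) is a stand-in for actually running the queries."""
      results = os.path.join(RUN_DIR, name, "results.jsonl")   # hypothetical file name
      if not os.path.exists(results):
          os.makedirs(os.path.dirname(results), exist_ok=True)
          execute_run(results)
      return results

  # e.g., reuse Slop0 if present, run Slop2 only if needed, then diff:
  # a = ensure_query_run("slop0_baseline151012", run_slop0)
  # b = ensure_query_run("slop2_151012", run_slop2)
  # generate_diff(a, b)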

Security

The more open the Relevance Lab is (and we want it to be open!) the more we have to consider issues that broadly fall under the heading of security.

Queries as personally identifiable information

Unsanitized user queries are considered to potentially be personally identifiable information (PII). It's clear from search logs that people accidentally search Wikipedia all the time, and may very well search for their own or other people's personal information (names, usernames, email addresses, physical addresses, and phone numbers all show up in the logs).

There are ways of automatically sanitizing queries, but none are 100%, and too much sanitization can skew a query sample (e.g., by removing all names, including those of celebrities that people search for all the time, which are not PII). Manually sanitizing queries can be more accurate, but is also more time consuming.

We need to come up with some way to safely handle queries, especially if the Relevance Lab is running on the labs cluster and is not considered safe for PII.

Vagrant

One option would be to disable certain functions in labs that display queries (such as inspecting results), and still leave basic functions available, like summary statistics. To enable full functionality against the full wiki indexes, we could disable standard logging and pipe queries over SSH from an appropriately secure server or laptop.

A properly configured Vagrant role could set all this up in such a way that the only shared portion of the Relevance Lab in the wmflabs cluster are the indexes (and even allow one to point to other indexes). This would not allow ready sharing of certain resources (like recent regression test baseline runs), but would deal with lots of other issues, like sandboxing query mungers and PII. It's not clear to me what level of code customization would be possible with such a configuration.

This seems like the most reasonable path to supporting community involvement in using the Relevance Lab.

Community involvement

There are lots of ways that people outside the Discovery team can use the Relevance Lab to help improve our search process relevance. The community as a whole has a significantly larger pool of resources than the Discovery team alone—including everything from specific language skills to just having lots more eyeballs on search results.

  • Being able to generate and run a targeted set of queries to demonstrate an issue. For example, to show that queries with quotes do worse than queries in general, anyone could run a set of queries with quotes and generate stats. Or better, if someone has an idea on how to improve the situation (e.g., remove the quotes): run the same queries without quotes and generate diff stats.
  • Helping with languages outside our area of expertise. I don't think we have any particularly fluent German speakers on the Discovery team (and if we do, there are plenty of languages we don't have, and I know enough to give examples of possible problems in German). We could have someone look at results from German queries and point out why they think certain queries aren't improving (e.g., because we don't handle certain compound nouns well).
  • Tomasz has also suggested something like the Netflix Prize (though without the large cash prize, obviously). Given a relevant annotated corpus, we could allow anyone to offer improvements that maximize performance on that corpus. If we wanted to be competitive and/or discourage overfitting, we could set up a mechanism that would allow people to run their config against a separate evaluation corpus and then have the results scored without providing the annotations needed to do the scoring. Doing this on a sufficiently large scale to make cheating too difficult probably isn't worth it, but such a mechanism would allow people to test their configs without fear of overfitting.

Tasks

Everything here is a straw man proposal, and happily subject to revision, adding of details, removal of MVP features, and other random improvements and modifications.

Minimum Viable Product (MVP)

Start with a simple command line interface that allows the user to specify a comparison run. This should be reasonable for an MVP, to get it done in about a week, but it should later be expanded into some sort of basic HTML interface (it doesn't need to be fancy at all; see "Crude mockups" below).


A comparison run consists of two query runs, a baseline and a delta, the resulting stats and diffs on them, a label/name, optional description text, and the current date.


A query run consists of a specified query file, a search config, a label/name, optional description text, and the current date.


A search config consists of a JSON dictionary that will override global variables within MediaWiki.


This CLI application should know how to set up an SSH tunnel between your local Vagrant instance and the hypothesis testing cluster in labs. Likely that will require copying your SSH private key into the Vagrant directory or some such.

Each run will generate a new directory which will contain all information about the run. Both the A and B sets of queries are copied into this directory. runSearch.php is run once for each wiki and outputs a JSON file for each, containing one JSON string per line. Each JSON string represents the result of one query.
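
A sketch of reading such a file back for diffing, assuming one JSON object per line; the field names ("query", "rows", "pageId") and the example path are assumptions for illustration, not the actual runSearch.php schema:

  # Sketch: load a JSON-lines results file into query -> ordered result IDs.
  import json

  def load_results(path):
      results = {}
      with open(path, encoding="utf-8") as f:
          for line in f:
              record = json.loads(line)
              results[record["query"]] = [row["pageId"] for row in record["rows"]]
      return results

  # run_a = load_results("relevancelab/queryruns/baseline151012/enwiki.jsonl")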

CLI options to provide:

  • Specify the comparison run: give it a name, optional description, and specify two query runs.
  • Specify a query run: give it a name, and optional description, and:
    • Queries: specify a query file (for query run B, any automatic munger will have been run manually)
    • Search config: specify variant runSearchConf.json
  • Similarly specify second query run.
  • Indexes: default index is enwiki (just to start)
  • Run:
    • 1) Execute runSearch.php with appropriate runSearchConf.json for set A
    • 2) Execute runSearch.php with appropriate runSearchConf.json for set B
    • 3) generate diffs and stats
  • Diffs and Stats: need a tool to run on two sets of results
    • Compute zero results rate for each set of results
    • Note diffs in impact stats (right now, just zero results rates) and get a list of examples of change in any direction.
    • Save diffs and stats to a file
  • Internals: locally have a designated relevancelab/ directory
    • relevancelab/queryruns/ has a subdirectory for each query run, e.g., relevancelab/queryruns/baseline151012/, with:
      • info.txt (or info.xml) file with description, date, name of default index, Gerrit patch ID (future), and baseline code version
      • query results stored to file system as a single file. We could do some post-processing in the future, but for now this will simply be a file per query set containing many lines. Each line is a json dictionary containing result set information.
    • relevancelab/comparisons/ has a subdirectory for each comparison run, e.g., relevancelab/comparisons/quotetest151012/, with:
      • info.txt (info.xml) file with description, date, and name/directory of query runs being compared
      • summary stats files (TBD: one summary stats file for everything, or snippets for each, or what?)
      • diff files: includes links to examples of changes for each stat, and the data needed to display diffs between the two query runs.
  • Corpora: create targeted sets manually if needed, create relevant regression test sets manually as needed, maybe share on stats1002 or fluorine for now
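
A rough sketch of how the CLI options above might be shaped; all option names and defaults below are placeholders, not the actual tool:

  # Sketch: a possible CLI for specifying a comparison run (names are placeholders).
  import argparse

  parser = argparse.ArgumentParser(description="Relevance Lab comparison run")
  parser.add_argument("--name", required=True, help="comparison run name")
  parser.add_argument("--description", default="", help="optional description")
  parser.add_argument("--baseline-queries", required=True, help="query file for run A")
  parser.add_argument("--baseline-config", default=None,
                      help="runSearchConf.json variant for run A (default: production config)")
  parser.add_argument("--delta-queries", required=True,
                      help="query file for run B (pre-munged if needed)")
  parser.add_argument("--delta-config", default=None,
                      help="runSearchConf.json variant for run B")
  parser.add_argument("--index", default="enwiki", help="index to run against")
  args = parser.parse_args()
  # 1) run A, 2) run B, 3) generate diffs and stats into relevancelab/comparisons/<name>/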

MVP Tasks

Rather than duplicate documentation and have it get out of sync, Phabricator tickets are linked below.

More Improvements!

Since we're starting with a command line interface to work out the kinks, the next step is to put a super simple web-based UI wrapper around it. After that, there's lots more to add.

Web-based UI

We're going to start with a CLI version of the needed tools, and we can quickly change them to a UI based on HTML/PHP/MediaWiki extension.

  • UI: view comparison summary (appropriate URL will fetch and display details from relevancelab/ directory):
    • comparison details (name/label, maybe description, names of query runs)
    • summary stats (time to run, time per query, query run metrics (i.e., zero results rate), number of queries that differ)
    • search config details (link to diff of runSearchConf.json variants)
    • diff examples (examples of queries with changes in query run metrics)
  • UI: diff viewer (given a query, look up results from both query runs, and display in useful diff format)
    • See "Crude mockups" below.
Crude mockups

These mockups are pretty crude, and show a few features that might not be included in the MVP. A picture is worth a thousand words, even if some of them are slightly inaccurate! (N.B.: Original plans called for changes to LocalSettings.php, but using runSearchConf.json is better. Swap as needed.)

In this scenario, we're testing the effect of removing quotes from queries. We've already run a comparison, but afterwards we noticed errors in the delta query file, so we need to re-run the comparison. Since the baseline query run is the same, we re-use it, rather than re-run it.

 
Comparison launch mockup.

We choose the existing baseline "Quotes Baseline 151012", and its info is populated in the form, and is not editable. We choose a "[new]" delta and give it a name "Quotes Test 151012b" (...b because we already ran one today and it didn't work out). The query file is different in the delta because here the only difference is the query strings, which have been pre-munged. LocalSettings.php (should be runSearchConf.json) isn't affected, so it's the default in both cases.

 
Comparison summary mockup.

After everything has run, this is the summary page for the comparison run. It shows the stats for the baseline and delta (note that the delta took 10x as long to run!), and the metrics (at this point, just zero results rate, which went down by 2.2%). Not shown: the number of query strings that differ between query runs.

Diffs are available.

  • The Diff Viewer lets you browse all queries, including those with no diffs. Detailed UI TBD.
  • runSearchConf.json Diff (even though it says LocalSettings.php Diff) shows the diffs between the runSearchConf.json files (in this case there are none).
  • Then there are specific diffs for each metric (still just zero results rate). Down arrows indicate ZRR decreased (i.e., we got some hits when there were none); up arrows indicate ZRR increased (i.e., we used to get hits, now we get none). What's the right icon? Arrows or +/- to indicate increase/decrease? Thumbs up/down to indicate better/worse (since sometimes an increase is better, sometimes worse)?

(These examples are unrealistic, since ZRR will almost always go down when quotes are removed. Hence "crude mockups".)

 
Comparison Diff View mockup.

Now we're in the diff viewer. Either browsing, or having clicked on a specific example from the metrics section.

(Note that this example does not correspond to the previous summary mockup, since removing quotes has, more realistically, given more results. The results are fudged, too, to show more interesting diffs. There should probably be more info on top, too, showing which comparison run we're looking at. Diffs may not include snippets of results or bolding of search terms. Just trying to present a flavor of the diff view. Hence "crude mockups".)

Even more improvements!

In priority order, based on a mysterious mix of complexity, impact, desirability, and dependencies, subject to much re-ordering. Big things are tagged [Project], and may (probably) require more planning.

  • Tools: a Relevance Lab portal page should have a link to launch a comparison, a link to the list of available comparison runs, and a link to the list of available query runs
  • Tools: allow the selection of an existing query run for either baseline or delta
    • Run: 2) & 3) skip as applicable


  • Indexes: specify index for each query file
    • Run: 2) & 3) ... connect to appropriate index ...


  • Diffs and Stats: support more metrics: the number of queries with changes in order in the top-N (5?, 10?, 20?) results
  • Diffs and Stats: think about additional useful metrics, describe and prioritize them. Some random brainstorming:
    • No change: In some cases, it might be good to look specifically at queries that show no change, esp. in a targeted set, to figure out what we are failing to account for.


  • Tools: check for existing run names and suggest improved names (i.e., add a letter on the end)


  • Queries: specify query file A and query munger to generate query file B
    • Run: 0) run query munger on file A with output to file B


  • Indexes: query index server in labs for available indexes, provide list for file-level specification


  • Diffs and Stats: support more metrics: the number of queries with new results in the top-N results
  • Diffs and Stats: support more metrics: the number of queries with changes in total results


  • Tools: tools should be smart enough to check that a query run exists and do the right thing (esp. when A=B); if the diff tools are quick enough, running them when A=B to get stats (speed and ZRR, for example) probably doesn't matter; but we shouldn't re-run all the queries against the index in that case.


  • Search config: adapt as needed to allow results munging
    • Diffs and Stats: may need to modify diffs because the same result may be a miss in one case, and a hit in another
    • (This might be higher than it deserves because I really want to work on this!)


  • Indexes: support for query-by-query index specification
    • (?Erik) Should these be batched by index or can we easily change indexes on the fly?


  • Search config: support Gerrit patch specification [Project]
    • lots of difficulty with dealing with conflicts, sandboxing, etc. Details TBD.


  • Vagrant support: add the necessary bits to a vagrant role to allow anyone to use the index server [Project]


  • Diffs and Stats: support more metrics: a heatmap of the overall shift in ranks (e.g., how many #1s fell to #5, how many #37s rose to #2, etc.)


  • Tools: pre-populate forms when choosing an existing query run (or hide it until pre-populating is possible)
  • Tools: auto-add the date part of name


  • Search config: add support for using different indexes [Project]
    • lots of details TBD.


  • Corpora: Regression test set—either
    • Create one or more standard regression test sets (e.g., enwiki, multi-wiki, etc), and a canonical place to keep them (with appropriate PII protection), or,
    • Create a method for creating a regression test set on the fly (e.g., specify a list of wikis, the number of queries to sample from each, and a date range)
    • The ability to filter likely bot traffic in either case would be nice, too!


  • Corpora: Gold Standard Corpora [Project]
    • develop schema for recording annotations
      • query
      • set of results
        • result ID
        • SME quality: {good,bad,meh,onetrue}
        • User click quality: % (of clicks in some sample)
      • etc.
    • SME-based annotations
      • create annotation tools to run query and allow results to be tagged
      • allow additional (not returned) results to be added (e.g., "ggeorge clooney" returns nothing, but clearly "George Clooney" is the desired result).
      • pre-filter likely bot traffic from corpus, and allow annotators to indicate bot/junk queries, which are dropped or ignored.
    • User-based annotations
      • build tool to extract click info from logs
    • Gold standard scoring tools
      • Given a set of search results for a corpus and the annotated results, generate various scores:
        • recall, precision, F2, etc, for top result, top 3, top 5, etc.
        • special processing for SME scores (e.g., good = +1, bad = -1, meh = 0; onetrue?)
        • partial credit for user % scores, order dependent
        • multi-objective optimization/scoring might give more weight to getting a onetrue result into the top 5 for one query than to improving the order of good results in the top 5 for any number of queries, for example.


  • Support for tagged queries
    • A mechanism/UI for tagging queries (possibly with a controlled vocabulary, so we don't have, for example, "ambig", "ambiguous", and "ambig query" all as tags).
      • Could be separate and/or built into the diff viewer for opportunistic tagging while reviewing
      • Since tagging is extra work that increases the quality of a corpus, we have to figure out how to keep corpora around for longer, and how to share them (including revision control).
    • Modify the comparison summary reports to show info by tags, including which tags had the most changes, and linking to examples by tag.
  • Diffs and Stats: if we ever work on changing the article snippets we show, we might want to highlight snippet differences, too.