A new page (shown in the top margin) has been added called Mr. EDR. It introduces and explains the blog’s new mascot, a robot whose full name is Lexi EDR. The page was written by Lexi himself, and so is a self-introduction. You may remember that he used to hang out at ITLex, but after the foundation closed, he joined the e-Discovery Team. He got his last name last month as part of the e-Discovery Team’s efforts at the 2015 TREC Total Recall track. The whole story is explained on the Mr. EDR page.
This new blog page is also the place where Lexi describes the Team’s participation in NIST’s 2015 TREC Total Recall Track. The Team’s work is still underway, so you will not find any detailed reports yet, but you will find a general introduction. More information will be made available by the end of the year, or early next year, when we publish our official Final Report.
We can definitely tell you now that the Team was challenged, which we like, and the humans involved all learned a great deal. So too did Mr. EDR. I would highly recommend that any professional legal search expert participate in TREC. Hopefully they will run the Recall Track again in 2016. It has been a very worthwhile and well-run experience. It is also a safe place to run public experiments on search. As TREC explains:
The annual Text REtrieval Conference (TREC) is an event in which organizations with an interest in information retrieval research take part in a coordinated series of experiments using the same experimental data. The goal of the conference series is to create the infrastructure necessary for large-scale evaluation of research retrieval systems and thereby foster research into effective techniques for information access.
By design, TREC is explicitly not a venue for commercial product tests (i.e., benchmark comparisons). A valid, informative vendor test requires a level of control in task definition and system execution that is counter to the scientific research goals of TREC. Insofar as TREC participants do the same task, the results from different participating teams are comparable, but interpretation of what those results actually represent may vary widely. For example, commercial participants may submit results from research prototype systems rather than their production system, or participants may deliberately degrade one aspect of their system to focus on another aspect.
To preserve the desired, pre-competitive nature of the TREC conferences, TREC requires all participants to sign and abide by an agreement concerning the dissemination and publication of TREC evaluation results. The guidelines embodied in the agreement are meant to preclude the publication of incomplete or inaccurate information that could damage the reputation of the conference or its participants. In particular, the agreement prohibits any advertising based on TREC results and sharply curtails the use of TREC results in marketing literature.
NIST and the TREC program committee are strongly committed to the ethos of cooperation the guidelines are designed to engender, but cannot accept responsibility for performance claims made by participants in violation of the agreement. TREC reserves the right to prohibit violators from participating in future TREC conferences.
The e-Discovery Team, including Mr. EDR, will, of course, strictly abide by and follow these guidelines. We are confident that all other participants will too.