Borg Challenge: Part Three where I continue my search through round 16 of machine training

This is the third in a series of reports on a fifty-hour experiment using the Borg approach to predictive coding. In this segment my video describes rounds three through sixteen of the training. For these videos to make sense you first need to read and watch Part One and Part Two of the Borg Challenge. If this still makes no sense, you could try reading a science fiction version of the battle between these two competing types of predictive coding review methods, Journey into the Borg Hive. And you thought all predictive coding software and review methods were the same? No, it is a little more complicated than that. For more help and information on Computer Assisted Review, see my CAR page on this blog.

The kind of repetitive review task method I am testing here, where you let the computer do most of the thinking and merely make yes-no decisions, can be tedious and difficult. As the project progressed I began to suspect that it was not only taking a toll on my mind and concentration, but also having physical effects. Some might say the changes are an improvement over my normal appearance, but my wife did not think so. Note the Borg appear to be somewhat like vampires and prefer the dark. Click on these two short video reports and see for yourself.


Stay tuned for Borg Challenge: Part Four where my search continues using a modified Enlightened Borg approach. For an explanation of these terms see Three-Cylinder Multimodal Approach To Predictive Coding.

10 Responses to Borg Challenge: Part Three where I continue my search through round 16 of machine training

  1. [...] novel. For these videos to make sense you first need to read and watch Part One, Part Two, and Part Three of the Borg Challenge. Even then, who knows? Kafkaesque videos providing predictive coding [...]

  2. [...] every respect, except for methodology. The experiment itself is described in Part One, Part Two, Part Three and Part Four of the Borg Challenge. The results reported in my videos below may surprise [...]

  3. ESC says:

    Hi Ralph,

    Indeed you’re looking a little peaked as you progress. Good method acting!

    I just wanted to clarify: an “iteration” consists of both the human-coded training (batch of 200 docs) and a corresponding machine “Learning Session” initiated right after?

    Thanks for your sacrifice! The federation appreciates your service.

  4. ESC says:

    One more basic question:

    Paraphrasing here, but the 200 doc batch training outlays — given to you by the computer for you to definitively code — are those docs for which the computer needs your input? Are they “chosen” on the basis of their ambiguity to the computer/algorithm? Is this the Borg “thinking”/grasping?

  5. Ralph Losey says:

    Yes, about 80% are so chosen; the other approximately 20% are chosen at random. Thus it qualifies as an Enlightened Borg approach. Note it was also less automated than most Borg approaches in that the victim – me – was still able to use free will to run predictive coding searches. This is a subtle point I am not sure I made clear. So I suppose you could say it was an Enlightened Borg not yet totally assimilated, fighting to save its soul from total machine dominance, i.e., from total automation. I could not bring myself to spend 50 hours in a totally automated Borg system. I feel sorry for reviewers forced to endure that. Perhaps I should form a PTSD counseling group for them.
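    The 80/20 split described above can be sketched in code. This is only an illustrative model of the idea, not the vendor's actual algorithm or API: roughly 80% of each 200-document training batch comes from the documents the classifier is least certain about (predicted probability of relevance nearest 0.5), and the rest is drawn at random.

    ```python
    import random

    def select_training_batch(docs, scores, batch_size=200, random_frac=0.2):
        """Hypothetical Enlightened-Borg-style batch selection.
        `scores` maps each doc id to the classifier's predicted
        probability of relevance. About 80% of the batch is uncertainty
        sampling; about 20% is random sampling."""
        n_random = int(batch_size * random_frac)
        n_uncertain = batch_size - n_random

        # Uncertainty sampling: docs whose score is closest to 0.5.
        by_uncertainty = sorted(docs, key=lambda d: abs(scores[d] - 0.5))
        uncertain = by_uncertainty[:n_uncertain]

        # Random sampling from the remainder guards against blind spots
        # the classifier's own rankings would never surface.
        uncertain_set = set(uncertain)
        remainder = [d for d in docs if d not in uncertain_set]
        random_pick = random.sample(remainder, min(n_random, len(remainder)))

        return uncertain + random_pick
    ```

    The random slice is what makes the approach “Enlightened”: without it, the machine only ever sees documents it already suspects are borderline.
    
    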

  6. ESC says:

    A post trauma review counseling sideline sounds like a great idea! Imagine how many poor souls out there thought they were going into law so they could be paid to think and then they get stuck on a PC project like the one you illustrate. Ugh!

    Yes, I think you identified the Enlightened Borg approach with these examples:

    1. User’s ability to perform “side searches” – checks on how many docs the algorithm had ranked / predicted relevant over a certain %
    “At this point I also ran a special search to see if any 50%+ probable to rank relevant; one new doc predicted relevant 75.7%…”

    2. Computer generates batches of 200 training docs for human to review – but the human can adjust this allotment

    3. Training doc batches selected on a 20% random basis, 80% internal evaluation (of algorithm granted)

    4. anything I’m missing?

    Thank you so much again!

    • Ralph Losey says:

      Not quite. Enlightened Borg is fully automated and uses both random and machine selection to pick the documents for human review. For the sake of my own sanity I modified the Enlightened Borg approach so that it was not fully automated. You correctly identified most of the various activities I inserted that require some independent human judgment. So it is really a Quasi-Hybrid Enlightened Borg approach. The main additional search activities were the side searches using ranking. I did not do any other types of searches, such as keyword or similarity. So it was pure monomodal, just predictive-coding-type search. Thanks for the close study. Just curious whether you do review, and if so, what types of methodology and software you use, who you work for, etc.
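      A side search using ranking, as described above, amounts to a simple threshold filter over the machine's current predictions. A minimal sketch, assuming the scores are probabilities of relevance (the function name and data layout are illustrative, not the software's real interface):

      ```python
      def side_search(scores, threshold=0.5):
          """Hypothetical 'side search': list the documents the model
          currently predicts relevant with probability above `threshold`,
          highest-ranked first. `scores` maps doc id -> predicted
          probability of relevance."""
          hits = [(doc, p) for doc, p in scores.items() if p > threshold]
          return sorted(hits, key=lambda pair: pair[1], reverse=True)
      ```

      For example, with one document scored at 75.7% probable relevant and a 50% threshold, that document would appear in the results, mirroring the “one new doc predicted relevant 75.7%” check quoted earlier.
      
      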

  7. ESC says:

    Sorry about #3, you covered that. Oops.

    4. Quality control “allowed” (vs. Quality Assurance measures baked into the program)

  8. [...] of the Borg Challenge that was previously reported in five installments: Part One, Two, Three, Four and [...]

