An On-the-Record Colloquy about Predictive Coding With Judge Peck

U.S. Magistrate Judge Andrew J. Peck

We all talk all the time about predictive coding, but it is not often that you get a perspective on it directly from the battle-scarred trenches of high-stakes litigation. Over at Law Technology News, editor Sean Doherty reports on a recent hearing before U.S. Magistrate Judge Andrew J. Peck of the Southern District of New York in which Judge Peck ordered the parties to adopt a protocol for e-discovery that includes the use of predictive coding. It appears to be the first federal case to formally endorse the use of predictive coding, Doherty writes.

The case, Monique Da Silva Moore v. Publicis Groupe, is a class action alleging widespread discrimination against women employed by one of the world’s “big four” advertising conglomerates. In a sometimes contentious Feb. 8 teleconference with Judge Peck — of which LTN has published the transcript — the parties debate sanctions and various other e-discovery issues before getting down to the brass tacks of predictive coding. The hearing starts with Judge Peck showing little patience for the parties’ inability to cooperate. At one point, the exasperated judge declares to plaintiffs’ counsel:

Stop. Please. I take judicial notice of the fact that you don’t like the defendants. Stop whining and let’s talk substance. I don’t care how we got here and I’m not giving anyone money today. In the future not only will there be sanctions for whoever wins or loses these discovery disputes — and so far you’re one for two, I think — there will be sanctions payable to the clerk of court for wasting my time because you can’t cooperate.

But as the hearing progressed, Judge Peck seemed to appreciate that there were two e-discovery consultants on the call: Paul Neale, CEO of DOAR Litigation Consulting, for the plaintiffs, and David Baskin, a vice president at Recommind, for the defendants. The two consultants had participated in negotiations among the parties to draft a protocol for using predictive coding to identify documents relevant to the discrimination claim.

Still, the transcript is replete with examples of how convoluted the conversation can get when it comes to search parameters and the use of predictive coding. Consider the following exchange about search with regard to the emails of “comparators” — men who performed jobs comparable to those of the women plaintiffs.

THE COURT: This is a case where the plaintiffs worked at the company. What is it that you expect to see in the comparators’ email that is relevant? Describe the concepts to me. Frankly, I don’t disagree that whether they are comparators or not is a relevant issue, but I don’t see why, if you want to find out what their job duties were and these people have no stake in the case, you don’t just take their deposition.

MS. BAINS: We do want to take their depositions. To answer your question about the specific things we would be looking for, for example, one of the plaintiffs testified about her job duties, including client contact. We would look for client contact in the comparators.

THE COURT: That’s ridiculous. That means basically forget sophisticated searches, any email from one of these comparators to or from a client is relevant?

MS. BAINS: I mean on the substantive issues regarding contacts.

THE COURT: How do you train a computer for that? How do you do a key word on that? I’m having a very hard time seeing what it is you expect. You’ve got the plaintiffs’ emails. If you don’t have their emails, you have their memory of them. If comparator whoever, Kelly Dencker, I don’t know if that is a he Kelly or a she Kelly, but if Kelly wrote to a client and said, I’d like to meet with you next week to discuss the following presentation, that’s what you’re looking for?

MS. BAINS: That would be part of it.

THE COURT: What else? You keep giving me this is part of it. If you want me to order this done, you’ve got to tell me how it is that it could be done in a reasonable way.

MS. BAINS: I think we could treat the comparators as a separate search.

THE COURT: Then what is that search going to be? Also, by the way, we’ve gone from throw the comparators into the bundle but do a little key word screening first to reduce volume to now we are at the let’s do the comparators separate, and I’m still not hearing how you’re going to search through their emails separately.

MS. BAINS: One of our allegations is that they were given opportunities, including job assignments, etc., that plaintiffs weren’t.

THE COURT: That is basically every substantive email, every business email they have. All right, comparators are out at this time without prejudice to you coming up with some scientific way to get at this. Otherwise, take the deposition and go from there.
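It is worth pausing on why the judge balks at keyword screening here: a cull built around a concept as broad as “client contact” matches nearly every business email, so it barely reduces the review volume at all. A toy sketch in Python makes the point (the terms and messages are invented for illustration, not drawn from the case):

    # A keyword cull built from an overbroad concept such as "client
    # contact" passes almost every business email through the filter.
    emails = [
        "I'd like to meet with you next week to discuss the presentation",
        "Attached is the revised budget for the client pitch",
        "Lunch on Friday?",
    ]
    terms = ["meet", "client", "presentation", "discuss"]

    surviving = [e for e in emails if any(t in e.lower() for t in terms)]
    print(f"{len(surviving)} of {len(emails)} emails survive the cull")
    # Prints: 2 of 3 emails survive the cull

With terms that broad, the cull does no real work, which is why the judge steers counsel toward depositions instead.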

From there, the transcript evolves into a lengthy colloquy about the parties’ draft protocol for predictive coding. Whereas Judge Peck seemed impatient earlier in the hearing, here he engages in an in-depth back-and-forth with the two search consultants and counsel, trying to fully understand each side’s position. Here is an excerpt (a sketch of the sampling numbers and the disputed workflow follows it):

MR. BASKIN: Judge, from what I understand, the request is not to do the random sample iterations, finish the iterations. I’m still not understanding.

THE COURT: What they are saying is each time you run it, whether it’s 7 or less, and it may be two different things to satisfy yourself on the defense side and something else to satisfy the plaintiffs, but whether you do the 500 best documents or not, the 500 and possibly more, Mr. Neale was suggesting that on each iteration there is a random sample drawn and the computer will have coded some of those as relevant and some of them as not relevant; and if it is miscoding the documents that are not relevant, then there’s a problem.

MR. BASKIN: Let me clarify. The computer doesn’t code documents. The computer suggests documents that are potentially relevant or similar.

THE COURT: Same thing.

MR. BASKIN: What happens is during the seven iterations, all the defense attorneys are going to do is refine the documents that they are looking at. After the seven iterations, what you are getting is a sum of it all. Then you are performing a random sample. Doing random samples in between makes no sense. The actual sum of the seven iterations will just be the sum of that. You are refining and learning.

THE COURT: What Mr. Neale is saying is that you might not have to do it seven times and that the sooner you find out how well the seed set or the training has worked, the better.

MR. BASKIN: What’s going to happen, at least from what I understand the request to be, is that you do one iteration, which is 500, then you do 2399 samples, then you do another iteration, do another 2399. I think they are looking for the 7 times 2400 plus the 500 each. We are looking at 21,000.

MR. NEALE: That’s not what we are suggesting. We are actually suggesting that each iteration be one sample randomly selected of 2399, indicating which of those the system would have flagged as relevant so we know the difference in the way in which it is being categorized.

MR. ANDERS: I would think, too, we are now just completely missing the power of the system. What we were going to review at each iteration are the different concept groups where the computer is taking not only documents it thinks are relevant but it has clustered them together and we can now focus on what is relevant to this case. By reverting back to a random sample after each iteration, we are losing out on all the ranking and all the other functionality of this system. It doesn’t seem to make sense to me.

THE COURT: I’m not sure I understand the seven iterations. As I understand computer-assisted review, you want to train the system and stabilize it.

MR. BASKIN: If I may. What happens when you seed the particular category is you take documents, you review them. The relevant documents are now teaching the system that these are good documents.

THE COURT: Right.

MR. BASKIN: It also takes the irrelevant documents and says these are not good documents. It continues to add more relevant documents and less irrelevant documents into the iterations. The seven iterations will then refine that set and continue to add the responsive documents to each category. At the end of that, after seven iterations, you will have not only positive responsive documents, also the nonresponsive documents, but the last set of computer-suggested documents the system suggests. From that point the defense is saying we can then verify with a 95 percent plus or minus 2 of 2399 to see if there is anything else that the system did not find.

THE COURT: Let me make sure I understand the iterations then. Is the idea that you are looking at different things in each iteration?

MR. BASKIN: Correct. It’s learning from the input by the attorneys. That’s the difference. That’s why the random sample makes no sense.

MR. NEALE: I don’t doubt that that is how Recommind proposes to do it. Other systems are, however, —

THE COURT: We are stuck with their black box.

MR. NEALE: — fine to do it.

MR. BASKIN: It’s not a black box. We actually show everything that we are doing.

THE COURT: I’m using “black box” in the legal tech way of talking. Let’s try it this way, then we’ll see where it goes. To the extent there is a difference between plaintiffs’ expert and the defendants’ on what to do — and to the extent I’m coming down on your side now, on the defense side, that doesn’t give you a free pass — random sample or supplemented random sample, once you tell me and them the system is trained, it’s in great shape, and there are not going to be very many documents, there will be some but there are not going to be many, coded as irrelevant that really are relevant, and certainly there are not going to be any documents coded as irrelevant that are smoking guns or game changers, if it turns out that that is proved wrong, then you may at great expense have to redo everything and do it more like the way Mr. Neale wants to do it or whatever.

For the moment, since I think I understand the training process, and going random is not necessarily going to help at that stage, and since Mr. Neale and the lawyers for the plaintiffs are going to be involved with you at all of these stages, let’s see how it develops.
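Two numbers in that exchange reward unpacking. Baskin’s 21,000 is just the arithmetic of the request as he understands it: seven iterations of roughly 500 attorney-reviewed documents plus a 2,399-document random sample each round, or seven times (2,400 plus 500), roughly 20,300 reviews. And the recurring 2,399 is essentially the textbook sample size for estimating a proportion at a 95 percent confidence level with a plus-or-minus 2 percent margin of error, which is what Baskin is alluding to. A quick check in Python, with the finite-population correction computed against a three-million-document collection (an assumed figure, on the order of what was reported in this case):

    def sample_size(z=1.96, margin=0.02, p=0.5, population=None):
        # Classic sample size for a proportion: n0 = z^2 * p * (1-p) / margin^2.
        n = (z ** 2) * p * (1 - p) / margin ** 2
        if population is not None:
            # The finite-population correction shaves the figure slightly.
            n = n / (1 + (n - 1) / population)
        return round(n)

    print(sample_size())                      # 2401 for an unbounded collection
    print(sample_size(population=3_000_000))  # 2399 for ~3 million documents

The sharper dispute is about when that sample gets drawn. On the defense proposal, attorneys review the engine’s suggested documents for seven iterations and validate with a single random sample at the end; the plaintiffs want an audit sample at each iteration so they can see what the system is miscoding while it is still being trained. Here is a minimal sketch of the two workflows, with attorney_review() and rank() as hypothetical stand-ins for the manual coding and the engine’s relevance ranking (an illustration only, not Recommind’s actual interface):

    import random

    def run_protocol(collection, attorney_review, rank, iterations=7,
                     batch=500, audit_size=2399, audit_each_round=False):
        # Toy model of the competing proposals. attorney_review(doc) returns
        # True if the document is relevant; rank(training, docs) returns the
        # documents ordered by predicted relevance. Both are invented here.
        training = []                  # (document, relevant?) decisions
        unreviewed = list(collection)
        audits = []
        for _ in range(iterations):
            # Review the engine's top-ranked suggestions and feed the
            # decisions back, so each round refines the next ranking.
            for doc in rank(training, unreviewed)[:batch]:
                training.append((doc, attorney_review(doc)))
                unreviewed.remove(doc)
            if audit_each_round:
                # Plaintiffs' position: sample after every iteration to see
                # how the not-yet-stable system is treating the collection.
                audits.append(random.sample(unreviewed, audit_size))
        # Defense position: one validation sample at the end, drawn from what
        # the trained system left behind, to catch anything it missed.
        audits.append(random.sample(unreviewed, audit_size))
        return training, audits

Judge Peck’s resolution amounts to letting the defense run this loop with audit_each_round left off, on the understanding that if the final sample turns up relevant documents the system coded as irrelevant, the whole exercise may have to be redone closer to the way Mr. Neale wants.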

Eventually, the parties work through to an agreement on the predictive-coding protocol. But before the discussion is concluded, Judge Peck gets in what he describes as a “tweak” about the Recommind predictive-coding patent that caused such a stir when it was announced last year. Here is how it went:

MR. BASKIN: The system could return 300 documents in the first iteration. At that point you can’t do 2399. I’m actually impartial. I designed the system. I work for the company, and I’m not getting paid for this. I just wanted to let you know that 7 iterations from a quality perspective is better to the plaintiff.

MR. NEALE: It is also inconsistent with your patent, which suggests that you do the iterations until the system tells you it’s got it right. Speaking to the limit on that without having done it is not consistent with your own patent and with what is generally accepted as best practice.

THE COURT: They also claim to have a patent on the word “predictive coding” or a trademark or a copyright. We know where that went in the industry. But I’m just tweaking you.

Near the end of the transcript, as the hearing is wrapping up, Judge Peck states, “This may be for the benefit of the greater bar, but I may wind up issuing an opinion on some of what we did today.” If he does issue an opinion regarding predictive coding, it will no doubt be a milestone in the industry’s adoption of the technique.
