A Discussion About Dynamo Holdings: Is 43% Recall Enough?

In September 2014, Judge Ronald L. Buch became the first to sanction the use of technology assisted review (aka predictive coding) in the U.S. Tax Court. See Dynamo Holdings Limited Partnership v. Commissioner of Internal Revenue, 143 T.C. No. 9. We mentioned it here.

This summer, Judge Buch issued a follow-on order addressing the IRS commissioner’s objections to the outcome of the TAR process, which we chronicled here. In that opinion, he affirmed the petitioners’ TAR process and rejected the commissioner’s challenge that the production was inadequate. In doing so, the judge debunked what he called the two myths of review: that human review is the “gold standard” and that any discovery response is, or can be, perfect. Continue reading

Ask Catalyst: How Can You Validate Without A Control Set?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

I hear you don’t use a control set in your TAR 2.0 processes. If so, how can you validate your results?

Today’s question is answered by Thomas Gricks, managing director of professional services. Continue reading

In Affirming Use of TAR, Tax Court Takes Down ‘Two Myths’ of Review

Two years ago, U.S. Tax Court Judge Ronald L. Buch broke new ground when he became the first judge to formally sanction the use of technology assisted review in the Tax Court. In Dynamo Holdings Limited Partnership v. Commissioner of Internal Revenue, Judge Buch said that TAR “is an expedited and efficient form of computer-assisted review that allows parties in litigation to avoid the time and costs associated with the traditional, manual review of large volumes of documents.”

After receiving Judge Buch’s permission, the petitioners went on to use a TAR process and to use the results of that process to respond to the IRS commissioner’s discovery requests. Earlier this year, believing the response to be incomplete, the commissioner served a new set of discovery requests. When petitioners objected, the commissioner filed a motion to compel, thus bringing the TAR issue before Judge Buch a second time. Continue reading

Judge Peck Declines to Force the Use of TAR

Bob Ambrogi, who serves as director of communications here at Catalyst, posted a detailed analysis yesterday at Bloomberg Law’s Big Law Business of Magistrate Judge Andrew J. Peck’s latest decision involving technology assisted review, Hyles v. New York City. It’s well worth a look.

Hyles is an employment case where the plaintiff wanted the court to force New York City to use TAR rather than its proposed search terms. Even though Judge Peck emphasized “that in general, TAR is cheaper, more efficient and superior to keyword searching,” he nevertheless declined to force the defendants to use TAR, finding that it hasn’t yet displaced other tools to the point where using something else is unreasonable. Further, the Sedona Principles state:

Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.

Continue reading

Ask Catalyst: Why Can’t You Tell Me Exactly How Much TAR Will Save Me?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

Why can’t you tell me exactly how much I’ll save on my upcoming review project by using technology assisted review?

Today’s question is answered by Mark Noel, managing director of professional services.  Continue reading

Ask Catalyst: What Is ‘Supervised Machine Learning’?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

What is supervised machine learning?

Today’s question is answered by Dr. Jeremy Pickens, senior applied research scientist. Continue reading

Video: The Three Types of E-Discovery Search Tasks and What They Mean for Your Workflow

Not all search tasks are created equal. Sometimes we need to do a reasonable and cost-effective job of finding the majority of relevant documents, sometimes we need to be 100 percent certain that we’ve found every last bit of sensitive data, and sometimes we just need the best examples of certain topics to tell us what’s happening or to use as evidence. This video explains these three broad categories of search tasks; the differences in recall, precision and relevance objectives for each; and the implications for choosing tools and workflows.
Continue reading
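
As a rough illustration of the recall and precision trade-off the video describes, the sketch below computes both metrics from hypothetical review counts (the figures are made up for illustration and are not taken from the video):

```python
def recall(relevant_found, relevant_total):
    """Fraction of all relevant documents in the collection that the review found."""
    return relevant_found / relevant_total

def precision(relevant_found, documents_reviewed):
    """Fraction of the documents reviewed that turned out to be relevant."""
    return relevant_found / documents_reviewed

# Hypothetical example: a collection containing 10,000 relevant documents.
# The team reviews 30,000 documents and finds 8,500 of the relevant ones.
print(f"Recall:    {recall(8_500, 10_000):.0%}")     # 85% -- most relevant documents were found
print(f"Precision: {precision(8_500, 30_000):.1%}")  # 28.3% -- many non-relevant documents were reviewed
```

A task aimed at finding every last bit of sensitive data would push recall toward 100 percent even at the cost of precision, while a “best examples” task can tolerate low recall as long as the top-ranked documents are on point.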

Ask Catalyst: If We Are Halfway Through Review, Can We Still Use TAR?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

We are halfway through a document review. It is taking us longer than we anticipated and we are running short on time. We are considering using technology assisted review to help expedite the remainder of the project. Our question is whether it would make sense to start using TAR at this stage.

Today’s question is answered by Bob Ambrogi, director of communications. Continue reading

Ask Catalyst: How Does Insight Predict Handle ‘Bad’ Decisions By Reviewers?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

I understand that the QC feature of Insight Predict shows outliers between human decisions versus what Predict believes should be the result. But what if the parties who performed the original review that Predict is using to make judgments were making “bad” decisions? Would the system just use the bad training docs and base decisions just upon those docs?

Similarly, what about the case where half the team is making good decisions and half the team is making bad decisions? Can Insight learn effectively when being fed disparate results on very similar documents?

Can you eliminate the judgments of reviewers if you find they were making poor decisions to keep the system from “learning” bad things and thus making judgments based on the human errors?

Today’s question is answered by Mark Noel, managing director of professional services.  Continue reading
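
As a generic illustration of how disagreement-based QC can work (a hypothetical sketch, not Insight Predict’s actual algorithm), one approach is to flag documents where a reviewer’s coding call sits far from the model’s predicted relevance score:

```python
# Hypothetical disagreement-based QC sketch; not Insight Predict's implementation.
reviews = [
    # (document id, reviewer tag: 1 = relevant / 0 = not relevant, model score 0..1)
    ("DOC-001", 1, 0.92),
    ("DOC-002", 0, 0.88),  # reviewer says not relevant, model strongly disagrees
    ("DOC-003", 0, 0.07),
    ("DOC-004", 1, 0.11),  # reviewer says relevant, model strongly disagrees
]

THRESHOLD = 0.7  # how far apart the human tag and model score must be to flag a document

outliers = [(doc_id, tag, score)
            for doc_id, tag, score in reviews
            if abs(tag - score) > THRESHOLD]

for doc_id, tag, score in outliers:
    print(f"{doc_id}: reviewer tagged {tag}, model score {score:.2f} -> route to QC")
```

In a workflow like this, the flagged documents go back for a second look, so systematically poor calls can be corrected before they distort training.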

Video: Understanding Yield Curves in Technology Assisted Review

In information retrieval science and e-discovery, yield curves (also called gain curves) are graphic visualizations of how quickly a review finds relevant documents or how well a technology assisted review tool has ranked all your documents.

This video shows you how they work and how to read them to measure and validate the results of your document review. Continue reading
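
For readers who want to see the mechanics, a yield (gain) curve can be built by walking down a ranked document list and counting relevant hits as you go; the steeper the curve climbs at the start, the better the ranking. The sketch below uses made-up data and illustrates only the concept, not the tool shown in the video:

```python
import matplotlib.pyplot as plt

def yield_curve(ranked_relevance):
    """Cumulative count of relevant documents found as a review proceeds
    down a ranked list (1 = relevant, 0 = not relevant)."""
    curve, found = [], 0
    for is_relevant in ranked_relevance:
        found += is_relevant
        curve.append(found)
    return curve

# Hypothetical ranking: most relevant documents near the top, as a
# well-trained TAR tool should produce.
ranking = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
curve = yield_curve(ranking)

plt.plot(range(1, len(ranking) + 1), curve, label="Ranked review")
plt.plot([0, len(ranking)], [0, sum(ranking)], "--", label="Random review (baseline)")
plt.xlabel("Documents reviewed")
plt.ylabel("Relevant documents found")
plt.legend()
plt.show()
```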