There has been a bit of talk lately in the e-discovery echo chamber about fixed-price models for processing, hosting, review and productions. The purported goal of this discussion was to create a stir and drum up business. Yet conspicuously absent from the entire discussion was any talk of total cost, aka value. As a research scientist at Catalyst, I typically do not get involved in discussions like this. However, as there still seems to be a great deal of confusion over value, I felt the need to help sort this out.
First, a bit of my background. I have spent the last 18 years of my professional life developing and applying algorithms to the task of finding relevant information. Currently, I am the senior applied research scientist at Catalyst. I obtained my Ph.D. in computer science with a focus on information retrieval (search engines) from the Center for Intelligent Information Retrieval (CIIR) at UMass Amherst in 2004. I did a postdoc at King’s College London and then spent five years at the Fuji Xerox research lab in Palo Alto (FXPAL) before joining Catalyst in 2010.
The more documents a case involves, the more difficult it is for litigation teams to review and make sense of them. These days, even routine cases can involve many thousands of documents, while more complex cases can involve many millions. For litigators, these mountains of documents present a challenge: How do you uncover the stories the documents contain so that you can prepare your cases for discovery and trial — and do so within the limits of available time and budgets?
This Thursday, Aug. 27, a free webinar will demonstrate how litigators can address these challenges using sophisticated analytics tools. The webinar, “Litigation Analytics: How to Find Information Critical to Your Case,” will show how analytics tools can turn mountains of documents into molehills, enabling litigators to quickly and affordably zero in on what they need to know.
Recently, Bob Ambrogi, our director of communications, published a post called “Our 10 Most Popular Blog Posts of 2015 (So Far).” To my surprise, one of my 2011 posts topped the list: “Shedding Light on an E-Discovery Mystery: How Many Documents in a Gigabyte?” Another on the same topic ranked fourth: “How Many Documents in a Gigabyte? An Updated Answer to that Vexing Question.”
Hmmm. Clearly, a lot of us are interested in knowing the answer to this question. I have received a number of comments on both posts (both in writing and in conversation), which always makes the writing worthwhile. The RAND people told me my findings were also of interest when they were putting together their study on e-discovery costs.
When a large banking institution filed suit alleging that accounting fraud caused it to lose millions from a bad loan, the litigation was sure to turn nasty. Soon enough, the bank faced a major production request from defendants. Even after culling, it had 2.1 million documents to review, and little time or budget to spare.
The bank turned to Insight Predict, Catalyst’s unique Technology Assisted Review platform. Its plan was to employ Predict’s Continuous Active Learning protocol and see if TAR could be effective in further reducing the population. The result: The bank was able to achieve 98% recall after reviewing just 6.4% of the population.
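To put those figures in concrete terms, here is a minimal sketch of how recall and review effort are calculated. The population size and review fraction come from the case above; the responsive-document counts are purely hypothetical numbers chosen for illustration (the case study does not report them):

```python
# Recall = responsive documents found / total responsive documents in the collection.
# Review effort = documents actually reviewed / total documents in the collection.

population = 2_100_000                 # documents remaining after culling (from the case)
reviewed = int(population * 0.064)     # ~6.4% of the population actually reviewed

total_responsive = 100_000             # hypothetical: responsive docs in the collection
found_responsive = 98_000              # hypothetical: responsive docs the review located

recall = found_responsive / total_responsive
review_effort = reviewed / population

print(f"Reviewed {reviewed:,} of {population:,} documents ({review_effort:.1%})")
print(f"Recall: {recall:.0%}")
```

The point of the sketch is simply that recall measures completeness against everything responsive in the collection, not against the subset reviewed, which is why finding 98% of the responsive documents while reading only 6.4% of the collection is a meaningful saving.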
A key debate in the battle between TAR 1.0 (one-time training) and TAR 2.0 (continuous active learning) is whether you need a “subject matter expert” (SME) to do the training. With first-generation TAR engines, this was considered a given. Training had to be done by an SME, which many interpreted as a senior lawyer intimately familiar with the underlying case. Indeed, the big question in the TAR 1.0 world was whether you could use several SMEs to spread the training load and get the work done more quickly.
SME training presented practical problems for TAR 1.0 users—primarily because the SME had to look at a lot of documents before review could begin. You started with a “control” set, often 500 documents or more, to be used as a reference for training. Then, the SME needed to review thousands of additional documents to train the system. After that, the SME had to review and tag another 500 documents to validate the effectiveness of the training. All told, the SME could expect to look at and judge 3,000 to 5,000 or more documents before the review could start.
Last February, we reported here on a proposed ethics opinion from the State Bar of California that would require lawyers who represent clients in litigation either to be competent in e-discovery or associate with others who are competent. At that point, the bar was accepting public comments on the proposed opinion in advance of issuing a final opinion.
Now, that opinion has been finalized and was issued on June 30 as Formal Opinion No. 2015-193. The final opinion largely mirrors the proposed opinion, with only minimal changes. As before, the opinion says that attorneys have a duty to maintain the skills necessary to integrate legal rules and procedures with “ever-changing technology.” In support of that statement, it cites the American Bar Association’s 2012 amendment to the Model Rules that discussed the duty of lawyers to keep abreast of changes in the law, “including the benefits and risks associated with relevant technology.”
E-discovery review has come a long way in a short time. Not long ago, manual, linear review was the norm. Then came keyword search, which helped increase efficiency but was imperfect in its results. Technology-assisted review was a great leap forward, but early TAR 1.0 versions were rigid and slow. Only with the arrival of TAR 2.0 and Continuous Active Learning did TAR finally save the day for e-discovery.
The brief history of how TAR evolved is depicted in a new Catalyst infographic, A TAR is Born: The Making of a Superstar. See how e-discovery review matured from a demanding infant to a Ph.D. in savings. After you check out the infographic, read much more about TAR in Catalyst’s free e-book, TAR for Smart People.
View Infographic >
In his 2015 opinion in Rio Tinto PLC v. Vale SA, Magistrate Judge Andrew Peck extolled the benefits of technology assisted review using Continuous Active Learning. In particular, he noted that CAL reduces or even eliminates the need for the rigid seed set required by older TAR methods.
CAL’s flexibility on seed sets was illustrated in a case where a large banking institution alleged it lost millions due to a borrower’s accounting fraud. In response to the borrower’s production request, the bank faced review of 2.1 million documents, even after culling. With neither the time nor budget to review them all, the bank turned to Catalyst’s Insight Predict, the first commercial TAR engine to use CAL. Predict cut the review by 94%.
Read the case study to see how it was done >>
Erin Harrison, editor-in-chief, Legaltech News; John Tredennick, founder and CEO, Catalyst; Jeremy Pickens, senior applied research scientist, Catalyst; and Tom Larranaga, publisher, Legaltech News, at the awards ceremony.
We are thrilled to announce that at a ceremony last night in San Francisco, Legaltech News named Insight Predict—Catalyst’s next-generation technology assisted review (TAR) product—as the winner in the New Product of the Year category of the Legaltech News Innovation Awards.
The award was announced at a special event at the City Club of San Francisco at the close of Legaltech West Coast.
The LTN Innovation Awards recognize the best legal technology leaders, products and projects in the legal community. Award winners were chosen by a panel of judges consisting of members of Legaltech News’ editorial staff. In addition to recognizing law department and law firm leaders, LTN presented awards to 20 vendors and service providers across a variety of categories.
With the February 2015 release of Catalyst’s Active Review functionality within Insight Predict, Catalyst became the first to integrate continuous active learning (CAL) technology—the next generation of TAR—directly into the review process. Active Review eliminates the traditional separation between linear review and TAR by combining them in a single, integrated workflow.
In the annals of case law about e-discovery and technology assisted review (TAR), Malone v. Kantner Ingredients will be only a footnote. In fact, were it not for a footnote, the case would barely warrant mention here.
This blog has chronicled the increasing judicial acceptance of TAR, starting with U.S. Magistrate Judge Andrew J. Peck’s seminal 2012 opinion in Da Silva Moore v. Publicis Groupe, which was the first to approve TAR, and continuing through to Judge Peck’s recent opinion in Rio Tinto PLC v. Vale SA, which declared, “the case law has developed to the point that it is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it.”