Author Archives: John Tredennick

About John Tredennick

A nationally known trial lawyer and longtime litigation partner at Holland & Hart, John founded Catalyst in 2000 and is responsible for its overall direction, voice and vision.

Well before founding Catalyst, John was a pioneer in the field of legal technology. He was editor-in-chief of the multi-author, two-book series Winning With Computers: Trial Practice in the Twenty-First Century (ABA Press 1990, 1991). Both were ABA best sellers focused on the use of computers in litigation. At the same time, he wrote How to Prepare for, Take and Use a Deposition at Trial (James Publishing 1990), which he and his co-author continued to supplement for several years. He also wrote Lawyer’s Guide to Spreadsheets (Glasser Publishing 2000) and Lawyer’s Guide to Microsoft Excel 2007 (ABA Press 2009).

John has been widely honored for his achievements. In 2013, The American Lawyer named him one of the top six “E-Discovery Trailblazers” in its special issue on the “Top Fifty Big Law Innovators” of the past fifty years. In 2012, he was named to the Fastcase 50, which recognizes the smartest, most courageous innovators, techies, visionaries and leaders in the law. London’s CityTech magazine named him one of the “Top 100 Global Technology Leaders.” In 2009, he was named the Ernst & Young Entrepreneur of the Year for Technology in the Rocky Mountain Region, and in the same year the Colorado Software and Internet Association named him its Top Technology Entrepreneur.

John is the former chair of the ABA’s Law Practice Management Section. For many years, he was editor-in-chief of the ABA’s Law Practice Management magazine, a monthly publication focusing on legal technology and law office management. More recently, he founded and edited Law Practice Today, a monthly ABA webzine that focuses on legal technology and management. Over two decades, John has written scores of articles on legal technology and spoken about it to audiences on four continents. In his spare time, you will find him competing on the national equestrian show jumping circuit.

Ask Catalyst: Is Recall A Fair Measure Of The Validity Of A Production Response?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week’s question:

Is recall a fair measure of the validity of a production response?

Today’s question is answered by John Tredennick, founder and CEO. Continue reading
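
For readers who want the arithmetic behind the question: recall is the fraction of all responsive documents in a collection that a production actually captures. A minimal sketch in Python, with invented counts purely for illustration:

# Recall measures completeness: of everything responsive in the
# collection, how much did the production capture?
def recall(produced_responsive: int, total_responsive: int) -> float:
    return produced_responsive / total_responsive

# Hypothetical counts: a validation sample estimates 10,000 responsive
# documents in the collection, and the production contains 4,300 of them.
print(f"Recall: {recall(4_300, 10_000):.0%}")  # Recall: 43%

The harder question the post takes up is whether that single number, standing alone, is a fair test of a production.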

A Discussion About Dynamo Holdings: Is 43% Recall Enough?

In September 2014, Judge Ronald L. Buch became the first to sanction the use of technology assisted review (aka predictive coding) in the U.S. Tax Court. See Dynamo Holdings Limited Partnership v. Commissioner of Internal Revenue, 143 T.C. No. 9. We mentioned it here.

This summer, Judge Buch issued a follow-on order addressing the IRS commissioner’s objections to the outcome of the TAR process, which we chronicled here. In that opinion, he affirmed the petitioner’s TAR process and rejected the commissioner’s challenge that the production was inadequate. In doing so, the judge debunked what he called the two myths of review: that human review is the “gold standard” and that any discovery response is, or can be, perfect. Continue reading

Ask Catalyst: How Much Storage Do I Need for 600,000 Scanned Documents? (Or, How Many TIFFs in a GB?)

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

We received this question:

How much data storage do I need to store 600,000 scanned documents?

Today’s question is answered by John Tredennick, founder and CEO.
Continue reading
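
While the full answer is behind the link, the arithmetic is easy to sketch once you settle on a per-page size. A back-of-the-envelope estimate in Python; the pages-per-document and kilobytes-per-page figures are assumptions chosen for illustration, not numbers taken from the answer:

# Rough storage estimate for a collection of scanned documents.
docs = 600_000
pages_per_doc = 5   # assumption; averages vary widely by matter
kb_per_page = 50    # assumption; typical for a black-and-white Group IV TIFF page
total_gb = docs * pages_per_doc * kb_per_page / (1024 ** 2)
print(f"Estimated storage: {total_gb:,.0f} GB")  # ~143 GB under these assumptions

The per-page figure is the real driver: color scans or higher resolutions can multiply the estimate several times over.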

How Many Documents in a Gigabyte: 2016 Edition

Readers of our blog will know that I have a continuing interest in answering the perennial e-discovery question: “How many native documents are in a gigabyte?” I started thinking about this in 2011 and published my first article on the subject based on an analysis of 18 million files. Challenging industry assumptions, which ran from 5,000 to as many as 15,000 documents per gigabyte, I concluded that the average across all files, based on that sample, was closer to 2,500. Continue reading
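
One way to sanity-check any documents-per-gigabyte figure is to invert it: every such assumption implies an average file size. A quick sketch in Python using the figures mentioned above:

# Invert documents-per-gigabyte assumptions into the average
# file size each one implies.
KB_PER_GB = 1024 ** 2
for docs_per_gb in (2_500, 5_000, 15_000):
    print(f"{docs_per_gb:>6,} docs/GB implies ~{KB_PER_GB / docs_per_gb:,.0f} KB per file")
# Output:
#  2,500 docs/GB implies ~419 KB per file
#  5,000 docs/GB implies ~210 KB per file
# 15,000 docs/GB implies ~70 KB per file

Whether 419 KB is a plausible average for native files is exactly the kind of question the underlying analysis was meant to answer.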

Gigabyte Expansion in E-Discovery Hosting: You Get What You Pay for; You Pay for What You Get

An old friend called me recently to talk about a beef he had with his e-discovery provider. “What’s up?” I asked when I realized who it was. He told me he thought he had done everything right in setting up his last e-discovery project. He sent out an RFP to several vendors, asked all the right questions and then picked the bidder with the lowest per-gigabyte price to host the documents. Everything seemed like it was on track.

“So what’s wrong with that?” I asked. “You went for the low bidder and locked them in with an ironclad contract. Getting hosting for that kind of per-gigabyte price seems like a steal.”

My friend sighed in response. “What happened was that I didn’t read the fine print.” Continue reading
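
Without spoiling the ending, the fine print in hosting agreements often turns on when the gigabytes are measured. Compressed containers such as PSTs and ZIPs expand during processing, so a bargain rate applied to expanded (hosted) gigabytes can cost more than a higher rate applied to gigabytes as collected. A hypothetical comparison in Python, with every number invented for illustration:

# Hypothetical illustration of how data expansion changes the effective price.
collected_gb = 100
expansion_factor = 3.0  # assumption; compressed containers often expand severalfold
hosted_gb = collected_gb * expansion_factor

low_bid = 8    # $/GB, billed on expanded (hosted) gigabytes
high_bid = 15  # $/GB, billed on gigabytes as collected

print(f"Low bid:  ${low_bid * hosted_gb:,.0f}/month")      # $2,400
print(f"High bid: ${high_bid * collected_gb:,.0f}/month")  # $1,500

The lesson is the title: you pay for what you get, and what you get hosted is often a multiple of what you collected.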

In First for UK, High Court Master Approves Use of TAR

Taking his lead from the seminal U.S. case, Da Silva Moore v. Publicis Groupe, a master of Britain’s High Court of Justice has approved the use of technology assisted review, in what is the first case to do so in the United Kingdom and only the second outside the U.S. to approve TAR.

In a written decision issued Feb. 16, 2016, in the case Pyrrho Investments Ltd. v. MWB Property Ltd., Master Matthews, whose role is similar to that of a magistrate judge in the U.S. federal court system, explained his reasons for approving the parties’ request to use TAR in a case involving some 3.1 million electronic documents. Continue reading

Moving Beyond the EDRM: Creating a Mental Model of the E-Discovery Process

In the user interface (UI) and user experience (UX) world, one of the ways people design successful software is through the creation of a “mental model” of the underlying processes. Mental models have been around since the 1940s and have been applied to many kinds of processes, but the concept caught hold in software because it gave designers a framework for understanding user needs and the problems they were trying to solve.

According to Jakob Nielsen, one of the pioneers of Internet usability, “mental models are one of the most important concepts in human-computer interaction.” We use them to inform our software design, and we wanted to share one we created to model the e-discovery process. Continue reading

Assessing Adequacy of Production in a TAR 1.0 Review: Further Lessons from Rio Tinto and a Chance to Do a Little Fishing

Last March, we wrote about U.S. Magistrate Judge Andrew J. Peck’s decision in Rio Tinto PLC v. Vale SA (S.D.N.Y. March 3, 2015). The decision focused on the types of disputes over process that can arise when parties negotiate a TAR 1.0 protocol. In that post, we noted with approval Judge Peck’s acknowledgment that one common bone of contention in TAR 1.0 negotiations, transparency around training and the seed set, becomes less of an issue when the TAR methodology uses continuous active learning.

If the TAR methodology uses ‘continuous active learning’ (CAL) (as opposed to simple passive learning (SPL) or simple active learning (SAL)), the contents of the seed set is much less significant.

After issuing his opinion, and doubtless facing continuing squabbles among the parties, Judge Peck appointed Maura Grossman to serve as a special master to resolve discovery disputes relating to the parties’ use of TAR. Several months later, she issued a “Stipulation and Order re: Revised Validation and Audit Protocols for the Use of Predictive Coding in Discovery,” which is the subject of this blog post. Continue reading
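
For readers new to the distinction, the reason the seed set matters less under continuous active learning is that the model is retrained after every batch of reviewer judgments, so the initial seeds are quickly outweighed by everything judged afterward. Here is a toy simulation in Python; it is schematic only, not Insight Predict’s or anyone else’s actual algorithm:

# Toy continuous active learning (CAL) loop. "Documents" are random numbers
# and the "model" is a nearest-centroid scorer; the point is the control flow:
# retrain after every batch, always review the top-ranked unjudged documents.
import random

random.seed(1)
docs = [random.random() for _ in range(1000)]  # stand-in documents
is_relevant = lambda d: d > 0.9                # hidden ground truth (toy)

seed = random.sample(docs, 4) + [max(docs)]    # arbitrary seed set with one known hit
judged, batch = {}, seed
while batch:
    found = sum(judged.setdefault(d, is_relevant(d)) for d in batch)  # "review" the batch
    if found == 0 and len(judged) > len(seed):
        break                                  # crude stopping rule: a batch with no hits
    relevant = [d for d, r in judged.items() if r]
    centroid = sum(relevant) / len(relevant)   # "retrain" on every judgment so far
    unjudged = [d for d in docs if d not in judged]
    unjudged.sort(key=lambda d: abs(d - centroid))
    batch = unjudged[:20]                      # next batch: top-ranked unjudged documents

print(f"Reviewed {len(judged)} of {len(docs)} documents; "
      f"found {sum(judged.values())} of {sum(map(is_relevant, docs))} relevant")

By the end of a run like this, the handful of seed documents is a sliver of the judgments the model has absorbed, which is why Judge Peck viewed seed-set transparency as a diminished concern under CAL.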

An Open Look at Keyword Search vs. Predictive Analytics

Can keyword search be as effective as, or even more effective than, technology assisted review at finding relevant documents?

A client recently asked me this question and it is one I frequently hear from lawyers. The issue underlying the question is whether a TAR platform such as our Insight Predict is worth the fee we charge for it.

The question is a fair one, and it can apply to a range of cases. The short answer, drawing on my 20-plus years of experience as a lawyer, is, unequivocally: “It depends.” Continue reading

Revisiting the Blair and Maron Study: Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System

Whether at Sedona, Georgetown, Legaltech or any of the many other discovery conferences one might attend, a common debate centers on the efficacy of keyword search. “Keyword search is dead,” some argue, touting the effectiveness of the newer predictive analytics engines. “Long live keyword search,” comes the reply from lawyers who have relied on it for decades, both to find legal precedent and, more recently, relevant documents for their cases.

Often, the critics of keyword searching cite the 1985 Blair and Maron study for the Association for Computing Machinery, which suggested that full-text retrieval systems brought back only 20 percent of the relevant documents. That assertion is true, but I wonder how many of the debaters have ever read the study itself. My guess is not many, and until recently that included me. So I decided to give it a read. Continue reading