Continuing on the subject of review, the largest cost driver within the EDRM, it bears repeating that reducing review costs doesn't stop at the beginning of the project. To consistently reduce review costs, hidden cost drivers such as rework and throughput must be managed throughout the review.
So, returning to this always timely subject, let's revisit the all-important working question: How do our choices during a review affect our total cost?
Thus far, we've analyzed the hidden cost drivers of review and the role of successful project ramp-up in reducing the total cost of review (TCR). Today, let's see how we can further streamline review and reduce total costs through reviewer optimization and coding consistency.
Establishing the Numbers
Let's recap the assumptions we will continue to work from. In this scenario, we have 100,000 documents that need to be reviewed, 15,000 of which will need to be re-reviewed (reworked), at a throughput of 40 docs/hour and a billing rate of $42/hour. With rework accounted for, this results in 115,000 effective documents reviewed.
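To keep the math transparent, here is a minimal sketch, in Python, of how the baseline cost works out from the figures above. The variable names are ours for illustration, not a formal TCR model:

```python
# Baseline review-cost arithmetic from the scenario above (illustrative only).
docs_to_review = 100_000   # documents in the review population
rework_docs = 15_000       # documents expected to be re-reviewed
throughput = 40            # documents reviewed per hour
rate = 42                  # reviewer billing rate, $/hour

effective_docs = docs_to_review + rework_docs   # 115,000 effective documents
review_hours = effective_docs / throughput      # 2,875 hours
total_cost = review_hours * rate                # $120,750

print(f"Effective documents: {effective_docs:,}")
print(f"Review hours:        {review_hours:,.0f}")
print(f"Total cost:          ${total_cost:,.0f}")
```

That $120,750 baseline is the number the optimizations below will be measured against.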

In our previous article, How to Effectively Run a Review: Reduce Overall Costs by Up to 30% with Successful Project Ramp Up & Management, we took this same scenario and looked at how optimizing your review team from the start, and managing it effectively throughout the project, can produce significant cost savings – upwards of 30 percent.
In today's exercise, the question we'll visit is: What technology optimizations can you make throughout the review project to increase cost savings?
Fine Tuning the Review Process
In our previous Groundhog Day webinar, Senior Project Advisor Robin LeDonne discussed enhancements that can increase the speed of any review project. "You have your review team, but you want to see what you can do to get the review moving quicker," Robin stated. Tech strategies like persistent highlighting – not just of documents that hit on a privilege term, but also of those that hit on one of the agreed-upon terms – can help the review team identify the terms quickly. Rather than simply batching out documents for review based on hits, it's best to think about how to batch more strategically, such as:
- Keeping contextually similar documents batched together
- Batching by chronology and/or custodian
Reviewing documents in this way turns the review into more of a story, which helps the team not only review the documents but also build a strategy for how the overall case can be handled or tried.
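As a rough illustration of the batching ideas above, here is a minimal Python sketch that groups documents by custodian and sorts them chronologically before carving out batches. The document fields (custodian, sent_date) and batch size are hypothetical stand-ins, not a specific review-platform workflow:

```python
from itertools import groupby
from operator import itemgetter

def strategic_batches(documents, batch_size=500):
    """Batch documents by custodian, then chronologically within custodian.

    `documents` is a list of dicts with hypothetical 'custodian' and
    'sent_date' keys; real platforms expose similar metadata fields.
    """
    ordered = sorted(documents, key=itemgetter("custodian", "sent_date"))
    batches = []
    for _, group in groupby(ordered, key=itemgetter("custodian")):
        docs = list(group)
        # Carve each custodian's chronological run into review batches,
        # so contextually similar documents stay together.
        for i in range(0, len(docs), batch_size):
            batches.append(docs[i:i + batch_size])
    return batches
```

The point of the sort-then-chunk design is simply that each batch reads as a contiguous slice of one custodian's timeline rather than a random grab bag of hits.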
It is also important to think through the coding panel. We have found the best results come from keeping it as simple as possible, with well-defined coding descriptions. It is ideal, if possible, to leave the hot document determination to the subject matter experts and then look for similar documents. "I've been on projects where the team says when you run across a hot document, go ahead and tag it, but what I've seen happen is one of two things: every document is suddenly tagged 'hot' or 'key' or nothing is tagged," Robin explained.
Can These Optimizations Be Backed Up?
As you can see, by doing something as simple as batching strategically, you can increase review throughput by 30%, cut the rework (QC) rate by 3 percentage points, and, in this example, lower the cost of review by roughly $30K.
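Using the same baseline as before, here is a minimal sketch of how those improvements translate into roughly $30K of savings, assuming the 30% gain applies to throughput and the 3-point drop applies to the rework rate:

```python
docs = 100_000
rate = 42                                            # $/hour

# Baseline: 15% rework rate, 40 docs/hour.
baseline_cost = docs * 1.15 / 40 * rate              # $120,750

# With strategic batching: throughput up 30%, rework down 3 points to 12%.
optimized_cost = docs * 1.12 / (40 * 1.30) * rate    # ~$90,462

print(f"Savings: ${baseline_cost - optimized_cost:,.0f}")   # ~$30,288
```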

How Can We Keep Coding Consistent?
The last cost driver in review is coding consistency – that is, the ways we can keep coding consistent and thus reduce the number of documents that need to be re-reviewed because of coding inconsistencies.
How the coding panel is structured can determine whether it is streamlined and whether it flows with what reviewers should be looking for within each document. An example would be placing the tags that must be applied to every reviewed document near the top of the coding panel. "I personally like using nested choices and event handlers, such as ensuring a choice is made by a reviewer before moving on to the next document," Robin stated.
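The "require a choice before moving on" idea can be sketched generically. This is illustrative pseudologic in Python, not Relativity's event-handler API; the field names are hypothetical:

```python
REQUIRED_FIELDS = ["responsiveness", "privilege"]   # hypothetical panel fields

def can_advance(coding_decision: dict) -> bool:
    """Mimic a pre-save check: block moving to the next document
    until every required coding choice has been made."""
    missing = [f for f in REQUIRED_FIELDS if not coding_decision.get(f)]
    if missing:
        print(f"Please code before advancing: {', '.join(missing)}")
        return False
    return True

# A reviewer who skipped the privilege call gets stopped:
print(can_advance({"responsiveness": "Responsive"}))   # False
```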
Dashboards are also ideal for catching issues before they become a significant QC problem. Dashboards track review progress in real time and can help identify reviewers who may need additional coaching on the material. They can also tell us more about the data we are reviewing, such as whether the data collected is behaving as anticipated – i.e., whether it is relevant and pertains to the anticipated issues. Catching issues in real time gives the team more flexibility to pivot and home in on what matters most.
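As a rough illustration of what such a dashboard might compute, here is a minimal Python sketch that flags reviewers whose QC overturn rate runs unusually high. The threshold and field names are assumptions for illustration, not a built-in platform feature:

```python
def flag_for_coaching(reviewer_stats, overturn_threshold=0.10):
    """Return reviewers whose coding is overturned in QC more often
    than the threshold -- candidates for additional coaching.

    `reviewer_stats` maps reviewer name -> (docs_reviewed, docs_overturned).
    """
    flagged = []
    for name, (reviewed, overturned) in reviewer_stats.items():
        if reviewed and overturned / reviewed > overturn_threshold:
            flagged.append((name, overturned / reviewed))
    return sorted(flagged, key=lambda pair: -pair[1])

# Example: one reviewer well above a 10% overturn rate.
stats = {"Reviewer A": (400, 12), "Reviewer B": (380, 57)}
print(flag_for_coaching(stats))   # [('Reviewer B', 0.15)]
```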
"I am also a huge advocate of having an active learning project that runs in the background as a QC step whenever possible. This is especially helpful when there is a subject matter expert who can review the seed documents, effectively teaching the computer prior to the review team's review," said Robin.
Custom indexing in Relativity also allows each workspace to be tailored to the needs of the case, all the way down to what will be searched and which data will be ignored. By tailoring the search index to include a word or character, you can eliminate false hits on terms, thus lowering the total number of documents to be reviewed.
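To see why a single character in the index alphabet matters, here is a toy Python sketch: if a hyphen is treated as a word break, a search for "act" false-hits "re-act"; treating the hyphen as part of the word removes that hit. This illustrates the concept only; real index settings are configured in the platform itself:

```python
import re

def tokens(text, hyphen_is_letter):
    # Toy tokenizer: optionally treat '-' as part of a word,
    # mimicking an index alphabet that includes the hyphen.
    pattern = r"[a-z\-]+" if hyphen_is_letter else r"[a-z]+"
    return set(re.findall(pattern, text.lower()))

doc = "The parties agreed to re-act the signing ceremony."
term = "act"

print(term in tokens(doc, hyphen_is_letter=False))  # True  -> false hit
print(term in tokens(doc, hyphen_is_letter=True))   # False -> hit removed
```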
What Do the Numbers Say?
By leveraging these techniques – streamlining the coding panel, utilizing dashboards, and engaging active learning – you can capture potential issues and inconsistencies in real time before they spiral into a significant problem. In this example, although the number of documents reviewed did not change, the rework rate dropped by almost 10 percentage points, lowering the total cost of review by almost $10K.
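Again keeping the arithmetic visible, here is a minimal sketch of this example, assuming the rework rate falls from 15% to roughly 5.5% of the collection while throughput stays at 40 docs/hour (one set of numbers consistent with the figures above):

```python
docs = 100_000
throughput = 40     # docs/hour (unchanged in this example)
rate = 42           # $/hour

before = docs * 1.150 / throughput * rate   # $120,750 at a 15% rework rate
after  = docs * 1.055 / throughput * rate   # $110,775 if rework falls ~9.5 pts

print(f"Savings: ${before - after:,.0f}")   # $9,975 -- "almost $10K"
```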

How You Can Apply This
By making strategic, incremental changes to your batching strategies and coding consistency, you can see how small adjustments affect the quality and cost of your review. While it is important to plan ahead and build an effective review team, these tech strategies can further amplify your review by boosting reviewer efficiency, leading to fewer errors and larger cost savings.
Stay tuned for next month as we continue this exercise and put all of these techniques together to see how we can reduce our total review costs!
Be Sure to Follow Us and Subscribe for the Latest Acorn Insights!
About Acorn
Acorn is a legal data consulting firm that specializes in AI and Advanced Analytics for litigation applications, while providing rigorous customer service to the eDiscovery industry. Acorn primarily works with large regional, midsize national, and boutique litigation firms. Acorn provides high-touch, customized litigation support services with a heavy emphasis on seamless communication. For more information, please visit www.acornls.com.