Recently, I was invited to speak at ILTACON2019 on the topic of TAR with four other industry insiders. I was surprised at the number of folks in the audience who had first-hand experience with TAR. About two-thirds of the audience had used predictive coding on at least one case, although a smaller number (only around 15%) had used it on multiple cases.
In an industry rooted in pragmatism and practice, the panel talked extensively about the industry shift from TAR 1.0 to TAR 2.0 (as covered in a law.com article). The panelists were super knowledgeable, and we were lucky to have a very engaged audience. A number of thought-provoking questions were asked, which reflects an increasingly sophisticated audience.
Here are the two big ideas that caught my attention:
Cost-Shifting/Sharing: Facilitator of TAR Adoption and Discovery Cooperation

The 2015 Federal Rules amendments added subsection (c)(1)(B) to Rule 26 to expressly address cost-shifting in discovery. Courts can impose costs on the receiving party for “non-core discovery” activities or for a failure to make a “threshold showing of merit.” (as covered in a Legal Backgrounder article hosted on Pepper Hamilton’s website)
The central challenges in industry adoption of TAR in a “not every document gets an eyeball” application are questions of disclosure requirements, cooperation with the opposing side, and judicial discomfort in adjudicating disputes over TAR methodology. Discovery cost-shifting means both opposing parties have skin in the game: to the loser go the discovery bills. If eDiscovery cost-shifting becomes more prevalent in the industry, it will create more of an economic incentive for both sides to cooperate in leveraging technology for review efficiency. It seems like a win-win-win.
Cooperation between plaintiff and defendant is an area Chad Roberts of eDiscovery CoCounsel has spoken about on a national stage at RelativityFest, and I want his take on this matter, especially as it relates to asymmetrical litigation.
The Next Big Frontier: TAR AI Models and Document-Set Mega-Trends

For a few minutes, we panelists were invited to put on our Futurist Hats and speculate about where the technology may evolve. Two actionable ideas surfaced.
First, creating AI models from previous matters to leverage that expertise on future matters. This is a natural extension of the idea behind predictive coding. If you can use a baseline of review work-product to predict the coding outcome of the remaining documents in a case, then you should be able to take a baseline of work-product from a previous, similar case — one with similar issues or similar clients — and use it to predict the coding outcome of documents in the new case. Privilege review seems a very natural starting point for this advancement in eDiscovery.
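To make the cross-matter idea concrete, here is a minimal sketch of reusing review work-product as training data, assuming coded documents from a prior matter are available as simple (text, label) pairs. The tiny naive-Bayes classifier below is a stand-in for a real TAR engine, and all the document text and labels are invented for illustration; commercial products use far richer features and models.

```python
import math
from collections import Counter, defaultdict

def train(coded_docs):
    """Build a naive-Bayes model from (text, label) pairs --
    e.g., review work-product from a previous, similar matter."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> document count
    for text, label in coded_docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(model, text):
    """Score a document from the NEW matter using the prior-matter model."""
    word_counts, label_counts = model
    vocab = {w for counter in word_counts.values() for w in counter}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical coded documents from a prior, similar matter.
prior_matter = [
    ("privileged legal advice from outside counsel", "responsive"),
    ("attorney client memo regarding legal advice", "responsive"),
    ("lunch order for the friday team meeting", "non-responsive"),
    ("weekly sales report with updated numbers", "non-responsive"),
]
model = train(prior_matter)
print(predict(model, "memo with legal advice from counsel"))  # responsive
```

The key point the sketch captures is that nothing in `predict` cares which matter the training documents came from — the transferability question is whether the prior matter's language patterns actually generalize to the new one.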
Second, the trends across the entire document set will become evidence as much as the documents themselves. For example, if a salesperson is conspiring with a competitor to transition their client base and circumvent their non-solicitation agreement, they might be smart enough not to leave a paper trail. But you can probably see other trends that point you to what’s happening. Maybe their communications with a key client from their company email drop relative to historical values. Maybe their after-hours communications to non-company email addresses are higher relative to historical values. Maybe they take longer to reply to their supervisor’s messages than normal. Looking at any document individually won’t tell you that story, but seeing the whole trend will.
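The kind of baseline-deviation analysis described above can be sketched in a few lines. This is a toy illustration only; the metric names, the sample values, and the two-standard-deviation threshold are my own assumptions, not anything a particular product actually uses.

```python
import statistics

def flag_trend_anomalies(history, recent, z_threshold=2.0):
    """Flag communication metrics whose latest value deviates sharply
    from a custodian's historical baseline (simple z-score test).

    history: metric name -> list of past monthly values
    recent:  metric name -> most recent monthly value
    """
    flags = {}
    for metric, past in history.items():
        mean = statistics.mean(past)
        stdev = statistics.stdev(past) or 1.0  # guard against flat baselines
        z = (recent[metric] - mean) / stdev
        if abs(z) >= z_threshold:
            flags[metric] = round(z, 2)
    return flags

# Illustrative monthly metrics for one custodian.
history = {
    "emails_to_key_client": [40, 42, 38, 41, 39],    # drops suspiciously
    "after_hours_external_emails": [2, 3, 2, 3, 2],  # spikes suspiciously
    "reply_latency_hours": [4, 5, 4, 5, 4],          # stays normal
}
recent = {
    "emails_to_key_client": 12,
    "after_hours_external_emails": 15,
    "reply_latency_hours": 5,
}
print(flag_trend_anomalies(history, recent))
```

Here the sharp drop in key-client email and the spike in after-hours external email get flagged, while the unchanged reply latency does not — the story emerges from the aggregate pattern, not from any single document.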
In my view, Jay Leib and NexLP are at the forefront of innovation in the industry on both of these topics. They have been developing technologies like these TAR 5.0 Imaginings for a couple of years. With their product, we already have the capability to use AI models across cases, which is particularly valuable for employment matters. And we are already aggregating data trends to find the hidden stories. A particularly compelling example is when StoryEngine was used for document-universe trends in a price-fixing matter.
Conclusion

I really enjoyed the opportunity to gather and discuss predictive coding in the industry. We are seeing adoption of TAR increase and people hone their skills. I am excited to be a part of a dynamic and engaged community. And I look forward to having the opportunity to discuss how predictive coding can be used by all skill levels in all different types of matters. Erin Tomine at Conduent summarized the panel very nicely: “The human aspect is still so important” in determining the appropriate TAR approach, in clarifying the objectives of TAR, and in managing communications across the team. “Even as we embrace technology and AI, it can’t do everything for us.”
Be Sure to Follow Me for the Latest Content and Subscribe For the Latest Acorn Insights!