“Based on my encounters with Relativity Active Learning, I started to think active learning was a very blunt and obtuse tool. Our database contained a lot of non-responsive documents that used the same words as the responsive communications. Half the documents in the database were irrelevant, but we had no easy way to weed them out. That has actually been one of the biggest difficulties we’ve had getting a handle on this database.”

Over the years I have worked on more cases than I can count, and I still appreciate the moments when I’m able to translate my experience into solving someone’s problem. Personally, I am an analytics guy through and through, from how I manage projects to how I operate in my day-to-day life. Because of this, I try to educate people on how they can apply analytics and advanced technologies to their practices. A case I worked on recently comes to mind: one where I wasn’t able to solve every issue, but was able to offer improvements, including some notable gains in select areas that could be applied to a wide range of matters.
I was working with a client that was very knowledgeable about active learning, on a dataset that is every TAR practitioner’s worst nightmare. They came to me because they needed a Hail Mary for the case. The data had been produced inconsistently: some documents had metadata, some didn’t, some were in PDF format; you name it, they had it. The client wanted to evaluate whether a combination of my expertise and StoryEngine’s advanced technology might work better than their current Relativity Active Learning (Prioritized Review) methodology.
Here are the three takeaways from this case:
Overcoming Roadblocks

First, I utilized NexLP’s StoryEngine, which was far superior at identifying hot documents based on their author. Because the wording and structure of the documents were almost exactly the same, the client was skeptical the software could differentiate between them. However, StoryEngine picked up on hot documents based on who the author was, and we could tune the model to weight this feature even more heavily. This allowed us to overcome a seemingly unavoidable roadblock in our workflow.
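To illustrate the idea (this is a toy sketch, not NexLP’s actual model; the author list, terms, and weight are hypothetical), boosting an author signal lets a ranker separate documents whose text is nearly identical:

```python
# Toy sketch: blending a text score with a weighted "author" feature.
# All names and values here are hypothetical, for illustration only.

AUTHOR_WEIGHT = 3.0  # tunable boost for the author signal

# Authors the model has learned are associated with hot documents
hot_authors = {"j.doe@example.com"}

def text_score(doc):
    """Stand-in for an underlying text classifier's score."""
    hot_terms = {"settlement", "confidential"}
    words = set(doc["text"].lower().split())
    return len(words & hot_terms) / len(hot_terms)

def priority_score(doc):
    """Combine the text score with the weighted author signal."""
    author_signal = 1.0 if doc["author"] in hot_authors else 0.0
    return text_score(doc) + AUTHOR_WEIGHT * author_signal

docs = [
    {"author": "j.doe@example.com", "text": "re: settlement terms"},
    {"author": "a.smith@example.com", "text": "re: settlement terms"},
]
# With near-identical text, the author signal breaks the tie.
ranked = sorted(docs, key=priority_score, reverse=True)
```

Here the two documents score identically on text alone; only the author feature pushes the first one to the top of the review queue.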
Fully Utilizing the Tools

Second, propagation and near duplicates were a hurdle to reviewing the document set efficiently and effectively. While you can turn propagation on in Relativity Active Learning, StoryEngine applies it automatically, which streamlines the process. This prevents you from simply overlooking the feature and wondering why you aren’t getting your expected results, or having to backtrack to implement it, wasting time and money.
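The mechanics of propagation can be sketched in a few lines (a toy version using word-shingle Jaccard similarity; the threshold and helper names are illustrative, not how Relativity or StoryEngine implements it):

```python
# Toy sketch: copy a reviewer's coding decision to unreviewed near
# duplicates. Shingle size and similarity threshold are hypothetical.

def shingles(text, k=3):
    """Break text into overlapping k-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def propagate(reviewed, unreviewed, threshold=0.8):
    """Apply each coded document's label to its unreviewed near duplicates."""
    decisions = {}
    for doc_id, text in unreviewed.items():
        s = shingles(text)
        for coded_text, label in reviewed:
            if jaccard(s, shingles(coded_text)) >= threshold:
                decisions[doc_id] = label
                break
    return decisions

reviewed = [("please find the attached settlement draft for review", "responsive")]
unreviewed = {
    "d2": "please find the attached settlement draft for review",
    "d3": "lunch order for friday team meeting",
}
decisions = propagate(reviewed, unreviewed)
```

The near duplicate (d2) inherits the reviewer’s coding automatically, while the unrelated document (d3) is left for the model to rank on its own.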
Identifying the Smoking Guns

Third, the model was able to stabilize more quickly thanks to the combination of a coverage queue and StoryEngine’s stronger prediction models. Combined with StoryEngine’s emotional intelligence and various other capabilities, it could pinpoint specific documents of interest more efficiently and surface hidden value that could easily be overlooked or buried deep.
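A coverage queue’s core idea is worth spelling out: rather than serving the highest-ranked documents, it serves the ones the model is least certain about, which tends to stabilize the model faster. A minimal sketch of that selection step (scores and batch size are made up; this is the general concept, not either vendor’s implementation):

```python
# Toy sketch of a coverage queue: on a 0-100 relevance-rank scale,
# serve reviewers the documents closest to the uncertain midpoint (50)
# instead of the top-ranked ones. Scores below are hypothetical.

def coverage_queue(scores, batch_size=3):
    """Pick the documents whose rank is closest to the 50 midpoint."""
    return sorted(scores, key=lambda d: abs(scores[d] - 50))[:batch_size]

scores = {"d1": 97, "d2": 51, "d3": 12, "d4": 49, "d5": 73, "d6": 55}
queue = coverage_queue(scores)
```

The confidently ranked documents (d1 at 97, d3 at 12) stay out of the queue; reviewer effort goes where each coding decision teaches the model the most.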
Concluding Thoughts

The greatest takeaway was seeing the client do a full 180 and fully embrace the technology. Although this solution isn’t a be-all and end-all, the model is still very applicable in certain scenarios. This case showed the client and me that active learning can be customized and modified to fit all types of scenarios. With the right guidance and expertise, we can use analytics on all kinds of data types. Pulling additional value out of the remaining data comes down to nuanced analysis and expertise, as it did here. I’m proud that even under these bad circumstances, I was able to turn a TAR detractor into a TAR promoter.
Be Sure to Follow Me for the Latest Content and Subscribe For the Latest Acorn Insights!