Sampling as a means of quality control became widely used during World War II, when the U.S. military used statistical sampling to test bullets before shipping. Obviously, it would be impracticable to test every bullet, so samples were drawn and tested to ensure quality.
While quality control in legal projects is generally not as dangerous as testing bullets (although legal matters are often war), the benefits of sampling still apply: it may be impracticable to quality-check every document or file before using or producing it.
We use sampling as a method of quality control on pretty much every legal project we handle, be it a managed document review connected to litigation, a cybersecurity incident response data review, or a redaction check on documents subject to FOIA requests.
In these situations, we use random sampling to confirm documents are being coded correctly, to verify that our processes and techniques are working, and to decide when a review is substantially complete.
Statistical sampling methods are also often a court-sanctioned solution for common e-discovery ailments, including disputes over search terms, unduly burdensome document requests, and disagreements over whether a review has captured substantially all responsive documents.
Before diving into specific uses of sampling in legal projects, a little background is helpful. Using statistical sampling methodology is a way to examine data when the entire “population” of data cannot be reviewed individually.
For instance, political polls predicting election outcomes are based on a random sample of voters. Because it is difficult to poll every eligible voter, a sample of voters selected according to statistical principles is polled, and the result is extrapolated to the entire voting population.
Random sampling is a handy tool to use in large legal document reviews and other legal projects involving large datasets because it might be difficult (or even impossible) to look at every document or file. In fact, it may not even be worth looking at certain electronically stored information (ESI) because it is unlikely to contain the information you are looking for.
Examining a random sample of data or documents collected for a legal matter will help you determine whether it is worth looking at more of the data or whether your process or techniques should be changed.
For instance, at Percipient, we use random sampling for quality control right out of the gate. Let’s say we are handling a document review in a piece of litigation and counsel has asked us to tag documents for both relevance and a few legal issues. After a day or two of review, we will pull a random sample of documents for the lead attorney to review to make sure we are coding the documents correctly and not missing any issues.
The results of the sample may prompt us to have additional training, revise the review protocol, or simply confirm we are on the right track.
Our use of sampling does not end after that initial QC, though. Throughout a project, we continue to use sampling for quality control to make sure files are being coded properly.
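For the technically curious, drawing that kind of sample takes only a few lines of code. Here is a minimal Python sketch; the document IDs, sample size, and function name are hypothetical, not part of any particular review tool:

```python
import random

def pull_qc_sample(doc_ids, sample_size, seed=42):
    """Draw a reproducible simple random sample of documents for QC review.

    Seeding the generator means the same sample can be re-created later
    if the selection process is ever questioned.
    """
    rng = random.Random(seed)
    return rng.sample(doc_ids, sample_size)

# Hypothetical example: sample 50 of the first 2,000 reviewed documents.
reviewed = [f"DOC-{i:06d}" for i in range(1, 2001)]
qc_sample = pull_qc_sample(reviewed, sample_size=50)
print(qc_sample[:5])
```

Because only the sampled items ever need human eyes, the same approach scales from a few thousand documents to millions.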
As noted above, courts often prompt parties to use sampling when discovery disputes arise. For instance, a decision in an antitrust case against battery manufacturers, In re Lithium Ion Batteries Antitrust Litigation, No. 13-MD-02420 (N.D. Cal. Feb. 24, 2015), explains how random sampling may be used to resolve disputes over keywords used to collect ESI.
After being directed by the court to devise a search term protocol, the parties in that case agreed to the following:
1. A party responding to document requests would develop an initial list of search terms;
2. The requesting party could then suggest modifications or additions to the list;
3. If the responding party agreed to the modified list, it would apply the terms and review the corresponding documents for relevance and privilege;
4. If the responding party objected to any modified or additional search terms as overbroad, it would compile “qualitative metrics” to facilitate further meet-and-confer conversations about the disputed terms. The metrics would include the number of documents returned by each disputed search term and the nature of the irrelevant documents being returned.
Despite agreeing to the points above, the parties could not agree on how to resolve disputes persisting after disclosure of the qualitative metrics. The plaintiffs suggested that the producing party produce a random sample of unprivileged documents returned by the disputed search terms, arguing that reviewing a random sample would provide insight into why a seemingly relevant search term returned a disproportionate number of irrelevant documents.
The defendants objected to random sampling, pointing out that it would require them to produce documents (specifically, non-responsive and irrelevant ones) they were not otherwise obligated to produce under Fed. R. Civ. P. 26.
Over the defendants’ objections, the court authorized random sampling. The court acknowledged that keyword searches in electronic discovery are often overinclusive, but believed that using a random sample to examine search results would ultimately prevent irrelevant documents from being reviewed and produced in the litigation. The court drew guidance from Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), a seminal case approving random sampling in e-discovery protocols.
Although noting that Da Silva Moore dealt with predictive coding (the use of software algorithms to help identify documents relevant to a legal matter), the court believed its principles were equally applicable to the search term protocol being considered by the parties. The court observed that if a random sample indicated a particular search term was returning an inordinate number of irrelevant documents, it was a bad search in need of modification.
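To make the court’s reasoning concrete, here is a rough Python sketch of how the share of irrelevant documents returned by each disputed term might be estimated from a coded random sample. The terms and coding values are hypothetical, not taken from the case:

```python
from collections import Counter

def term_irrelevance_rates(coded_sample):
    """For each disputed search term, estimate the share of its sampled
    hits that reviewers coded as irrelevant.

    coded_sample: list of (term, is_relevant) pairs, one per sampled hit.
    """
    hits = Counter(term for term, _ in coded_sample)
    irrelevant = Counter(term for term, relevant in coded_sample if not relevant)
    return {term: irrelevant[term] / hits[term] for term in hits}

# Hypothetical coded sample for two disputed terms.
sample = [("battery*", False), ("battery*", False), ("battery*", True),
          ("cell", True), ("cell", False)]
print(term_irrelevance_rates(sample))
# A term with a very high irrelevance rate is a candidate for modification.
```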
Another court acknowledged the inherent limitations of using search terms to identify information. In City of Rockford v. Mallinckrodt ARD, Inc., Case No. 17 CV 50107 (N.D. Ill. Aug. 7, 2018), the parties agreed to work out search terms, and if there was a dispute over a search turning up an inordinate number of results, statistical sampling would be used to determine how many actually relevant documents the search was returning.
Also interesting is that the court permitted statistical sampling of the “null set,” the files that did not hit on any search terms and therefore were not reviewed. To be sure relevant information was not being overlooked, the court agreed that a party producing documents should also take a statistical sample of the null set to confirm it did not contain a disproportionate number of relevant documents.
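A null set check can be sketched the same way. The snippet below estimates what is sometimes called the “elusion” rate, the fraction of relevant documents hiding in the unreviewed files; the sample size and the coding function are made up for illustration:

```python
import random

def estimate_elusion(null_set_ids, sample_size, is_relevant, seed=7):
    """Sample the null set and estimate the fraction of relevant
    documents the search terms missed."""
    rng = random.Random(seed)
    sample = rng.sample(null_set_ids, sample_size)
    relevant = sum(1 for doc_id in sample if is_relevant(doc_id))
    return relevant / sample_size

# In practice a reviewer codes each sampled document; here we fake it.
def fake_reviewer(doc_id):
    return random.random() < 0.02  # pretend ~2% of the null set is relevant

null_set = [f"DOC-{i:06d}" for i in range(10_000)]
print(f"Estimated elusion: {estimate_elusion(null_set, 400, fake_reviewer):.1%}")
```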
ESI produced in modern litigation is not limited to e-mail and common electronic documents. Often, electronic evidence relevant to a legal matter cannot be easily reviewed because it is raw data retrieved from databases, or a file type compatible only with specialized or industry-specific software.
Medical records are a good example. Although they are a common type of evidence exchanged during discovery, the review and production of electronic medical records (EMR) are not always easy or cheap.
For instance, in Duffy v. Lawrence Memorial Hospital, Case No. 2:14-cv-2256 (D. Kan. March 31, 2017), a hospital employee filed a whistleblower action against the hospital for allegedly submitting false reimbursement claims to the federal government.
To prove her case, the employee asked the hospital to produce all medical records for all adult emergency room patients over a seven-year period. To respond, the hospital determined it would have to collect, review, and produce over 15,000 medical records, a task that would take nearly 9,000 employee hours and cost over $200,000.
Because of the time and expense involved, the hospital proposed reviewing and producing a random sample of the medical records. The hospital proposed using the RAT-STATS statistical tool developed by the Department of Health and Human Services Office of Inspector General. RAT-STATS is software that helps users select random samples of medical claims files to identify improper payments.
To prevent undue burden and cost, the court agreed that reviewing a random sample of the records was the way to go and permitted the hospital to review a random sample of 257 records. The court further concluded that using RAT-STATS was proper because it permitted the parties to make a “fair guess” about the entire universe of medical records at issue.
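The opinion does not spell out the statistics, and RAT-STATS has its own settings, but the arithmetic behind a sample size like 257 is the standard formula for estimating a proportion, with a finite population correction. A Python sketch with assumed confidence and margin-of-error inputs:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within the given
    margin of error, using a finite population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative (largest-sample) assumption about the proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Against the roughly 15,000-record population in Duffy:
print(sample_size(15_000, margin=0.06))  # 263 with these assumed inputs;
# the court's 257 reflects whatever parameters RAT-STATS actually used.
```

Note how little the population size matters: whether the universe is 15,000 records or 15 million, the required sample stays in the low hundreds, which is what makes sampling such an effective answer to burden objections.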
Sampling is also a good method to determine whether you should stop reviewing information. Using sampling to figure out if a document review is substantially complete is called validation sampling.
A discovery order entered in In re Broiler Chicken Antitrust Litigation, No. 16 C 8637 (N.D. Ill. January 3, 2018), offers a good example of validation sampling. The Special Master in that case, Maura Grossman, entered an order stating that when a party reviewing documents reasonably believed it had produced or identified substantially all responsive documents, it was required to conduct validation sampling.
In a nutshell, the validation protocol required the producing party to gather a random sample of 3,000 documents. The sample contained documents marked responsive, documents marked non-responsive, and, if a technology-assisted review was used, unreviewed documents.
A person familiar with the legal and factual issues in the case would then review the sample to determine the recall rate and provide the results of that review to the court and the requesting party. (The “recall rate” is the percentage of responsive documents in a collection that a search or review process actually finds.) If the parties agreed that the recall rate and the number of responsive documents identified were such that the review could be ended, then it would be.
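As an illustration only (real validation protocols, including the one in Broiler Chicken, stratify the sample and weight the math accordingly), a bare-bones recall estimate from a coded sample might look like this:

```python
def estimate_recall(validation_sample):
    """Estimate recall from a coded validation sample.

    validation_sample: list of (marked_responsive, truly_relevant) pairs,
    where truly_relevant comes from the subject-matter expert's review.
    """
    relevant_docs = [marked for marked, relevant in validation_sample if relevant]
    if not relevant_docs:
        return None  # no relevant documents turned up in the sample
    return sum(relevant_docs) / len(relevant_docs)  # True counts as 1

# Hypothetical: the sample holds 3 relevant docs, 2 of which the review caught.
sample = [(True, True), (False, True), (True, True), (False, False)]
print(f"Estimated recall: {estimate_recall(sample):.0%}")  # 67%
```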
Sampling in legal projects is an invaluable tool to increase accuracy, improve legal processes and implement quality control measures.
Want to know how we use sampling to QC our work? Let us know!