I recently had the opportunity to view some materials produced by a competitor for a client. In those materials, the competitor claimed that a certain methodology they offered would predict the trial verdict with 90% accuracy if the case went to trial (I am paraphrasing, but that is essentially what was claimed). As someone educated in social science research, statistics, and related fields, even with “only” a Master’s degree, I saw red flags immediately. I was shocked and horrified that this claim appeared in the competitor’s proposal.

A 90% accuracy rate is not realistic given the limitations of jury research: small sample size, the online nature of the research the competitor proposed, and, for the specific method being proposed, the extreme limits on what was being presented to the participants. Each of these variables is concerning. Even with a large sample size, predicting an outcome from a limited presentation (one in which many facts and issues are never addressed) is difficult. Doing so in an impersonal manner – online – further decreases confidence in the results and raises the sampling bias inherent in online research: only those with computers, and nothing better to do, participate. Some of these variables can be addressed, but to claim 90% predictability is frightening in the misimpression it creates.

Explaining this to a prospective client is difficult, because the client may wonder why Magnus won’t make such a claim. It is because we know it would be unethical to do so, per the standards of the American Psychological Association and the American Society of Trial Consultants. Large scale research (meaning 400 to 1,000 participants) comes with confidence levels and margins of error, as most people see reported in political or news polls, but it is improper to promise that small group research will predict the case outcome with 90% accuracy. It is improper because it is impossible.
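To make the sampling arithmetic concrete, here is a short, illustrative Python sketch (not part of the original post) of the standard normal-approximation margin-of-error formula for a sample proportion. The function name and the sample sizes shown are my own choices for illustration; the point is simply that small samples carry wide margins of error, which is why pollsters survey hundreds or thousands of people.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion at roughly 95% confidence.

    Uses the normal approximation: z * sqrt(p * (1 - p) / n).
    p = 0.5 is the worst case, giving the widest margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Compare a small mock-jury-sized sample with poll-sized samples.
for n in (30, 125, 400, 1000):
    print(f"n = {n:4d}: margin of error about +/- {margin_of_error(n) * 100:.1f} points")
```

A sample of a few dozen mock jurors leaves a margin of error in the double digits, while the 400-to-1,000-person polls mentioned above shrink it to a few points — which is exactly why no honest consultant promises 90% predictive accuracy from a small group.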
The bottom line: if you are buying a service, look for red flags and run when you see them. If in doubt, ask, but do not accept claims that sound too good to be true. They probably are.
This is a “G rated” blog, so I will limit myself to saying there are many, many trial consultants who are full of IT (the astute reader will know what IT means in this context). Conducting small group research does not predict any outcome with 90% accuracy. If it did, why would political polls assess the opinions of thousands of people? Come on, think about it!

Uninformed attorneys and others with no education, training, or expertise in scientific research methodology can be forgiven for not understanding that there is no way, I repeat, no way, for a sample of fewer than 125 research participants to yield a 90% accuracy rate. But trial/litigation consultants who tout themselves as “experts” and sell their services to attorneys and their clients, yet do not know the fundamental properties of good research, are charlatans, to put it as politely as possible.

In addition, when a scientist says something is accurate at a 90% level, he/she is referring to the accuracy of the specific research result, not the accuracy of applying that result elsewhere. Applied to the example that has infuriated David and me: if our charlatan of a competitor had actually proposed to conduct a mock trial with 125 randomly selected participants and the other facets of scientific rigor, a claim of 90% accuracy would mean there is a 90% chance the results obtained from the research process are correct. It would NEVER mean there is a 90% chance that identical results would be obtained in both the mock trial and the actual trial.

The main point of this post is that the world of trial consultants is a buyer beware environment, and it is up to the attorney to know whether his/her consultant can back up boastful claims or whether the boasting is mere puffery designed to take money from an uninformed consumer.