When Crowdsourcing Works (And Doesn’t Work) In The Law


LawXLab: “Crowdsourcing has revolutionized several industries.  Wikipedia has replaced traditional encyclopedias.  Stack Overflow houses the collective knowledge of software engineering.  And crowdsourced genealogy databases now trace family histories stretching back thousands of years.  All due to crowdsourcing.

These successes have led to several attempts to crowdsource the law.  The potential is enticing.  The law is notoriously difficult to access, especially for non-lawyers.  Amassing the collective knowledge of the legal community could make legal research easier for lawyers and open the law to laypeople, reshaping the legal industry and displacing traditional powers like Westlaw and Lexis. As one legal crowdsourcing site touted, “No lawyer is smarter than all lawyers.”

But crowdsourcing the law has proved difficult.  The list of failed legal crowdsourcing sites is long.  As one legal commentator noted, “The legal Web is haunted by the spirits of the many crowdsourced sites that have come and gone.” (Ambrogi http://goo.gl/ZPuXh8).  …

There are several aspects of the law that make crowdsourcing difficult.  First, the base of potential contributors is small.  According to the ABA, there were only 1.3 million licensed lawyers in 2015. (http://goo.gl/kw6Kab).  Second, there is no ethos of sharing information as there is in other fields.  To the contrary, there is a tradition of keeping information secret, enshrined in rules regarding privilege, work product protection, and trade secrets.  Legal professionals disclose information with caution.

Not every attempt to crowdsource the law, however, has been a failure.  And the successes chart a promising path forward.  While lawyers will not go out of their way to crowdsource the law, attempts to weave crowdsourcing into activities that legal professionals already perform have achieved promising results.

For example, Casetext’s WeCite initiative has proved immensely successful.  When a judge cites another case in a published opinion, WeCite asks the reader to characterize that reference as (1) positive, (2) referencing, (3) distinguishing, or (4) negative.  In nine months, Casetext’s community had crowdsourced “over 300,000 citator entries.” (CALI https://goo.gl/yT9mc4).  Casetext used these entries to fuel its flagship product, CARA.  CARA uses those crowdsourced citation entries to suggest other cases for litigators to cite.

The key to WeCite’s success is that it wove crowdsourcing into an activity that lawyers and law students were already doing–reading cases.  All the reader needed to do was click a button to signify how the case was cited–a minor detour.

Another example is CO/COUNSEL, a site that crowdsources visual maps of the law.  The majority of CO/COUNSEL’s crowdsourced contributions come from law school classes.  Professors use the site as a teaching tool: classes map the law over the course of a semester as a learning exercise.  In a few months, CO/COUNSEL received over 10,000 contributions.  As with WeCite, using CO/COUNSEL was not a big detour for professors.  It fit into an activity they were already performing–teaching….(More)”.