A New Way to Look at Law, With Data Viz and Machine Learning


In Wired:

Ravel displays search results as an interactive visualization. Image: Ravel
“On TV, being a lawyer is all about dazzling jurors with verbal pyrotechnics. But for many lawyers–especially young ones–the job is about research. Long, dry, tedious research.
It’s that less glamorous side of the profession that Daniel Lewis and Nik Reed are trying to upend with Ravel. Using data visualization, language analysis, and machine learning, the Stanford Law grads are aiming to reinvent legal research–and perhaps give young lawyers a deeper understanding of their field in the process.
Lawyers have long relied on subscription services like LexisNexis and Westlaw to do their jobs. These services offer indispensable access to vast databases of case documents. Lewis remembers seeing the software on the computers at his dad’s law firm when he used to hang out there as a kid. You’d put in a keyword, say, securities fraud, and get back a long, rank-ordered list of results relevant to that topic.
Years later, when Lewis was embarking on his own legal career as a first year at Stanford Law, he was struck by how little had changed. “The tools and technologies were the same,” he says. “It was surprising and disconcerting.” Reed, his classmate there, was also perplexed, especially having spent some time in the finance industry working with its high-powered tools. “There was all this cool stuff that everyone else was using in every other field, and it just wasn’t coming to lawyers,” he says.

Early users have reported that Ravel cut their overall research time by up to two thirds….

Ravel’s most ambitious features, however, are intended to help with the analysis of cases. These tools, reserved for premium subscribers, are designed to automatically surface the key passages in whatever case you happen to be looking at, sussing out instances when they’ve been cited or reinterpreted in cases that followed.
To do this, Ravel effectively has to map the law, an undertaking that involves both human insight and technical firepower. The process, roughly: Lewis and Reed will look at a particular case, pinpoint the case it’s referencing, and then figure out what ties them together. It could be a direct reference, or a glancing one. It might show up as three paragraphs in that later ruling, or just a sentence.
Once those connections have been made, they’re handed off to Ravel’s engineers. The engineers, who make up more than half of the company’s ten-person team, are tasked with building models that can identify those same sorts of linkages in other cases, using natural language processing. In effect, Ravel’s trying to uncover the subtle linguistic patterns undergirding decades of legal rulings.
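The article doesn’t say how Ravel’s models work under the hood, but the basic move of scoring passages of an earlier opinion against the paragraphs of a later one can be sketched with standard text-similarity tools. The snippet below is a minimal illustration of that idea using TF-IDF and cosine similarity; the passages, the 0.2 threshold, and the approach itself are assumptions for illustration, not Ravel’s actual pipeline.

```python
# A minimal, illustrative sketch (not Ravel's actual pipeline): given passages
# from an earlier opinion and paragraphs from a later opinion that cites it,
# score each pair with TF-IDF cosine similarity and flag the pairs most likely
# to be a citation or reinterpretation of the same language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

earlier_passages = [
    "The defendant owes a duty of care to foreseeable plaintiffs.",
    "Damages are limited to losses proximately caused by the breach.",
]
later_paragraphs = [
    "Following the earlier ruling, a duty of care extends only to plaintiffs "
    "whose injury was reasonably foreseeable.",
    "The court declined to reach the question of attorney's fees.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(earlier_passages + later_paragraphs)
earlier_vecs = matrix[: len(earlier_passages)]
later_vecs = matrix[len(earlier_passages):]

# similarity[i, j] = how closely later paragraph j tracks earlier passage i
similarity = cosine_similarity(earlier_vecs, later_vecs)

THRESHOLD = 0.2  # illustrative cutoff for flagging a likely linkage
for i, passage in enumerate(earlier_passages):
    for j, paragraph in enumerate(later_paragraphs):
        if similarity[i, j] >= THRESHOLD:
            print(f"Passage {i} appears to be discussed in later paragraph {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```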
That all goes well beyond visual search, and the idea of future generations of lawyers learning from an algorithmic analysis of the law seems quietly dangerous in its own way (though a sterling conceit for a near-future short story!).
Still, compared to the comparatively primitive tools that still dominate the field today, Lewis and Reed see Ravel as a promising resource for young lawyers and law students. “It’s about helping them research more confidently,” Lewis says. “It’s about making sure they understand the story in the right way.” And, of course, about making all that research a little less tedious, too.”

Big Data, My Data


Jane Sarasohn-Kahn at iHealthBeat: “The routine operation of modern health care systems produces an abundance of electronically stored data on an ongoing basis,” Sebastian Schneeweiss writes in a recent New England Journal of Medicine Perspective.
Is this abundance of data a treasure trove for improving patient care and growing knowledge about effective treatments? Is that data trove a Pandora’s black box that can be mined by obscure third parties to benefit for-profit companies without rewarding those whose data are said to be the new currency of the economy? That is, patients themselves?
In this emerging world of data analytics in health care, there’s Big Data and there’s My Data (“small data”). Who most benefits from the use of My Data may not actually be the consumer.
Big focus on Big Data. Several reports published in the first half of 2014 talk about the promise and perils of Big Data in health care. The Federal Trade Commission’s study, titled “Data Brokers: A Call for Transparency and Accountability,” analyzed the business practices of nine “data brokers,” companies that buy and sell consumers’ personal information from a broad array of sources. Data brokers sell consumers’ information to buyers looking to use those data for marketing, managing financial risk or identifying people. There are health implications in all of these activities, and the use of such data generally is not covered by HIPAA. The report discusses the example of a data segment called “Smoker in Household,” which a company selling a new air filter for the home could use to target-market to an individual who might seek such a product. On the downside, without the consumers’ knowledge, the information could be used by a financial services company to identify the consumer as a bad health insurance risk.
“Big Data and Privacy: A Technological Perspective,” a report from the President’s Office of Science and Technology Policy, considers the growth of Big Data’s role in helping inform new ways to treat diseases and presents two scenarios of the “near future” of health care. The first, on personalized medicine, recognizes that not all patients are alike or respond identically to treatments. Data collected from a large number of similar patients (such as digital images, genomic information and granular responses to clinical trials) can be mined to develop a treatment with an optimal outcome for the patients. In this case, patients may have provided their data based on the promise of anonymity but would like to be informed if a useful treatment has been found. In the second scenario, detecting symptoms via mobile devices, people wishing to detect early signs of Alzheimer’s Disease in themselves use a mobile device connecting to a personal coach in the Internet cloud that supports and records activities of daily living: say, gait when walking, notes on conversations and physical navigation instructions. For both of these scenarios, the authors ask, “Can the information about individuals’ health be sold, without additional consent, to third parties? What if this is a stated condition of use of the app? Should information go to the individual’s personal physicians with their initial consent but not a subsequent confirmation?”
The World Privacy Forum’s report, titled “The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future,” describes the growing market for developing indices on consumer behavior, identifying over a dozen health-related scores. Health scores include the Affordable Care Act Individual Health Risk Score, the FICO Medication Adherence Score, various frailty scores, personal health scores (from WebMD and OneHealth, whose default sharing setting is based on the user’s sharing setting with the RunKeeper mobile health app), Medicaid Resource Utilization Group Scores, the SF-36 survey on physical and mental health, and complexity scores (such as the Aristotle score for congenital heart surgery). WPF presents a history of consumer scoring beginning with the FICO score for personal creditworthiness and recommends regulatory scrutiny on the new consumer scores for fairness, transparency and accessibility to consumers.
At the same time these three reports went to press, scores of news stories emerged discussing the Big Opportunities Big Data present. The June issue of CFO Magazine published a piece called “Big Data: Where the Money Is.” InformationWeek published “Health Care Dives Into Big Data,” Motley Fool wrote about “Big Data’s Big Future in Health Care” and WIRED called “Cloud Computing, Big Data and Health Care” the “trifecta.”
Well-timed on June 5, the Office of the National Coordinator for Health IT’s Roadmap for Interoperability was detailed in a white paper, titled “Connecting Health and Care for the Nation: A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure.” The document envisions the long view for the U.S. health IT ecosystem enabling people to share and access health information, ensuring quality and safety in care delivery, managing population health, and leveraging Big Data and analytics. Notably, “Building Block #3” in this vision is ensuring privacy and security protections for health information. ONC will “support developers creating health tools for consumers to encourage responsible privacy and security practices and greater transparency about how they use personal health information.” Looking forward, ONC notes the need for “scaling trust across communities.”
Consumer trust: going, going, gone? In the stakeholder community of U.S. consumers, there is declining trust between people and the companies and government agencies with whom people deal. Only 47% of U.S. adults trust companies with whom they regularly do business to keep their personal information secure, according to a June 6 Gallup poll. Furthermore, 37% of people say this trust has decreased in the past year. Who’s most trusted to keep information secure? Banks and credit card companies come in first place, trusted by 39% of people, and health insurance companies come in second, trusted by 26% of people.
Trust is a basic requirement for health engagement. Health researchers need patients to share personal data to drive insights, knowledge and treatments back to the people who need them. PatientsLikeMe, the online social network, launched the Data for Good project to inspire people to share personal health information imploring people to “Donate your data for You. For Others. For Good.” For 10 years, patients have been sharing personal health information on the PatientsLikeMe site, which has developed trusted relationships with more than 250,000 community members…”

In Defense of Transit Apps


Mark Headd at Civic Innovations: “The civic technology community has a love-hate relationship with transit apps.
We love to, and often do, use the example of open transit data and the cottage industry of civic app development it has helped spawn as justification for governments releasing open data. Some of the earliest, most enduring and most successful civic applications have been built on transit data, and there are literally hundreds of different apps available.
The General Transit Feed Specification (GTFS), which has helped to encourage the release of transit data from dozens and dozens of transportation authorities across the country, is used as the model for the development of other open data standards. I once described work being done to develop a data standard for locations dispensing vaccinations as “GTFS for flu shots.”
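Part of GTFS’s appeal to developers is how plain it is: a feed is just a zip archive of CSV tables (stops.txt, routes.txt, trips.txt, stop_times.txt, and so on). As a rough illustration of how little code a basic transit app needs, here is a sketch that lists the scheduled departures at one stop from a static feed; the feed path and stop name are placeholders.

```python
# Minimal sketch: list the scheduled departures at one stop from a GTFS feed.
# GTFS is a zip archive of CSV files; stop_times.txt and stops.txt are part of
# the published spec. The feed path and stop name below are placeholders.
import csv
import io
import zipfile

FEED_PATH = "gtfs_feed.zip"      # placeholder: any agency's static GTFS feed
STOP_NAME = "Main St & 3rd Ave"  # placeholder stop name

def read_table(zf, name):
    """Read one CSV table from the GTFS archive into a list of dicts."""
    with zf.open(name) as f:
        return list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

with zipfile.ZipFile(FEED_PATH) as zf:
    stops = read_table(zf, "stops.txt")
    stop_times = read_table(zf, "stop_times.txt")

stop_ids = {s["stop_id"] for s in stops if s["stop_name"] == STOP_NAME}
departures = sorted(
    st["departure_time"] for st in stop_times if st["stop_id"] in stop_ids
)
print(f"{len(departures)} scheduled departures at {STOP_NAME}")
print(departures[:10])
```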
But some in the civic technology community chafe at the overuse of transit apps as the example cited for the release of open data and engagement with outside civic hackers. Surely there are other examples we can point to that get at deeper, more fundamental problems with civic engagement and the operation of government. Is the best articulation of the benefits of open data and civic hacking a simple bus stop application?
Last week at Transparency Camp in DC, during a session I ran on open data, I was asked what data governments should focus on releasing as open data. I stated my belief that – at a minimum – governments should concentrate on The 3 B’s: Buses (transit data), Bullets (crime data) and Bucks (budget & expenditure data).
To be clear – transit data and the apps it helps generate are critical to the open data and civic technology movements. I think it is vital to explore the role that transit apps have played in the development of the civic technology ecosystem and their impact on open data.

Storytelling with transit data

Transit data supports more than just “next bus” apps. In fact, characterizing all transit apps this way does a disservice to the talented and creative people working to build things with transit data. Transit data supports a wide range of different visualizations that can tell an intimate, granular story about how a transit system works and how its operation impacts a city.
One inspiring example of this kind of app was developed recently by Mike Barry and Brian Card, and looked at the operation of the MBTA in Boston. Their motive was simple:

We attempt to present this information to help people in Boston better understand the trains, how people use the trains, and how the people and trains interact with each other.

We’re able to tell nuanced stories about transit systems because the data being released continues to expand and improve in quality. This happens because developers building apps in cities across the country have provided feedback to transit officials on what they want to see and the quality of what is provided.
Developers building the powerful visualizations we see today are standing on the shoulders of the people who built the “next bus” apps a few years ago. Without those humble apps, we don’t get to tell these powerful stories today.

Holding government accountable

Transit apps are about more than just getting to the train on time.
Support for transit system operations can run into the billions of dollars and affect the lives of millions of people in an urban area. With this much investment, it’s important that transit riders and taxpayers are able to hold officials accountable for the efficient operation of transit systems. To help us do this, we now have a new generation of transit apps that can compare things like the scheduled arrival and departure times of trains with their actual arrival and departure times.
Not only does this give citizens transparency into how well their transit system is being run, it offers a pathway for engagement – by knowing which routes are not performing close to scheduled times, transit riders and others can offer suggestions for changes and improvements.
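The analysis behind that kind of accountability app is not exotic: join scheduled times to observed times, compute the delay, and summarize it per route. The sketch below illustrates the arithmetic with made-up records and an assumed five-minute definition of “on time”; a real app would pull the schedule from a static feed and the actuals from an agency’s realtime feed.

```python
# Illustrative sketch of an on-time-performance check: compare scheduled and
# actual arrival times per route, then report the average delay and the share
# of arrivals within a 5-minute threshold. The records below are made up; a
# real app would join a static schedule to a realtime arrivals feed.
from collections import defaultdict
from datetime import datetime

observations = [
    # (route, scheduled, actual)
    ("Route 10", "2014-06-05 08:00", "2014-06-05 08:02"),
    ("Route 10", "2014-06-05 08:30", "2014-06-05 08:41"),
    ("Route 23", "2014-06-05 08:15", "2014-06-05 08:16"),
]

ON_TIME_MINUTES = 5  # assumption: "on time" means within 5 minutes of schedule
FMT = "%Y-%m-%d %H:%M"

delays = defaultdict(list)
for route, scheduled, actual in observations:
    delta = datetime.strptime(actual, FMT) - datetime.strptime(scheduled, FMT)
    delays[route].append(delta.total_seconds() / 60)

for route, mins in sorted(delays.items()):
    avg = sum(mins) / len(mins)
    on_time = sum(1 for m in mins if abs(m) <= ON_TIME_MINUTES) / len(mins)
    print(f"{route}: avg delay {avg:.1f} min, "
          f"{on_time:.0%} within {ON_TIME_MINUTES} min")
```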

A gateway to more open data

One of the most important things that transit apps can do is provide a pathway for more open data.
In Philadelphia, the city’s formal open data policy and the creation of an open data portal all followed after the efforts of a small group of developers working to obtain transit schedule data from the Southeastern Pennsylvania Transportation Authority (SEPTA). This group eventually built the region’s first transit app.
This small group pushed SEPTA to make their data open, and the Authority eventually embraced open data. This, in turn, raised the profile of open data with other city leaders and directly contributed to the adoption of an open data policy by the City of Philadelphia several years later. Without this simple transit app and the push for more open transit data, I don’t think this would have happened. Certainly not as soon as it did.
And it isn’t just big cities like Philadelphia. In Syracuse, NY – a small city with no tradition of civic hacking and no formal open data program – a group at a local hackathon decided that they wanted to build a platform for government open data.
The first data source they selected to focus on? Transit data. The first app they built? A transit app…”

The Art and Science of Data-driven Journalism


Alex Howard for the Tow Center for Digital Journalism: “Journalists have been using data in their stories for as long as the profession has existed. A revolution in computing in the 20th century created opportunities for data integration into investigations, as journalists began to bring technology into their work. In the 21st century, a revolution in connectivity is leading the media toward new horizons. The Internet, cloud computing, agile development, mobile devices, and open source software have transformed the practice of journalism, leading to the emergence of a new term: data journalism. Although journalists have been using data in their stories for as long as they have been engaged in reporting, data journalism is more than traditional journalism with more data. Decades after early pioneers successfully applied computer-assisted reporting and social science to investigative journalism, journalists are creating news apps and interactive features that help people understand data, explore it, and act upon the insights derived from it. New business models are emerging in which data is a raw material for profit, impact, and insight, co-created with an audience that was formerly reduced to passive consumption. Journalists around the world are grappling with the excitement and the challenge of telling compelling stories by harnessing the vast quantity of data that our increasingly networked lives, devices, businesses, and governments produce every day. While the potential of data journalism is immense, the pitfalls and challenges to its adoption throughout the media are similarly significant, from digital literacy to competition for scarce resources in newsrooms. Global threats to press freedom, digital security, and limited access to data create difficult working conditions for journalists in many countries. A combination of peer-to-peer learning, mentorship, online training, open data initiatives, and new programs at journalism schools rising to the challenge, however, offers reasons to be optimistic about more journalists learning to treat data as a source. (Download the report)”

Reflections on How Designers Design With Data


Alex Bigelow, Steven Drucker, Danyel Fisher, and Miriah Meyer at Microsoft Research: “In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers’ challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.”

Let's amplify California's collective intelligence


Gavin Newsom and Ken Goldberg at the SFGate: “Although the results of last week’s primary election are still being certified, we already know that voter turnout was among the lowest in California’s history. Pundits will rant about the “cynical electorate” and wag a finger at disengaged voters shirking their democratic duties, but we see the low turnout as a symptom of broader forces that affect how people and government interact.
The methods used to find out what citizens think and believe are limited to elections, opinion polls, surveys and focus groups. These methods may produce valuable information, but they are costly, infrequent and often conducted at the convenience of government or special interests.
We believe that new technology has the potential to increase public engagement by tapping the collective intelligence of Californians every day, not just on election day.
While most politicians already use e-mail and social media, these channels are easily dominated by extreme views and tend to regurgitate material from mass media outlets.
We’re exploring an alternative.
The California Report Card is a mobile-friendly web-based platform that streamlines and organizes public input for the benefit of policymakers and elected officials. The report card allows participants to assign letter grades to key issues and to suggest new ideas for consideration; public officials then can use that information to inform their decisions.
In an experimental version of the report card released earlier this year, residents from all 58 counties assigned more than 20,000 grades to the state of California and also suggested issues they feel deserve priority at the state level. As one participant noted: “This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables elected officials to hear directly how Californians feel.”
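The op-ed doesn’t describe how the grades are tallied, but the underlying aggregation is easy to picture: map letter grades to grade points, average them per issue, and convert the averages back to letters. The sketch below is a hypothetical illustration with invented issues and grades, not the CITRIS platform’s actual code.

```python
# Hypothetical sketch of how letter-grade feedback like the California Report
# Card's could be aggregated: map grades to grade points and average per issue.
# Issue names and grades are invented for illustration.
from collections import defaultdict

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

submissions = [
    ("Affordable Care Act rollout", "B"),
    ("Affordable Care Act rollout", "A"),
    ("K-12 schools", "D"),
    ("K-12 schools", "C"),
    ("Disaster preparedness", "B"),
]

totals = defaultdict(list)
for issue, grade in submissions:
    totals[issue].append(GRADE_POINTS[grade])

for issue, points in sorted(totals.items()):
    avg = sum(points) / len(points)
    # Convert the average back to the nearest letter for a report-card feel
    letter = min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - avg))
    print(f"{issue}: {letter} ({avg:.2f} from {len(points)} grades)")
```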
Initial data confirm that Californians approve of our state’s rollout of Obamacare, but are very concerned about the future of our schools and universities.
There was also a surprise. California Report Card suggestions for top state priorities revealed consistently strong interest and support for more attention to disaster preparedness. Issues related to this topic were graded as highly important by a broad cross section of participants across the state. In response, we’re testing new versions of the report card that can focus on topics related to wildfires and earthquakes.
The report card is part of an ongoing collaboration between the CITRIS Data and Democracy Initiative at UC Berkeley and the Office of the Lieutenant Governor to explore how technology can improve public communication and bring the government closer to the people. Our hunch is that engineering concepts can be adapted for public policy to rapidly identify real insights from constituents and resist gaming by special interests.
You don’t have to wait for the next election to have your voice heard by officials in Sacramento. The California Report Card is now accessible from cell phones, desktop and tablet computers. We encourage you to contribute your own ideas to amplify California’s collective intelligence. It’s easy, just click “participate” on this website: CaliforniaReportCard.org”

Crowdsourcing and social search


At TechCrunch: “When we think of the sharing economy, what often comes to mind are sites like Airbnb, Lyft, or Feastly — the platforms that allow us to meet people for a specific reason, whether that’s a place to stay, a ride, or a meal.
But what about sharing something much simpler than that, like answers to our questions about the world around us? Sharing knowledge with strangers can offer us insight into a place we are curious about or trying to navigate, and in a more personal, efficient way than using traditional web searches.
“Sharing an answer or response to a question, that is true sharing. There’s no financial or monetary exchange based on that. It’s the true meaning of [the word],” said Maxime Leroy, co-founder and CEO of a new app called Enquire.
Enquire is a new question-and-answer app, but it is unlike others in the space. You don’t have to log in via Facebook or Twitter, use SMS messaging like on Quest, or upload an image like you do on Jelly. None of these apps have taken off yet, which could be good or bad for Enquire just entering the space.
With Enquire, simply log in with a username and password and it will unlock the neighborhood you are in (the app only works in San Francisco, New York, and Paris right now). There are lists of answers to other questions, or you can post your own. If 200 people in a city sign up, the app will become available to them, which is an effort to make sure there is a strong community to gather answers from.
Leroy, who recently made a documentary about the sharing economy, realized there was “one tool missing for local communities” in the space, and decided to create this app.
“We want to build a more local-based network, and empower and increase trust without having people share all their identity,” he said.
Different social channels look at search in different ways, but the trend is definitely moving to more social searching or location-based searching, according to Altimeter social media analyst Rebecca Lieb. Arguably, she said, Yelp, Groupon, and even Google Maps are vertical search engines. If you want to find a nearby restaurant, pharmacy, or deal, you look to these platforms.
However, she credits Aardvark as one of the first in the space, which was a social search engine founded in 2007 that used instant messaging and email to get answers from your existing contacts. Google bought the company in 2010. It shows the idea of crowdsourcing answers isn’t new, but the engines have become “appified,” she said.
“Now it’s geo-local specific,” she said. “We’re asking a lot more of those geo-local questions because of location-based immediacy [that we want].”
Think Seamless, with which you find the food nearby that most satisfies your appetite. Even Tinder and Grindr are social search engines, Lieb said. You want to meet up with the people that are closest to you, geographically….
His challenge is to offer rewards to entice people to sign up for the app. Eventually, Leroy would like to strengthen the networks and scale Enquire to cities and neighborhoods all over the world. Once that’s in place, people can start creating their own neighborhoods — around a school or workplace, where they hang out regularly — instead of using the existing constraints.
“I may be an expert in one area, and a newbie in another. I want to emphasize the activity and content from users to give them credit to other users and build that trust,” he said.
Usually, our first instinct is to open Yelp to find the best sushi restaurant or Google to search the closest concert venue, and it will probably stay that way for some time. But the idea that the opinions and insights of other human beings, even of strangers, are becoming much more valuable because of the internet is not far-fetched.
Admit it: haven’t you had a fleeting thought of starting a Kickstarter campaign for an idea? Looked for a cheaper place to stay on Airbnb than that hotel you normally book in New York? Or considered financing someone’s business idea across the world using Kiva? If so, then you’ve engaged in social search.
Suddenly, crowdsourcing answers for the things that pique your interest on your morning walk may not seem so strange after all.”

Why Statistically Significant Studies Aren’t Necessarily Significant


Michael White in Pacific Standard on how modern statistics have made it easier than ever for us to fool ourselves: “Scientific results often defy common sense. Sometimes this is because science deals with phenomena that occur on scales we don’t experience directly, like evolution over billions of years or molecules that span billionths of meters. Even when it comes to things that happen on scales we’re familiar with, scientists often draw counter-intuitive conclusions from subtle patterns in the data. Because these patterns are not obvious, researchers rely on statistics to distinguish the signal from the noise. Without the aid of statistics, it would be difficult to convincingly show that smoking causes cancer, that drugged bees can still find their way home, that hurricanes with female names are deadlier than ones with male names, or that some people have a precognitive sense for porn.
OK, very few scientists accept the existence of precognition. But Cornell psychologist Daryl Bem’s widely reported porn precognition study illustrates the thorny relationship between science, statistics, and common sense. While many criticisms were leveled against Bem’s study, in the end it became clear that the study did not suffer from an obvious killer flaw. If it hadn’t dealt with the paranormal, it’s unlikely that Bem’s work would have drawn much criticism. As one psychologist put it after explaining how the study went wrong, “I think Bem’s actually been relatively careful. The thing to remember is that this type of fudging isn’t unusual; to the contrary, it’s rampant–everyone does it. And that’s because it’s very difficult, and often outright impossible, to avoid.”…
That you can lie with statistics is well known; what is less commonly noted is how much scientists still struggle to define proper statistical procedures for handling the noisy data we collect in the real world. In an exchange published last month in the Proceedings of the National Academy of Sciences, statisticians argued over how to address the problem of false positive results, statistically significant findings that on further investigation don’t hold up. Non-reproducible results in science are a growing concern; so do researchers need to change their approach to statistics?
Valen Johnson, at Texas A&M University, argued that the commonly used threshold for statistical significance isn’t as stringent as scientists think it is, and therefore researchers should adopt a tighter threshold to better filter out spurious results. In reply, statisticians Andrew Gelman and Christian Robert argued that tighter thresholds won’t solve the problem; they simply “dodge the essential nature of any such rule, which is that it expresses a tradeoff between the risks of publishing misleading results and of important results being left unpublished.” The acceptable level of statistical significance should vary with the nature of the study. Another team of statisticians raised a similar point, arguing that a more stringent significance threshold would exacerbate the worrying publishing bias against negative results. Ultimately, good statistical decision making “depends on the magnitude of effects, the plausibility of scientific explanations of the mechanism, and the reproducibility of the findings by others.”
However, arguments over statistics usually occur because it is not always obvious how to make good statistical decisions. Some bad decisions are clear. As xkcd’s Randall Munroe illustrated in his comic on the spurious link between green jelly beans and acne, most people understand that if you keep testing slightly different versions of a hypothesis on the same set of data, sooner or later you’re likely to get a statistically significant result just by chance. This kind of statistical malpractice is called fishing or p-hacking, and most scientists know how to avoid it.
But there are more subtle forms of the problem that pervade the scientific literature. In an unpublished paper (PDF), statisticians Andrew Gelman, at Columbia University, and Eric Loken, at Penn State, argue that researchers who deliberately avoid p-hacking still unknowingly engage in a similar practice. The problem is that one scientific hypothesis can be translated into many different statistical hypotheses, with many chances for a spuriously significant result. After looking at their data, researchers decide which statistical hypothesis to test, but that decision is skewed by the data itself.
To see how this might happen, imagine a study designed to test the idea that green jellybeans cause acne. There are many ways the results could come out statistically significant in favor of the researchers’ hypothesis. Green jellybeans could cause acne in men, but not in women, or in women but not men. The results may be statistically significant if the jellybeans you call “green” include Lemon Lime, Kiwi, and Margarita but not Sour Apple. Gelman and Loken write that “researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.” In the end, the researchers may explicitly test only one or a few statistical hypotheses, but their decision-making process has already biased them toward the hypotheses most likely to be supported by their data. The result is “a sort of machine for producing and publicizing random patterns.”
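The mechanism Gelman and Loken describe is easy to reproduce numerically: simulate data in which jellybean color has no effect at all, allow several data-dependent subgroup comparisons per study, and “significant” findings appear far more often than the nominal 5% false-positive rate. The simulation below is an illustration of that multiple-comparisons mechanism, not the authors’ own analysis; the eight subgroup tests per study are an arbitrary assumption.

```python
# Simulation of the "forking paths" problem: jellybean color has no effect at
# all in the simulated data, yet allowing several data-dependent subgroup
# comparisons per study makes a "significant" finding far more likely than the
# nominal 5% false-positive rate of a single pre-registered test.
import random
from math import erf, sqrt
from statistics import mean, stdev

random.seed(0)

def two_sided_pvalue(a, b):
    """Crude two-sample z-test p-value; adequate for an illustration."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def run_study(n=100, subgroup_tests=8):
    """One 'study': acne scores are pure noise, but many subgroups may be tested."""
    pvals = []
    for _ in range(subgroup_tests):  # e.g. men only, women only, shade of green...
        treated = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        pvals.append(two_sided_pvalue(treated, control))
    return min(pvals)  # report the most favorable comparison

studies = 2000
false_positives = sum(run_study() < 0.05 for _ in range(studies))
print(f"'Significant' in {false_positives / studies:.0%} of studies "
      f"despite a true effect of zero")
```

With eight comparisons available per study, roughly a third of these pure-noise studies produce at least one p-value below 0.05.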
Gelman and Loken are not alone in their concern. Last year Daniele Fanelli, at the University of Edinburgh, and John Ioannidis, at Stanford University, reported that many U.S. studies, particularly in the social sciences, may overestimate the effect sizes of their results. “All scientists have to make choices throughout a research project, from formulating the question to submitting results for publication.” These choices can be swayed “consciously or unconsciously, by scientists’ own beliefs, expectations, and wishes, and the most basic scientific desire is that of producing an important research finding.”
What is the solution? Part of the answer is to not let measures of statistical significance override our common sense—not our naïve common sense, but our scientifically-informed common sense…”

Behavioural Sciences in Practice: Lessons for EU Policymakers


Chapter by Fabiana Di Porto and Nicoletta Rangone in Anne-Lise Sibony and Alberto Alemanno (eds), Nudging and the Law: What Can EU Learn from Behavioural Sciences? (forthcoming): “This chapter establishes how the regulatory process should change in order to bring out and use evidence from cognitive sciences. It further discusses the impact of cognitive sciences on the regulatory toolkit, positing that, on the one hand, traditional tools should be rethought and, on the other, that the regulatory toolkit should be enriched by two more strategies: empowerment and nudging (where the first eases the overcoming of cognitive and behavioural limitations, while the second exploits them).”