The legal macroscope: Experimenting with visual legal analytics


Nicola Lettieri, Antonio Altamura and Delfina Malandrino at InfoVis: “This work presents Knowlex, a web application designed for visualization, exploration, and analysis of legal documents coming from different sources. Understanding the legal framework relating to a given issue often requires the analysis of complex legal corpora. When a legal professional or a citizen tries to understand how a given phenomenon is regulated, their attention cannot be limited to a single source of law but must be directed to the bigger picture resulting from all the legal sources related to the theme under investigation. Knowlex exploits data visualization to support this activity by means of interactive maps that make sense of heterogeneous documents (norms, case law, legal literature, etc.).

Starting from a legislative measure (what we define as Root) given as input by the user, the application implements two visual analytics functionalities aiming to offer new insights into the legal corpus under investigation. The first is an interactive node graph depicting relations and properties of the documents. The second is a zoomable treemap showing the topics, the evolution, and the size of the legal literature that has settled over the years around the norm of interest. The article gives an overview of the research conducted so far, presenting the results of a preliminary study evaluating the effectiveness of visualization in supporting legal activities, the effectiveness of Knowlex, the usability of the proposed system, and overall user satisfaction when interacting with its applications…(More)”.
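
The first functionality lends itself to a concrete illustration. Below is a minimal sketch, under assumed document identifiers and relation types (the paper does not specify Knowlex's data model), of how a Root measure and its related documents can be represented as a typed directed graph:

```python
# Hypothetical sketch: modelling a Knowlex-style document map as a typed
# directed graph. Node and edge attributes are illustrative assumptions,
# not the actual Knowlex data model.
import networkx as nx

def build_legal_graph(root_id, relations):
    """Build a graph around a Root measure.

    relations: iterable of (source_id, target_id, relation_type, doc_kind).
    """
    g = nx.DiGraph()
    g.add_node(root_id, kind="root")
    for src, dst, rel, kind in relations:
        g.add_node(dst, kind=kind)          # e.g. "norm", "case law", "literature"
        g.add_edge(src, dst, relation=rel)  # e.g. "modified_by", "applied_in"
    return g

# Placeholder identifiers for illustration only.
relations = [
    ("Root norm", "Amending norm", "modified_by", "norm"),
    ("Root norm", "Decision 42/2010", "applied_in", "case law"),
    ("Root norm", "Commentary (2012)", "discussed_in", "literature"),
]
g = build_legal_graph("Root norm", relations)
print(g.nodes(data=True))
print(g.edges(data=True))
```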

Group Privacy in Times of Big Data. A Literature Review


Paula Helm at Digital Culture & Society: “New technologies pose new challenges to the protection of privacy and stimulate new debates on its scope. Such debates usually concern the individual’s right to control the flow of his or her personal information. This article, however, discusses new challenges posed by new technologies in terms of their impact on groups and their privacy. Two main challenges are identified in this regard, both having to do with the formation of groups through the involvement of algorithms and a lack of civil awareness of the consequences of this involvement. On the one hand, there is the phenomenon of groups being created on the basis of big data without the members of such groups being aware of having been assigned to, and being treated as part of, a certain group. Here the challenge concerns the limits of personal law, manifesting in the inability of individuals to address possible violations of their right to privacy, since they are not aware of them. On the other hand, commercially driven websites influence the way groups form, grow, and communicate online, and they do so in such a subtle way that members often fail to take this influence into account. This is why one could speak of a kind of domination here, which calls for legal regulation. The article presents different approaches to addressing these two challenges, discussing their strengths and weaknesses. Finally, a conclusion gathers the insights reached by the different approaches and reflects on future challenges for research on group privacy in times of big data….(More)”

The Algorithm as a Human Artifact: Implications for Legal [Re]Search


Paper by Susan Nevelow Mart: “When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions that the teams of humans who construct the online world bring to the task are imported into the systems we use for research. This article looks at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When the same search is entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the top ten results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of 40% of the cases were unique to one database, and only about 7% of the cases were returned in the search results of all six databases. It is fair to say that each set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, to the trial lawyer seeking that one elusive case, to the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful….(More)”.
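
To make the overlap statistics concrete, here is a small worked example of how the two headline figures can be computed from per-database result sets. The database names and case identifiers are placeholders, not data from the study:

```python
# Illustrative computation of the overlap statistics reported in the study:
# the share of cases unique to one database and the share returned by all.
# Sets below are placeholders, not the study's actual top-ten lists.
from collections import Counter

results = {
    "db_a": {"case1", "case2", "case3"},
    "db_b": {"case2", "case4", "case5"},
    # ... in the study there would be six sets, one per database
}

counts = Counter(case for cases in results.values() for case in cases)
total = len(counts)
unique = sum(1 for n in counts.values() if n == 1)
in_all = sum(1 for n in counts.values() if n == len(results))
print(f"unique to one database: {unique / total:.0%}")       # 80% here
print(f"returned by every database: {in_all / total:.0%}")   # 20% here
```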

Privacy of Public Data


Paper by Kirsten E. Martin and Helen Nissenbaum: “The construct of an information dichotomy has played a defining role in regulating privacy: information deemed private or sensitive typically earns high levels of protection, while lower levels of protection are accorded to information deemed public or non-sensitive. Challenging this dichotomy, the theory of contextual integrity associates privacy with complex typologies of information, each connected with respective social contexts. Moreover, it contends that information type is merely one among several variables that shape people’s privacy expectations and underpin privacy’s normative foundations. Other contextual variables include key actors – information subjects, senders, and recipients – as well as the principles under which information is transmitted, such as whether with subjects’ consent, as bought and sold, as required by law, and so forth. Prior work revealed the systematic impact of these other variables on privacy assessments, thereby debunking the defining effects of so-called private information.

In this paper, we shine a light on the opposite effect, challenging conventional assumptions about public information. The paper reports on a series of studies that probe attitudes and expectations regarding information that has been deemed public. Public records established through the historical practice of federal, state, and local agencies, as a case in point, are afforded little privacy protection, or possibly none at all. Motivated by progressive digitization and the creation of online portals through which these records have been made publicly accessible, our work underscores the need for more concentrated and nuanced privacy assessments, a need made even more urgent by vigorous open data initiatives, which call on federal, state, and local agencies to provide access to government records in both human- and machine-readable forms. Within a stream of research suggesting possible guard rails for open data initiatives, our work, guided by the theory of contextual integrity, provides insight into the factors systematically shaping individuals’ expectations and normative judgments concerning appropriate uses of and terms of access to information.

Using a factorial vignette survey, we asked respondents to rate the appropriateness of a series of scenarios in which contextual elements were systematically varied; these elements included the data recipient (e.g. bank, employer, friend), the data subject, and the source, or sender, of the information (e.g. individual, government, data broker). Because the object of this study was to highlight the complexity of people’s privacy expectations regarding so-called public information, information types were drawn from data fields frequently held in public government records (e.g. voter registration, marital status, criminal standing, and real property ownership).

Our findings are noteworthy on both theoretical and practical grounds. In the first place, they reinforce key assertions of contextual integrity about the simultaneous relevance to privacy of factors beyond information type. In the second place, they reveal discordance between people’s actual expectations and the truisms that have frequently shaped privacy-relevant public policy. …(More)”
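
For readers unfamiliar with the method, a factorial vignette survey crosses every level of each contextual factor so that respondents rate concrete scenarios. A minimal sketch follows, with factor levels loosely drawn from the description above rather than the authors' full design:

```python
# Sketch of a factorial vignette design: crossing every level of each
# contextual factor yields the scenarios respondents rate. Factor levels
# are illustrative, loosely drawn from the description above.
from itertools import product

factors = {
    "recipient": ["bank", "employer", "friend"],
    "sender": ["individual", "government", "data broker"],
    "info_type": ["voter registration", "marital status",
                  "criminal record", "property ownership"],
}

vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(vignettes))   # 3 * 3 * 4 = 36 scenarios
print(vignettes[0])     # one concrete scenario to put before a respondent
```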

‘Everyone sees everything’: Overhauling Ukraine’s corrupt contracting sector


Open Contracting Stories: “When Yuriy Bugay, a Maidan revolutionary, showed up for work at Kiev’s public procurement office for the first time, it wasn’t the most uplifting sight. The 27-year-old had left his job in the private sector after joining a group of activists during the protests in Kiev’s main square, with dreams of reforming Ukraine’s dysfunctional public institutions. They chose one of the country’s most broken sectors, public procurement, as their starting point, and within a year, their project had been adopted by Ukraine’s economy ministry, Bugay’s new employer.

…The initial team behind the reform was an eclectic bunch of several hundred volunteers, including NGO workers, tech experts, businesspeople and civil servants. They decided the best way to make government deals more open was to create an e-procurement system, which they called ProZorro (meaning “transparent” in Ukrainian). Built on open source software, the system is designed to let government bodies conduct procurement deals electronically and transparently, while also making the state’s information about public contracts easily accessible online for anyone to see. Although it was initially conceived as a tool for fighting corruption, the potential benefits of the system are much broader — increasing competition, reducing the time and money spent on contracting processes, helping buyers make better decisions and making procurement fairer for suppliers….

In its pilot phase, ProZorro saved over UAH 1.5 billion (US$55 million) for more than 3,900 government agencies and state-owned enterprises across Ukraine. This pilot, which won a prestigious World Procurement Award in 2016, was so successful that Ukraine’s parliament passed a new public procurement law requiring all government contracting to be carried out via ProZorro from 1 August 2016. Since then, potential savings to the procurement budget have snowballed. As of November 2016, they stand at an estimated UAH 5.97 billion (US$233 million), with more than 15,000 buyers and 47,000 commercial suppliers using the new system.

At the same time, the team behind the project has evolved and professionalized….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model: is this racially fair? We have to define these terms computationally, or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
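
The definitional question Christian raises is easy to state in code: given a model's predictions, compute each error rate per protected group and compare. A toy sketch with invented data follows; it is an illustration of the criteria, not anyone's production audit:

```python
# Toy illustration of the competing fairness criteria mentioned above:
# equal false positive rates vs. equal false negative rates across groups.
# Labels and predictions are invented for the example.
def error_rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

groups = {
    "group_a": ([0, 0, 1, 1, 0, 1], [0, 1, 1, 0, 0, 1]),
    "group_b": ([0, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]),
}
for name, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{name}: FPR={fpr:.2f}  FNR={fnr:.2f}")
# "Equal false positive rate" asks FPR to match across groups; "equal false
# negative rate" asks the same of FNR. Which criterion, or what tradeoff
# between them, society should require is exactly the question at issue.
```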

Using open government for climate action


Elizabeth Moses at Eco-Business: “Countries made many national climate commitments as part of the Paris Agreement on climate change, which entered into force earlier this month. Now comes the hard part of implementing those commitments. The public can serve an invaluable watchdog role, holding governments accountable for following through on their targets and making sure climate action happens in a way that’s fair and inclusive.

But first, the climate and open government communities will need to join forces….

Here are four areas where these communities can lean in together to ensure governments follow through on effective climate action:

1) Expand access to climate data and information.

Open government and climate NGOs and local communities can expand the use of traditional transparency tools and processes such as Freedom of Information (FOI) laws, transparent budgeting, open data policies and public procurement to enhance open information on climate mitigation, adaptation and finance.

For example, Transparencia Mexicana used Mexico’s Freedom of Information Law to collect data to map climate finance actors and the flow of finance in the country. This allows them to make specific recommendations on how to safeguard climate funds against corruption and ensure the money translates into real action on the ground….

2) Promote inclusive and participatory climate policy development.

Civil society and community groups already play a crucial role in advocating for climate action and improving climate governance at the national and local levels, especially when it comes to safeguarding poor and vulnerable people, who often lack political voice….

3) Take legal action for stronger accountability.

Accountability at a national level can only be achieved if grievance mechanisms are in place to address a lack of transparency or public participation, or address the impact of projects and policies on individuals and communities.

Civil society groups and individuals can use legal actions like climate litigation, petitions, administrative policy challenges and court cases at the national, regional or international levels to hold governments and businesses accountable for failing to effectively act on climate change….

4) Create new spaces for advocacy.

Bringing the climate and open government movements together allows civil society to tap new forums for securing momentum around climate policy implementation. For example, many civil society NGOs are highlighting the important connections between a strong Governance Goal 16 under the 2030 Agenda for Sustainable Development, and strong water quality and climate change policies….(More)”

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.

This apparently unlikely venture for McLaren, better known as the team of star drivers such as Fernando Alonso and Jenson Button, extends beyond its work with GSK. It has partnered with Birmingham Children’s Hospital on a £1.8m project that applies McLaren’s race-day data-analysis expertise to collect patient information such as heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is using F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries in protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates describe this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks whether the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

Data can become Nigeria’s new ‘black gold’


Labi Ogunbiyi in the Financial Times: “In the early 2000s I decided to leave my job heading the African project finance team in a global law firm to become an investor. My experience of managing big telecoms, infrastructure and energy transactions — and, regrettably, often disputes — involving governments, project sponsors, investors, big contractors, multilateral and development agencies had left me dissatisfied. Much of the ownership of the assets being fought over remained in the hands of international conglomerates. Africa’s lack of capacity to raise the capital to own them directly — and to develop the technical skills necessary for growth — was a clear weakness…

Yet, nearly 15 years after the domestic oil and gas sector began to evolve, oil is no longer the country’s only “black gold”. If I compare how Nigeria’s energy sector has evolved since the early 2000s with how its ICT and broader technology industry has emerged, and the opportunities that both represent for the future, the contrast is stark. Nigeria, and the rest of the continent, has been enjoying a technology revolution, and the opportunity it represents has the potential to affect every sector of the economy. According to Africa Infotech Consulting, Nigeria’s mobile penetration rate — a measure of the number of devices relative to population — is more than 90 per cent, less than 20 years after the first mobile network appeared on the continent. Recent reports suggest more than 10 per cent of Nigerians have a smartphone. The availability and cost of fast data have improved dramatically….(More)”

New Data Portal to analyze governance in Africa