Citizen Science for Citizen Access to Law


Paper by Michael Curtotti, Wayne Weibel, Eric McCreath, Nicolas Ceynowa, Sara Frug, and Tom R Bruce: “This paper sits at the intersection of citizen access to law, legal informatics and plain language. The paper reports the results of a joint project of the Cornell University Legal Information Institute and the Australian National University, which collected thousands of crowdsourced assessments of the readability of law through the Cornell LII site. The aim of the project is to improve the accuracy with which the readability of legal sentences can be predicted. The study asked readers on legislative pages of the LII site to rate passages from the United States Code, the Code of Federal Regulations and other texts for readability and other characteristics. The research provides insight into who uses legal rules and how they do so, and it supports conclusions about the current readability of law and the spread of readability among legal rules. The research is intended to produce a dataset of legal rules labelled by human judges for readability; such a dataset, in combination with machine learning, will assist in identifying the factors in legal language that impede readability and access for citizens. As far as we are aware, this is the largest-ever study of the readability and usability of legal language and the first to apply crowdsourcing to such an investigation. The research is an example of the possibilities for enhancing access to law by engaging end users in the online legal publishing environment and through collaboration between legal publishers and researchers….(More)”
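To make the idea of measuring readability concrete, here is a minimal sketch (naive regex tokenisation and syllable counting) of the classic Flesch Reading Ease formula applied to a legal-style sentence. It is purely an illustration of the kind of surface-level baseline that crowdsourced human judgements can be compared against, not the LII/ANU project’s actual labelling or machine-learning pipeline.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups, minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Classic Flesch Reading Ease score; higher values indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / max(1, len(words))
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

if __name__ == "__main__":
    sample = ("Notwithstanding any other provision of law, the Secretary shall "
              "promulgate regulations to carry out the purposes of this section.")
    print(round(flesch_reading_ease(sample), 1))
```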

New surveys reveal dynamism, challenges of open data-driven businesses in developing countries


Alla Morrison at World Bank Open Data blog: “Was there a class of entrepreneurs emerging to take advantage of the economic possibilities offered by open data, were investors keen to back such companies, were governments attuned and responsive to the demands of such companies, and what were some of the key financing challenges and opportunities in emerging markets? As we began our work on the concept of an Open Fund, we partnered with Ennovent (India), MDIF (East Asia and Latin America) and Digital Data Divide (Africa) to conduct short market surveys to answer these questions, with a focus on trying to understand whether a financing gap truly existed in these markets. The studies were fairly quick (4-6 weeks) and reached only a small number of companies (193 in India, 70 in Latin America, 63 in South East Asia, and 41 in Africa – and not everybody responded), but the findings were fairly consistent.

  • Open data is still a very nascent concept in emerging markets, and there’s only a small class of entrepreneurs/investors that is aware of the economic possibilities; there’s a lot of work to do in the ‘enabling environment’
    • In many regions the distinction between open data, big data, and private-sector generated/scraped/collected data was blurry at best among entrepreneurs and investors (some of our findings are consequently better indicators of data-driven rather than open data-driven businesses)
  • There’s a small but growing number of open data-driven companies in all the markets we surveyed and these companies target a wide range of consumers/users and are active in multiple sectors
    • A large percentage of identified companies operate in sectors with high social impact – health and wellness, environment, agriculture, transport. For instance, in India, after excluding business analytics companies, a third of data companies seeking financing are in healthcare and a fifth in food and agriculture, and some of them have the low-income population or the rural segment of India as an intended beneficiary segment. In Latin America, the number of companies in business services, research and analytics was closely followed by health, environment and agriculture. In Southeast Asia, business, consumer services, and transport came out in the lead.
    • We found the highest number of companies in Latin America and Asia with the following countries leading the way – Mexico, Chile, and Brazil, with Colombia and Argentina closely behind in Latin America; and India, Indonesia, Philippines, and Malaysia in Asia
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia
    • We heard demand for different kinds of financing (equity, debt, working capital), but the majority of the need was for equity and quasi-equity in amounts ranging from US$100,000 to US$5 million, with averages of between US$2 million and US$3 million depending on the region.
  • There’s a significant financing gap in all the markets
    • The investment sizes required, while they range up to several million dollars, are generally small. Analysis of more than 300 data companies in Latin America and Asia indicates a total estimated need for financing of more than $400 million
  • Venture capital firms generally don’t recognize data as a separate sector and lump data-driven companies in with their standard information and communication technology (ICT) investments
    • Interviews with founders suggest that moving beyond seed stage is particularly difficult for data-driven startups. While many companies are able to cobble together an initial seed round augmented by bootstrapping to get their idea off the ground, they face a great deal of difficulty when trying to raise a second, larger seed round or Series A investment.
    • From the perspective of startups, investors favor banal e-commerce (e.g., according to Tech in Asia, out of the $645 million in technology investments made public across the region in 2013, 92% were related to fashion and online retail) or consumer service startups, and they ignore open data-focused startups even if they have a strong business model and solid key performance indicators. The space is ripe for a long-term investor with a generous risk appetite and multiple bottom line goals.
  • Poor data quality was the number one issue these companies reported.
    • Companies reported significant waste and inefficiency in accessing/scraping/cleaning data.

The analysis below borrows heavily from the work done by the partners. We should of course mention that the findings are provisional and should not be considered authoritative (please see the section on methodology for more details)….(More).”

Sensor Law


Paper by Sandra Braman: For over two decades, information policy-making for human society has been increasingly supplemented, supplanted, and/or superseded by machinic decision-making; it has been over three decades since legal decision-making was explicitly put in place to serve machinic rather than social systems; and over four decades since designers of the Internet took the position that they were serving non-human (machinic, or daemon) users in addition to humans. As the “Internet of Things” becomes more and more of a reality, these developments increasingly shape the nature of governance itself. This paper’s discussion of contemporary trends in these diverse modes of human-computer interaction at the system level — interactions between social systems and technological systems — introduces the changing nature of the law as a sociotechnical problem in itself. In such an environment, technological innovations are often also legal innovations, and legal developments require socio-technical analysis as well as social, legal, political, and cultural approaches.

Examples of areas in which sensors are already receiving legal attention are rife. A non-comprehensive listing includes privacy concerns, beginning but not ending with those raised by sensors embedded in phones and geolocation devices, which are the most widely discussed and those of which the public is most aware. Sensor issues arise in environmental law, health law, marine law, and intellectual property law, and they are raised by new technologies in use for national security purposes, including confidence- and security-building measures intended for peacekeeping. They are raised by liability issues for objects that range from cars to ovens. And sensor issues are at the core of concerns about “telemetric policing,” as that is coming into use not only in North America and Europe, but in societies such as that of Brazil as well.

Sensors are involved in every stage of legal processes, from identification of persons of interest to determination of judgments and consequences of judgments. Their use significantly alters the historically-developed distinction among types of decision-making meant to come into use at different stages of the process, raising new questions about when, and how, human decision-making needs to dominate and when, and how, technological innovation might need to be shaped by the needs of social rather than human systems.

This paper will focus on the legal dimensions of sensors used in ubiquitous embedded computing….(More)”

Eight ways to make government more experimental


Jonathan Breckon et al at NESTA: “When the banners and bunting have been tidied away after the May election, and a new bunch of ministers sit at their Whitehall desks, could they embrace a more experimental approach to government?

Such an approach requires a degree of humility: facing up to the fact that we don’t have all the answers for the next five years. We need to test things out, evaluate new ways of doing things with the best of social science, grow what works, and drop policies that fail.

But how best to go about it? Here are our eight ways to make it a reality:

  1. Make failure OK. A more benign attitude to risk is central to experimentation. As a 2003 Cabinet Office review entitled Trying it Out said, a pilot that reveals a policy to be flawed should be ‘viewed as a success rather than a failure, having potentially helped to avert a potentially larger political and/or financial embarrassment’. Pilots are particularly important in fast-moving areas such as technology, where they allow promising fresh ideas to be tried in real time. Our ‘Visible Classroom’ pilot tried an innovative approach to teacher CPD (continuing professional development) developed from technology for television subtitling.
  2. Avoid making policies that are set in stone. Allowing policy to be more project-based, flexible and time-limited could encourage room for manoeuvre, according to a previous Nesta report, State of Uncertainty: Innovation policy through experimentation. The Department for Work and Pensions’ Employment Retention and Advancement pilot scheme to help people back to work was designed to influence the shape of legislation. It allowed for amendments and learning as it was rolled out. We need more policy experiments like this.
  3. Work with the grain of the current policy environment. Experimenters need to be opportunists: nimble, flexible, and ready to seize windows of opportunity to experiment. Some services have to be rolled out in stages due to budget constraints, which offers opportunities to try things out before going national. For instance, the Mexican Oportunidades anti-poverty experiments, which eventually reached 5.8 million households in all Mexican states, had to be trialled first in a handful of areas. Greater devolution is creating a patchwork of different policy priorities, funding and delivery models – so-called ‘natural experiments’. Let’s seize the opportunity to deliberately test and compare across different jurisdictions. What about a trial of basic income in Northern Ireland, for example, along the lines of recent Finnish proposals, or universal free childcare in Scotland?
  4. Experiments need the most robust and appropriate evaluation methods, such as randomised controlled trials where they are suitable. Other methods, such as qualitative research, may be needed to pry open the ‘black box’ of policies – to learn about why and how things are working. Civil servants should use the government trial advice panel as a source of expertise when setting up experiments. (A minimal sketch of how a simple two-arm trial might be analysed appears after this list.)
  5. Grow the public debate about the importance of experimentation. Facebook had to apologise after a global backlash to psychological experiments on 689,000 of its web users. Approval by ethics committees – normal practice for trials in hospitals and universities – is essential, but we can’t just rely on experts. We need dedicated public understanding of experimentation programmes, perhaps run by the Evidence Matters or Ask for Evidence campaigns at Sense about Science. Taking part in an experiment can itself be a learning opportunity that creates an appetite amongst the public, something we have found from running an RCT with schools.
  6. Create ‘Skunkworks’ institutions. New or improved institutional structures within government can also help with experimentation. The Behavioural Insights Team, located in Nesta, operates a classic ‘skunkworks’ model, semi-detached from day-to-day bureaucracy. The nine UK What Works Centres help try things out semi-detached from central power; the Education Endowment Foundation, for example, sources innovations widely from across the public and private sectors (including Nesta) rather than generating ideas exclusively in house or in government.
  7. Find low-cost ways to experiment. People sometimes worry that trials are expensive and complicated. This does not have to be the case. Experiments to encourage organ donation by the Government Digital Service and the Behavioural Insights Team cost an estimated £20,000, because the digital experiments didn’t involve setting up expensive new interventions – just changing messages on web pages for existing services. Some programmes do, however, need significant funding to evaluate, and budgets need to be found for it. A memo from the White House Office of Management and Budget has asked new Government schemes seeking funding to allocate a proportion of their budgets to ‘randomized controlled trials or carefully designed quasi-experimental techniques’.
  8. Be bold. A criticism of some experiments is that they only deal with the margins of policy and delivery. Government officials and researchers should set up more ambitious experiments on nationally important big-ticket issues, from counter-terrorism to innovation in jobs and housing….(More)
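As a companion to point 4 above, here is a minimal sketch of how the headline result of a simple two-arm trial might be checked with a two-proportion z-test. The participant numbers and success rates are hypothetical, and a real policy trial would of course involve pre-registration, power calculations and more careful statistics.

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Compare outcome rates in the control (a) and treatment (b) arms of a two-arm trial."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z

if __name__ == "__main__":
    # Hypothetical numbers: 1,000 participants per arm, 12% vs 15% positive outcomes.
    effect, z = two_proportion_z(120, 1000, 150, 1000)
    print(f"effect = {effect:.3f}, z = {z:.2f}")
```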

Mission Control: A History of the Urban Dashboard


Futuristic control rooms have proliferated in dozens of global cities. Baltimore has its CitiStat Room, where department heads stand at a podium before a wall of screens and account for their units’ performance.  The Mayor’s office in London’s City Hall features a 4×3 array of iPads mounted in a wooden panel, which seems an almost parodic, Terry Gilliam-esque take on the Brazilian Ops Center. Meanwhile, British Prime Minister David Cameron commissioned an iPad app – the “No. 10 Dashboard” (a reference to his residence at 10 Downing Street) – which gives him access to financial, housing, employment, and public opinion data. As The Guardian reported, “the prime minister said that he could run government remotely from his smartphone.”

This is the age of Dashboard Governance, heralded by gurus like Stephen Few, founder of the “visual business intelligence” and “sensemaking” consultancy Perceptual Edge, who defines the dashboard as a “visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.” A well-designed dashboard, he says — one that makes proper use of bullet graphs, sparklines, and other visualization techniques informed by the “brain science” of aesthetics and cognition — can afford its users not only a perceptual edge, but a performance edge, too. The ideal display offers a big-picture view of what is happening in real time, along with information on historical trends, so that users can divine the how and why and redirect future action. As David Nettleton emphasizes, the dashboard’s utility extends beyond monitoring “the current situation”; it also “allows a manager to … make provisions, and take appropriate actions.”….

The dashboard market now extends far beyond the corporate world. In 1994, New York City police commissioner William Bratton adapted former officer Jack Maple’s analog crime maps to create the CompStat model of aggregating and mapping crime statistics. Around the same time, the administrators of Charlotte, North Carolina, borrowed a business idea — Robert Kaplan’s and David Norton’s “total quality management” strategy known as the “Balanced Scorecard” — and began tracking performance in five “focus areas” defined by the City Council: housing and neighborhood development, community safety, transportation, economic development, and the environment. Atlanta followed Charlotte’s example in creating its own city dashboard.

In 1999, Baltimore mayor Martin O’Malley, confronting a crippling crime rate and high taxes, designed CitiStat, “an internal process of using metrics to create accountability within his government.” (This rhetoric of data-tested internal “accountability” is prevalent in early dashboard development efforts.) The project turned to face the public in 2003, when Baltimore launched a website of city operational statistics, which inspired DCStat (2005), Maryland’s StateStat (2007), and NYCStat (2008). Since then, myriad other states and metro areas — driven by a “new managerialist” approach to urban governance, committed to “benchmarking” their performance against other regions, and obligated to demonstrate compliance with sustainability agendas — have developed their own dashboards.

The Open Michigan Mi Dashboard is typical of these efforts. The state website presents data on education, health and wellness, infrastructure, “talent” (employment, innovation), public safety, energy and environment, financial health, and seniors. You (or “Mi”) can monitor the state’s performance through a side-by-side comparison of “prior” and “current” data, punctuated with a thumbs-up or thumbs-down icon indicating the state’s “progress” on each metric. Another click reveals a graph of annual trends and a citation for the data source, but little detail about how the data are actually derived. How the public is supposed to use this information is an open question….(More)”
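The prior-versus-current comparison with a thumbs-up or thumbs-down icon is simple enough to sketch. The snippet below is a hypothetical illustration of that logic (the metric names and values are invented, and the real Mi Dashboard presumably works differently); the point is that each indicator needs a declared ‘direction of good’ before an up or down verdict means anything.

```python
# Hypothetical metrics: (name, prior value, current value, whether higher is better).
METRICS = [
    ("High school graduation rate (%)", 78.0, 80.5, True),
    ("Unemployment rate (%)", 8.1, 7.4, False),
    ("Infant mortality (per 1,000 births)", 7.0, 7.2, False),
]

def progress_icon(prior, current, higher_is_better):
    """Return a thumbs-up/down style verdict for a prior-vs-current comparison."""
    if current == prior:
        return "no change"
    improved = (current > prior) == higher_is_better
    return "thumbs-up" if improved else "thumbs-down"

for name, prior, current, higher_is_better in METRICS:
    print(f"{name}: {prior} -> {current} [{progress_icon(prior, current, higher_is_better)}]")
```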

What Your Tweets Say About You


From the New Yorker: “How much can your tweets reveal about you? Judging by the last nine hundred and seventy-two words that I used on Twitter, I’m about average when it comes to feeling upbeat and being personable, and I’m less likely than most people to be depressed or angry. That, at least, is the snapshot provided by AnalyzeWords, one of the latest creations from James Pennebaker, a psychologist at the University of Texas who studies how language relates to well-being and personality. One of Pennebaker’s most famous projects is a computer program called Linguistic Inquiry and Word Count (L.I.W.C.), which looks at the words we use, and in what frequency and context, and uses this information to gauge our psychological states and various aspects of our personality….

Take a study, out last month, from a group of researchers based at the University of Pennsylvania. The psychologist Johannes Eichstaedt and his colleagues analyzed eight hundred and twenty-six million tweets across fourteen hundred American counties. (The counties contained close to ninety per cent of the U.S. population.) Then, using lists of words—some developed by Pennebaker, others by Eichstaedt’s team—that can be reliably associated with anger, anxiety, social engagement, and positive and negative emotions, they gave each county an emotional profile. Finally, they asked a simple question: Could those profiles help determine which counties were likely to have more deaths from heart disease?

The answer, it turned out, was yes….

The researchers have a theory: they suggest that “the language of Twitter may be a window into the aggregated and powerful effects of the community context.” They point to other epidemiological studies which have shown that general facts about a community, such as its “social cohesion and social capital,” have consequences for the health of individuals. Broadly speaking, people who live in poorer, more fragmented communities are less healthy than people living in richer, integrated ones. “When we do a sub-analysis, we find that the power that Twitter has is in large part accounted for by community and socioeconomic variables,” Eichstaedt told me when we spoke over Skype. In short, a young person’s negative, angry, and stressed-out tweets might reflect his or her stress-inducing environment—and that same environment may have negative health repercussions for other, older members of the same community….(More)”
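A stripped-down sketch of the word-list approach follows: score each county’s tweets against a category lexicon, then correlate those scores with a health outcome. The word list, tweets, and death rates below are invented for illustration; the actual study used LIWC and purpose-built dictionaries over hundreds of millions of tweets, with far more careful statistics.

```python
import re

# Illustrative anger word list only -- the study used LIWC and custom dictionaries.
ANGER_WORDS = {"hate", "angry", "annoyed", "furious", "terrible"}

def category_rate(tweets, lexicon):
    """Share of tokens across a county's tweets that fall in a word list."""
    tokens = [t for tweet in tweets for t in re.findall(r"[a-z']+", tweet.lower())]
    return sum(t in lexicon for t in tokens) / len(tokens) if tokens else 0.0

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-county data: a few tweets and a heart-disease death rate.
counties = {
    "County A": (["so angry at this traffic", "hate mondays"], 212.0),
    "County B": (["great meeting with friends", "community garden volunteer day"], 168.0),
    "County C": (["furious and annoyed again", "everything is terrible"], 223.0),
}
anger = [category_rate(tweets, ANGER_WORDS) for tweets, _ in counties.values()]
mortality = [rate for _, rate in counties.values()]
print(round(pearson(anger, mortality), 2))
```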

Managerial Governance and Transparency in Public Sector to Improve Services for Citizens and Companies


Paper by Nunzio Casalino and Peter Bednar: “Recent debate and associated initiatives dealing with public sector innovation have mainly aimed at improving the effectiveness and efficiency of public service delivery and at improving transparency and user friendliness. Beyond typical administrative reforms, innovation is expected to help address societal challenges such as the aging population, inclusion, health care, education, public safety, the environment and the reduction of greenhouse gas emissions. The public sector consists of a complex open system of organizations with various tasks; decision-making can therefore be slower than in the private sector because of long chains of command. Innovations here will often have an impact across this complex organizational structure, and thus must be supported by a robust strategy. To strengthen democracy, promote government efficiency and effectiveness, and discourage the waste and misuse of government resources, public administrations have to promote a new, stronger level of openness in government. The purpose of this manuscript is to describe an innovative approach to the governance of public systems and services, currently applied in the Italian public administration domain, which could easily be replicated in other countries as well. Two initiatives – which collect and provide relevant public information gathered from different and heterogeneous public organizations in order to improve government processes and increase the quality of services for citizens and companies – are described. The cases have been validated through a case-analysis approach involving the Italian agency for public administration digitalization, in order to understand new e-government scenarios within the context of governmental reforms heavily influenced by the principles of the Open Government Model….(More)

Who Retweets Whom: How Digital And Legacy Journalists Interact on Twitter


Paper by Michael L. Barthel, Ruth Moon, and William Mari published by the Tow Center: “When bloggers and citizen journalists became fixtures of the U.S. media environment, traditional print journalists responded with a critique, as this latest Tow Center brief says. According to mainstream reporters, the interlopers were “unprofessional, unethical, and overly dependent on the very mainstream media they criticized. In a 2013 poll of journalists, 51 percent agreed that citizen journalism is not real journalism”.

However, the digital media environment, a space for easy interaction, has provided opportunities for journalists of all stripes to vault the barriers between the legacy and digital sectors; if not to collaborate, then at least to communicate.

This brief by three PhD candidates at the University of Washington – Michael L. Barthel, Ruth Moon and William Mari – takes a snapshot of how fifteen political journalists from BuzzFeed, Politico and The New York Times (representing digital, hybrid and legacy outlets, respectively) interact. The researchers place those interactions in the context of reporters’ longstanding traditions of gossip, goading, collaboration and competition.

They found tribalism, pronounced most strongly in the legacy outlet but present across each grouping. They found hierarchy and status-boosting. But those phenomena were not absolute; there were also instances of co-operation, sharing and mutual benefit. Nonetheless, by these indicators at least, there was a clear pecking order: digital and hybrid organizations’ journalists paid “more attention to traditional than digital publications”.

You can download your copy here (pdf).”
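The underlying analysis (tallying who retweets whom and aggregating by outlet type) can be sketched in a few lines. The journalists, outlet labels, and retweet events below are hypothetical placeholders, not the brief’s data.

```python
from collections import Counter

# Hypothetical outlet categories and retweet events as (retweeter, retweeted) pairs.
OUTLET = {"alice": "digital", "bob": "digital", "carol": "hybrid",
          "dan": "hybrid", "erin": "legacy", "frank": "legacy"}
RETWEETS = [("alice", "erin"), ("alice", "frank"), ("bob", "erin"),
            ("carol", "erin"), ("dan", "frank"), ("erin", "frank")]

# Tally who retweets whom at the outlet-category level.
flows = Counter((OUTLET[src], OUTLET[dst]) for src, dst in RETWEETS)
for (src_cat, dst_cat), count in sorted(flows.items()):
    print(f"{src_cat} -> {dst_cat}: {count}")
```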

New take on game theory offers clues on why we cooperate


Alexander J Stewart at The Conversation: “Why do people cooperate? This isn’t a question anyone seriously asks. The answer is obvious: we cooperate because doing so is usually synergistic. It creates more benefit for less cost and makes our lives easier and better.
Maybe it’s better to ask why people don’t always cooperate. But the answer here seems obvious too. We don’t do so if we think we can get away with it – if we can save ourselves the effort of working with someone else but still gain the benefits of others’ cooperation. And, perhaps, we withhold cooperation as punishment for others’ past refusal to collaborate with us.
Since there are good reasons to cooperate – and good reasons not to do so – we are left with a question without an obvious answer: under what conditions will people cooperate?
Despite its seeming simplicity, this question is very complicated, from both a theoretical and an experimental point of view. The answer matters a great deal to anyone trying to create an environment that fosters cooperation, from corporate managers and government bureaucrats to parents of unruly siblings.
New research into game theory I’ve conducted with Joshua Plotkin offers some answers – but raises a lot of questions of its own too.
Traditionally, research into game theory – the study of strategic decision making – focused either on whether a rational player should cooperate in a one-off interaction or on looking for the “winning solutions” that allow an individual who wants to cooperate to make the best decisions across repeated interactions.
Our more recent inquiries aim to understand the subtle dynamics of behavioral change when there are an infinite number of potential strategies (much like life) and the game payoffs are constantly shifting (also much like life).
By investigating this in more detail, we can better learn how to incentivize people to cooperate – whether by setting the allowance we give kids for doing chores, by rewarding teamwork in school and at work or even by how we tax to pay for public benefits such as healthcare and education.
What emerges from our studies is a complex and fascinating picture: the amount of cooperation we see in large groups is in constant flux, and incentives that mean well can inadvertently lead to less rather than more cooperative behavior….(More)”
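For readers unfamiliar with the framework, the toy simulation below plays a standard iterated prisoner’s dilemma between two fixed strategies. It shows why repetition can sustain cooperation, but it is only the textbook setup: the Stewart–Plotkin work described above concerns far richer settings, with effectively unlimited strategies and payoffs that shift over time.

```python
# Payoff matrix for one round of the prisoner's dilemma: (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each history entry is (own move, opponent's move)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # cooperation collapses after round one: (9, 14)
```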

Collective Intelligence or Group Think?


Paper by Nassim JafariNaimi and Eric M. Meyers, “Engaging Participation Patterns in World without Oil”: “This article presents an analysis of participation patterns in an Alternate Reality Game, World Without Oil. This game aims to bring people together in an online environment to reflect on how an oil crisis might affect their lives and communities, as a way both to counter such a crisis and to build collective intelligence about responding to it. We present a series of participation profiles based on a quantitative analysis of 1,554 contributions to the game narrative made by 322 players. We further qualitatively analyze a sample of these contributions. We outline the dominant themes, the majority of which engage the global oil crisis for its effects on commute options and present micro-sustainability solutions in response. We further draw on the quantitative and qualitative analysis of this space to discuss how the design of the game, specifically its framing of the problem, feedback mechanism, and absence of subject-matter expertise, counters its aim of generating collective intelligence, making it conducive to groupthink….(More)”