French digital rights bill published in ‘open democracy’ first


France24: “A proposed law on the Internet and digital rights in France has been opened to public consultation before it is debated in parliament in an “unprecedented” exercise in “open democracy”.

The text of the “Digital Republic” bill was published online on Saturday and is open to suggestions for amendments by French citizens until October 17.

It can be found on the “Digital Republic” web page, and is even available in English.

“We are opening a new page in the history of our democracy,” Prime Minister Manuel Valls said at a press conference as the consultation was launched. “This is the first time in France, or indeed in any European country, that a proposed law has been opened to citizens in this way.”

“And it won’t be the last time,” he said, adding that the move was an attempt to redress a “growing distrust of politics”.

Participants will be able to give their opinions and make suggestions for changes to the text of the bill.

Suggestions that get the highest number of public votes will be guaranteed an official response before the bill is presented to parliament.

Freedoms and fairness

In its original, unedited form, the text of the bill leans heavily towards online freedoms as well as improving the transparency of government.

An “Open Data” policy would make official documents and public sector research available online, while a “Net Neutrality” clause would prevent Internet services such as Netflix or YouTube from paying for faster connection speeds at the expense of everyone else.

For personal freedoms, the law would give citizens the right to recover emails, files and other data such as pictures stored on “cloud” services….(More)”

Researchers wrestle with a privacy problem


Erika Check Hayden at Nature: “The data contained in tax returns, health and welfare records could be a gold mine for scientists — but only if they can protect people’s identities….In 2011, six US economists tackled a question at the heart of education policy: how much does great teaching help children in the long run?

They started with the records of more than 11,500 Tennessee schoolchildren who, as part of an experiment in the 1980s, had been randomly assigned to high- and average-quality teachers between the ages of five and eight. Then they gauged the children’s earnings as adults from federal tax returns filed in the 2000s. The analysis showed that the benefits of a good early education last for decades: each year of better teaching in childhood boosted an individual’s annual earnings by some 3.5% on average. Other data showed the same individuals besting their peers on measures such as university attendance, retirement savings, marriage rates and home ownership.

The economists’ work was widely hailed in education-policy circles, and US President Barack Obama cited it in his 2012 State of the Union address when he called for more investment in teacher training.

But for many social scientists, the most impressive thing was that the authors had been able to examine US federal tax returns: a closely guarded data set that was then available to researchers only with tight restrictions. This has made the study an emblem for both the challenges and the enormous potential power of ‘administrative data’ — information collected during routine provision of services, including tax returns, records of welfare benefits, data on visits to doctors and hospitals, and criminal records. Unlike Internet searches, social-media posts and the rest of the digital trails that people establish in their daily lives, administrative data cover entire populations with minimal self-selection effects: in the US census, for example, everyone sampled is required by law to respond and tell the truth.

This puts administrative data sets at the frontier of social science, says John Friedman, an economist at Brown University in Providence, Rhode Island, and one of the lead authors of the education study. “They allow researchers to not just get at old questions in a new way,” he says, “but to come at problems that were completely impossible before.”….

But there is also concern that the rush to use these data could pose new threats to citizens’ privacy. “The types of protections that we’re used to thinking about have been based on the twin pillars of anonymity and informed consent, and neither of those hold in this new world,” says Julia Lane, an economist at New York University. In 2013, for instance, researchers showed that they could uncover the identities of supposedly anonymous participants in a genetic study simply by cross-referencing their data with publicly available genealogical information.

Many people are looking for ways to address these concerns without inhibiting research. Suggested solutions include policy measures, such as an international code of conduct for data privacy, and technical methods that allow the use of the data while protecting privacy. Crucially, notes Lane, although preserving privacy sometimes complicates researchers’ lives, it is necessary to uphold the public trust that makes the work possible.

“Difficulty in access is a feature, not a bug,” she says. “It should be hard to get access to data, but it’s very important that such access be made possible.” Many nations collect administrative data on a massive scale, but only a few, notably in northern Europe, have so far made it easy for researchers to use those data.

In Denmark, for instance, every newborn child is assigned a unique identification number that tracks his or her lifelong interactions with the country’s free health-care system and almost every other government service. In 2002, researchers used data gathered through this identification system to retrospectively analyse the vaccination and health status of almost every child born in the country from 1991 to 1998 — 537,000 in all. At the time, it was the largest study ever to disprove the now-debunked link between measles vaccination and autism.

Other countries have begun to catch up. In 2012, for instance, Britain launched the unified UK Data Service to facilitate research access to data from the country’s census and other surveys. A year later, the service added a new Administrative Data Research Network, which has centres in England, Scotland, Northern Ireland and Wales to provide secure environments for researchers to access anonymized administrative data.

In the United States, the Census Bureau has been expanding its network of Research Data Centers, which currently includes 19 sites around the country at which researchers with the appropriate permissions can access confidential data from the bureau itself, as well as from other agencies. “We’re trying to explore all the available ways that we can expand access to these rich data sets,” says Ron Jarmin, the bureau’s assistant director for research and methodology.

In January, a group of federal agencies, foundations and universities created the Institute for Research on Innovation and Science at the University of Michigan in Ann Arbor to combine university and government data and measure the impact of research spending on economic outcomes. And in July, the US House of Representatives passed a bipartisan bill to study whether the federal government should provide a central clearing house of statistical administrative data.

Yet vast swathes of administrative data are still inaccessible, says George Alter, director of the Inter-university Consortium for Political and Social Research based at the University of Michigan, which serves as a data repository for approximately 760 institutions. “Health systems, social-welfare systems, financial transactions, business records — those things are just not available in most cases because of privacy concerns,” says Alter. “This is a big drag on research.”…

Many researchers argue, however, that there are legitimate scientific uses for such data. Jarmin says that the Census Bureau is exploring the use of data from credit-card companies to monitor economic activity. And researchers funded by the US National Science Foundation are studying how to use public Twitter posts to keep track of trends in phenomena such as unemployment.

….Computer scientists and cryptographers are experimenting with technological solutions. One, called differential privacy, adds a small amount of distortion to a data set, so that querying the data gives a roughly accurate result without revealing the identity of the individuals involved. The US Census Bureau uses this approach for its OnTheMap project, which tracks workers’ daily commutes.

….In any case, although synthetic data potentially solve the privacy problem, there are some research applications that cannot tolerate any noise in the data. A good example is the work showing the effect of neighbourhood on earning potential, which was carried out by Raj Chetty, an economist at Harvard University in Cambridge, Massachusetts. Chetty needed to track specific individuals to show that the areas in which children live their early lives correlate with their ability to earn more or less than their parents. In subsequent studies, Chetty and his colleagues showed that moving children from resource-poor to resource-rich neighbourhoods can boost their earnings in adulthood, proving a causal link.
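The differential-privacy mechanism described above can be sketched in a few lines. The following is our own minimal illustration, not the Census Bureau’s implementation: the commute data and the epsilon value are invented, and real deployments are far more sophisticated.

```python
import math
import random

def laplace_noise(scale):
    # Sample a Laplace(0, scale) variate by inverting its CDF
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    # A counting query has sensitivity 1, so noise with scale 1/epsilon
    # masks whether any single individual is present in the data set.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical commute times in minutes; the noisy answer is roughly
# accurate without revealing any one person's record.
commute_minutes = [12, 45, 38, 7, 52, 29, 41, 33, 18, 60]
noisy = private_count(commute_minutes, lambda m: m > 30, epsilon=0.5)
```

A smaller epsilon adds more noise and hence stronger privacy, at the cost of accuracy; choosing that trade-off is the hard policy question.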

Secure multiparty computation is a technique that attempts to address this issue by allowing multiple data holders to analyse parts of the total data set, without revealing the underlying data to each other. Only the results of the analyses are shared….(More)”
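As a toy illustration of the secure-multiparty-computation idea (our own sketch under simplified assumptions, not a production protocol), additive secret sharing lets several data holders compute a joint sum while no single party ever sees a raw value:

```python
import random

PRIME = 2**61 - 1  # field modulus for additive secret sharing

def share(value, n_parties):
    # Split a value into n shares that sum to value mod PRIME;
    # any subset of fewer than n shares reveals nothing about it.
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(values, n_parties=3):
    # Each data holder distributes one share to each computing party;
    # each party adds up only the shares it received.
    partial = [0] * n_parties
    for v in values:
        for i, s in enumerate(share(v, n_parties)):
            partial[i] = (partial[i] + s) % PRIME
    # Combining the partial sums yields the total without exposing inputs.
    return sum(partial) % PRIME

incomes = [41000, 52000, 38000]  # hypothetical records from three holders
total = secure_sum(incomes)
```

Only the aggregate leaves the protocol; real systems add authentication and protection against dishonest parties, which this sketch omits.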

Data Collaboratives: Sharing Public Data in Private Hands for Social Good


Beth Simone Noveck (The GovLab) in Forbes: “Sensor-rich consumer electronics such as mobile phones, wearable devices, commercial cameras and even cars are collecting zettabytes of data about the environment and about us. According to one McKinsey study, the volume of data is growing at fifty percent a year. No one needs convincing that these private storehouses of information represent a goldmine for business, but these data can do double duty as rich social assets—if they are shared wisely.

Think about a couple of recent examples: Sharing data held by businesses and corporations (i.e. public data in private hands) can help to improve policy interventions. California planners make water allocation decisions based upon expertise, data and analytical tools from public and private sources, including Intel, the Earth Research Institute at the University of California at Santa Barbara, and the World Food Center at the University of California at Davis.

In Europe, several phone companies have made anonymized datasets available, making it possible for researchers to track calling and commuting patterns and gain better insight into social problems from unemployment to mental health. In the United States, LinkedIn is providing free data about demand for IT jobs in different markets which, when combined with open data from the Department of Labor, helps communities target efforts around training….

Despite the promise of data sharing, these kinds of data collaboratives remain relatively new. There is a need to accelerate their use by giving companies strong tax incentives for sharing data for the public good. There is a need for more study to identify models for data sharing that respect personal privacy and security and enable companies to do well by doing good. My colleagues at The GovLab, together with UN Global Pulse and the University of Leiden, for example, published this initial analysis of terms and conditions used when exchanging data as part of a prize-backed challenge. We also need philanthropy to start putting money into “meta research”: it is not going to be enough just to open up databases; we need to know if the data is good.

After years of growing disenchantment with closed-door institutions, the push for greater use of data in governing can be seen as both a response and a mirror to the Big Data revolution in business. Although more than 1,000,000 government datasets about everything from air quality to farmers’ markets are openly available online in downloadable formats, much of the data about environmental, biometric, epidemiological, and physical conditions rests in private hands. Governing better requires a new empiricism for developing solutions together. That will depend on access to these private, not just public, data….(More)”

Why interdisciplinary research matters


Special issue of Nature: “To solve the grand challenges facing society — energy, water, climate, food, health — scientists and social scientists must work together. But research that transcends conventional academic boundaries is harder to fund, do, review and publish — and those who attempt it struggle for recognition and advancement (see World View, page 291). This special issue examines what governments, funders, journals, universities and academics must do to make interdisciplinary work a joy rather than a curse.

A News Feature on page 308 asks where the modern trend for interdisciplinary research came from — and finds answers in the proliferation of disciplines in the twentieth century, followed by increasingly urgent calls to bridge them. An analysis of publishing data explores which fields and countries are embracing interdisciplinary research the most, and what impact such research has (page 306). On page 313, Rick Rylance, head of Research Councils UK and himself a researcher with one foot in literature and one in neuroscience, explains why interdisciplinarity will be the focus of a 2015–16 report from the Global Research Council. Around the world, government funding agencies want to know what it is, whether they should invest in it, whether they are doing so effectively and, if not, what must change.

How can scientists successfully pursue research outside their comfort zone? Some answers come from Rebekah Brown, director of Monash University’s Monash Sustainability Institute in Melbourne, Australia, and her colleagues. They set out five principles for successful interdisciplinary working that they have distilled from years of encouraging researchers of many stripes to seek sustainability solutions (page 315). Similar ideas help scientists, curators and humanities scholars to work together on a collection that includes clay tablets, papyri, manuscripts and e-mail archives at the John Rylands Research Institute in Manchester, UK, reveals its director, Peter Pormann, on page 318.

Finally, on page 319, Clare Pettitt reassesses the multidisciplinary legacy of Richard Francis Burton — Victorian explorer, ethnographer, linguist and enthusiastic amateur natural scientist who got some things very wrong, but contributed vastly to knowledge of other cultures and continents. Today’s would-be interdisciplinary scientists can draw many lessons from those of the past — and can take our polymathy quiz online at nature.com/inter. (Nature special:Interdisciplinarity)

Algorithm predicts and prevents train delays two hours in advance


Springwise: “Transport apps such as Ototo make it easier than ever for passengers to stay informed about problems with public transport, but real-time information can only help so much — by the time users find out about a delayed service, it is often too late to take an alternative route. Now, Stockholmstag — the company that runs Sweden’s trains — has found a solution in the form of an algorithm called ‘The Commuter Prognosis’, which can predict network delays up to two hours in advance, giving train operators time to issue extra services or provide travelers with adequate warning.
The system was created by mathematician Wilhelm Landerholm. It uses historical data to predict how a small delay, even one as short as two minutes, will affect the running of the rest of the network. Often the initial late train causes a ripple effect, with subsequent services being delayed to accommodate new platform arrival times, which in turn affect later trains, and so on. But soon, using ‘The Commuter Prognosis’, Stockholmstag train operators will be able to make the necessary adjustments to prevent this. In addition, the information will be relayed to commuters, enabling them to take a different train and thereby reducing overcrowding. The prediction tool is expected to be put into use in Sweden by the end of the year….(More)”
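The ripple effect described above can be illustrated with a simple headway model. This is our own toy sketch, not Landerholm’s algorithm: the timetable, delays and minimum-headway figure are all invented for illustration.

```python
def propagate_delay(scheduled, initial_delays, headway=3):
    """Project knock-on delays given a minimum headway between trains.

    scheduled: planned departure times (minutes); initial_delays: each
    train's own delay. A late train can push back every train behind it,
    because the track must clear before the next service can depart.
    """
    actual = []
    prev = None
    for plan, delay in zip(scheduled, initial_delays):
        t = plan + delay
        if prev is not None:
            t = max(t, prev + headway)  # wait for the preceding train to clear
        actual.append(t)
        prev = t
    # Report the total delay each train ends up with
    return [a - p for a, p in zip(actual, scheduled)]

# On a tight timetable, a 2-minute delay to the first train ripples
# through every following service.
knock_on = propagate_delay([0, 3, 6, 9, 12], [2, 0, 0, 0, 0])
```

A forecasting tool can run this kind of projection forward from live data, which is what makes warnings two hours ahead plausible.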

Crowdsourcing a solution works best if some don’t help


Sarah Scoles at the New Scientist: “There are those who edit Wikipedia entries for accuracy – and those who use the online encyclopaedia daily without ever contributing. A new mathematical model says that’s probably as it should be: crowdsourcing a problem works best when a certain subset of the population chooses not to participate.

“In most social undertakings, there is a group that actually joins forces and works,” says Zoran Levnajic at the University of Ljubljana, Slovenia. “And there is a group of free-riders that typically benefits from work being done, without contributing much.”

Levnajic and his colleagues simulated this scenario. Digital people in a virtual population each had a randomly assigned tendency to collaborate on a problem or “freeload” – working alone and not sharing their findings. The team ran simulations to see whether there was an optimum crowd size for problem-solving.

It turned out there was – and surprisingly, the most effective crowd was not the largest possible. In fact, the simulated society was at its problem-solving best when just half the population worked together.

Smaller crowds contained too few willing collaborators with contrasting but complementary perspectives to solve a problem effectively. But when the researchers ran simulations with larger crowds, the freeloaders they contained naturally “defected” to working alone – knowing that they could benefit from any solutions the crowd reached, while also potentially reaping huge benefits if they could solve the problem without sharing the result (arxiv.org/abs/1506.09155)….(More)”
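The intuition that collaborators pool complementary perspectives can be made concrete with a toy simulation. This is our own illustration, not the model from the arXiv paper: the number of “insight” pieces per problem and per agent is invented.

```python
import random

def solve_probability(n_agents, frac_collaborate, n_pieces=20, trials=500):
    """Toy model: a problem decomposes into n_pieces sub-insights, and each
    collaborator contributes a random handful. The crowd 'solves' the problem
    in a trial when the pooled insights cover every piece."""
    successes = 0
    for _ in range(trials):
        pooled = set()
        n_collab = int(n_agents * frac_collaborate)
        for _ in range(n_collab):
            pooled |= set(random.sample(range(n_pieces), 3))  # 3 insights each
        if len(pooled) == n_pieces:
            successes += 1
    return successes / trials

# With too few collaborators the pooled insights cannot cover the problem
p_low = solve_probability(50, 0.1)
p_high = solve_probability(50, 0.8)
```

This sketch only captures the benefit of pooling; the paper’s result that a half-collaborating population is optimal also depends on modelling the freeloaders’ incentives, which is omitted here.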

Revolution Delayed: The Impact of Open Data on the Fight against Corruption


Report by RiSSC – Research Centre on Security and Crime (Italy): “In recent years, demand for Open Data has picked up steam among stakeholders seeking to increase the transparency and accountability of the public sector. Governments are supporting the supply of Open Data to achieve social and economic benefits, a return on investment, and political consensus.

While it is self-evident that Open Data contributes to greater transparency – as it makes data more available and easier to use by the public and governments – its impact on fighting corruption largely depends on the ability to analyse it and to develop initiatives that trigger both social accountability mechanisms and government responsiveness against illicit or inappropriate behaviour.

To date, the Open Data revolution against corruption has been delayed. The impact of Open Data on the prevention and repression of corruption, and on the development of anti-corruption tools, appears to be limited, and the return on investment is not yet forthcoming. Evidence remains anecdotal, and a better understanding of the mechanisms and dynamics of using Open Data against corruption is needed.

The overall objective of this exploratory study is to provide evidence on the results achieved by Open Data, and recommendations for the European Commission and Member States’ authorities, for the implementation of effective anti-corruption strategies based on transparency and openness, to unlock the potential impact of “Open Data revolution” against Corruption.

The project explored the legal framework and the status of implementation of Open Data policies in four EU countries – Italy, the United Kingdom, Spain, and Austria. The TACOD project searched for evidence of Open Data’s role in law enforcement cooperation, anti-corruption initiatives, public campaigns, and investigative journalism against corruption.

RiSSC – Research Centre on Security and Crime (Italy), the University of Oxford and the University of Nottingham (United Kingdom), Transparency International (Italy and United Kingdom), the Institute for Conflict Resolution (Austria), and Blomeyer&Sanz (Spain) carried out the research between January 2014 and February 2015, under an agreement with the European Commission – DG Migration and Home Affairs. The project was coordinated by RiSSC, with the support of a European Working Group of Experts, chaired by Prof. Richard Rose, and an external evaluator, Mr Andrea Menapace, and it has benefited from the contribution of many experts, activists, and representatives of institutions in the four countries….(More)

Open governance systems: Doing more with more


Paper by Jeremy Millard in Government Information Quarterly: “This paper tackles many of the important issues and discussions taking place in Europe and globally about the future of the public sector and how it can use Information and Communication Technology (ICT) to respond innovatively and effectively to some of the acute societal challenges arising from the financial crisis as well as other deeper rooted global problems. These include inequality, poverty, corruption and migration, as well as climate change, loss of habitat and the ageing society. A conceptual framework for open governance systems enabled by ICT is proposed, drawing on evidence and examples from around the world as well as a critical appraisal of both academic and grey literature. The framework constructs a system of open assets, open services and open engagement, and this is used to move the e-government debate forward from a preoccupation with lean and small governments which ‘do more with less’ to examine the potential for open governance systems to also ‘do more with more’. This is achieved by enabling an open government and open public sector, as part of this open governance system, to ‘do more by leveraging more’ of the existing assets and resources across the whole of society, and not just within the public sector, many of which are unrealised and untapped, so in effect are ‘wasted’. The paper argues that efficiencies and productivity improvements are essential at all levels and across all actors, as is maximising both public and private value, but that they must also be seen at the societal level where trade-offs and interactions are required, and not only at the individual actor level….(More)”

Who you are/where you live: do neighbourhood characteristics explain co-production?


Paper by Peter Thijssen and Wouter Van Dooren in the International Review of Administrative Sciences: “Co-production establishes an interactive relationship between citizens and public service providers. Successful co-production hence requires the engagement of citizens. Typically, individual characteristics such as age, gender, and income are used to explain why citizens co-produce. In contrast, neighbourhood-level variables receive less attention. Nevertheless, the co-production literature, as well as social capital and urban planning theory, provides good arguments why neighbourhood variables may be relevant. In this study, we examine the administrative records of citizen-initiated contacts in a reporting programme for problems in the public domain. This co-production programme is located in the district of Deurne in the city of Antwerp, Belgium. A multilevel analysis is used to simultaneously assess the impact of neighbourhood characteristics and individual variables. While the individual variables usually found to explain co-production are present in our case, we also find that neighbourhood characteristics significantly explain co-production. Thus, our findings suggest that participation in co-production activities is determined not only by who you are, but also by where you live.

Points for practitioners In order to facilitate co-production and participation, the neighbourhood should be the first place to look. Co-production benefits may disproportionately accrue to strong citizens, but also to strong neighbourhoods. Social corrections should take both into account. More broadly, a good understanding of a city’s neighbourhoods is needed to grasp citizen behaviour. Place-based policies in the city should focus on the neighbourhood….(More)”
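The multilevel logic behind the paper – that residents of the same neighbourhood share an outcome component beyond their individual traits – can be sketched with a small simulation. This is our own hypothetical illustration, not the authors’ Antwerp data or model; the variance figures are invented, and it estimates only the neighbourhood-level variance share rather than fitting a full multilevel regression.

```python
import random
import statistics

def simulate_contacts(n_hoods=30, n_residents=50, hood_sd=1.0, indiv_sd=2.0):
    """Simulate citizen-initiated contact rates with a shared neighbourhood effect."""
    data = []
    for h in range(n_hoods):
        hood_effect = random.gauss(0, hood_sd)  # common to everyone in the area
        for _ in range(n_residents):
            y = 5 + hood_effect + random.gauss(0, indiv_sd)  # plus individual variation
            data.append((h, y))
    return data

def intraclass_correlation(data):
    """Share of outcome variance at the neighbourhood level (one-way ANOVA estimator)."""
    groups = {}
    for h, y in data:
        groups.setdefault(h, []).append(y)
    grand = statistics.fmean(y for _, y in data)
    n = statistics.fmean(len(g) for g in groups.values())
    msb = sum(len(g) * (statistics.fmean(g) - grand) ** 2
              for g in groups.values()) / (len(groups) - 1)
    msw = statistics.fmean(statistics.variance(g) for g in groups.values())
    return (msb - msw) / (msb + (n - 1) * msw)

# With these settings the true neighbourhood share is 1 / (1 + 4) = 0.2
icc = intraclass_correlation(simulate_contacts())
```

A non-trivial intraclass correlation is exactly the signal that justifies a multilevel model: ignoring it would treat neighbours as independent observations and overstate the precision of individual-level effects.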

Video app provides underserved clients with immediate legal advice


Springwise: “Pickle is a video call app that gives everyone access to a greater understanding of their constitutional rights, via on-demand legal advice.

Legal representation is expensive and we have already seen platforms in the US and the UK use crowdfunding to help underprivileged clients fund legal battles. Now, Pickle Legal is helping in a different way — it enables video calls between clients and attorneys, which will give everyone access to a greater understanding of their constitutional rights.

Pickle connects clients with legal representation via real-time video communication. Anyone in need of legal advice can download the app to their smartphone. When they launch the app, Pickle alerts their network of attorneys and connects the client with an available professional via a video call. The client can then gain immediate advice from the attorney — helping them to understand their position and rights in the moment.

Pickle Legal is currently in Beta and accepting applications from attorneys and clients alike. During the testing phase, the service is available for free, but eventually clients will pay an affordable rate — since the convenience of the platform is expected to reduce costs. Pickle will also be archiving videos — at the discretion of the parties involved — for use in any case that arises…(More)”