Citizen Participation: A Critical Look at the Democratic Adequacy of Government Consultations


John Morison at Oxford Journal of Legal Studies: “Consultation procedures are used increasingly in the United Kingdom and elsewhere. This account looks critically at consultation as presently practised, and suggests that consulters and consultees need to do much more to ensure both the participatory validity and democratic value of such exercises. The possibility of a ‘right to be consulted’ is examined. Some ideas from a governmentality perspective are developed, using the growth of localism as an example, to suggest that consultation is often a very structured interaction: the actual operation of participation mechanisms may not always create a space for an equal exchange between official and participant views. Examples of best practice in consultation are examined, before consideration is given to recent case law from the UK seeking to establish basic ground rules for how consultations should be organised. Finally, the promise of consultation to reinvigorate democracy is evaluated and weighed against the correlative risk of ‘participatory disempowerment’…(More)”.

Handbook of Cyber-Development, Cyber-Democracy, and Cyber-Defense


“Living Reference Work” edited by Elias G. Carayannis, David F. J. Campbell, and Marios Panagiotis Efthymiopoulos: “This volume covers a wide spectrum of issues relating to economic and political development enabled by information and communication technology (ICT). Showcasing contributions from researchers, industry leaders and policymakers, this Handbook provides a comprehensive overview of the challenges and opportunities created by technological innovations that are profoundly affecting the dynamics of economic growth, the promotion of democratic principles, and the protection of individual, national, and regional rights. Of particular interest is the influence of ICT on the generation and dissemination of knowledge, which, in turn, empowers citizens and accelerates change across all strata of society. Each essay features literature reviews and key references; definitions of critical terms and concepts; case examples; implications for practice, policy and theory; and discussion of future directions. Representing such fields as management, political science, economics, law, psychology and education, the authors cover such timely topics as health care, energy and environmental policy, banking and finance, disaster recovery, investment in research and development, homeland security, and diplomacy in the context of ICT and its economic, political and social impact…(More)”

Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for


Paper by Lilian Edwards and Michael Veale: “Algorithms, particularly of the machine learning (ML) variety, are increasingly consequential to individuals’ lives but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively presents as a means to “open the black box”, hence allowing individual challenge and redress, as well as possibilities to foster accountability of ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as “meaningful information about the logic of processing” — is unlikely to be provided by the kind of ML “explanations” computer scientists have been developing. ML explanations are restricted by the type of explanation sought, the multi-dimensionality of the domain, and the type of user seeking an explanation. However (section 5) “subject-centric” explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers’ worries of IP or trade secrets disclosure.
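The “subject-centric” idea — explaining a model only in a small region around one query, rather than decomposing the whole model — can be illustrated with a minimal local-surrogate sketch. Everything here is hypothetical (the black-box model, the sampling scale, all names); it is a toy in the spirit of LIME-style pedagogical explanations, not the authors’ method:

```python
import numpy as np

# Hypothetical black-box model: we may only query its predictions,
# never inspect its internals (the decompositional route).
def black_box(X):
    return (X[:, 0] ** 2 + 0.5 * X[:, 1] > 1.0).astype(float)

def subject_centric_explanation(query, n_samples=500, scale=0.3, seed=0):
    """Fit a linear surrogate in a small neighbourhood of one query
    point; its coefficients act as local per-feature weights."""
    rng = np.random.default_rng(seed)
    # Perturb the query locally and observe the black box's behaviour.
    X = query + rng.normal(0.0, scale, size=(n_samples, query.shape[0]))
    y = black_box(X)
    # Least-squares linear fit with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # local feature importances for this subject only

weights = subject_centric_explanation(np.array([1.0, 0.5]))
```

Because only query access is needed, such pedagogical explanations can be produced without disclosing the underlying model — the IP point the authors note.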

As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy”. However, in our final section, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (“right to be forgotten”) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds of building a better, more respectful and more user-friendly algorithmic society….(More)”

The law and big data


Article by Felin, Teppo, Devins, Caryn, Kauffman, Stuart and Koppl, Roger: “In this article we critically examine the use of Big Data in the legal system. Big Data is driving a trend towards behavioral optimization and “personalized law,” in which legal decisions and rules are optimized for best outcomes and where law is tailored to individual consumers based on analysis of past data. Big Data, however, has serious limitations and dangers when applied in the legal context. Advocates of Big Data make theoretically problematic assumptions about the objectivity of data and scientific observation. Law is always theory-laden. Although Big Data strives to be objective, law and data have multiple possible meanings and uses and thus require theory and interpretation in order to be applied. Further, the meanings and uses of law and data are indefinite and continually evolving in ways that cannot be captured or predicted by Big Data.

Due to these limitations, the use of Big Data will likely generate unintended consequences in the legal system. Large-scale use of Big Data will create distortions that adversely influence legal decision-making, causing irrational herding behaviors in the law. The centralized nature of the collection and application of Big Data also poses serious threats to legal evolution and democratic accountability. Furthermore, its focus on behavioral optimization necessarily restricts and even eliminates the local variation and heterogeneity that makes the legal system adaptive. In all, though Big Data has legitimate uses, this article cautions against using Big Data to replace independent legal judgment….(More)”

We use big data to sentence criminals. But can the algorithms really tell us what we need to know?


 at the Conversation: “In 2013, a man named Eric L. Loomis was sentenced for eluding police and driving a car without the owner’s consent.

When the judge weighed Loomis’ sentence, he considered an array of evidence, including the results of an automated risk assessment tool called COMPAS. Loomis’ COMPAS score indicated he was at a “high risk” of committing new crimes. Considering this prediction, the judge sentenced him to seven years.

Loomis challenged his sentence, arguing it was unfair to use the data-driven score against him. The U.S. Supreme Court now must consider whether to hear his case – and perhaps settle a nationwide debate over whether it’s appropriate for any court to use these tools when sentencing criminals.

Today, judges across the U.S. use risk assessment tools like COMPAS in sentencing decisions. In at least 10 states, these tools are a formal part of the sentencing process. Elsewhere, judges informally refer to them for guidance.

I have studied the legal and scientific bases for risk assessments. The more I investigate the tools, the more my caution about them grows.

The scientific reality is that these risk assessment tools cannot do what advocates claim. The algorithms cannot actually make predictions about future risk for the individual defendants being sentenced….

Algorithms such as COMPAS cannot make predictions about individual defendants, because data-driven risk tools are based on group statistics. This creates an issue that academics sometimes call the “group-to-individual” or G2i problem.

Scientists study groups. But the law sentences the individual. Consider the disconnect between science and the law here.

The algorithms in risk assessment tools commonly assign specific points to different factors. The points are totaled. The total is then often translated to a risk bin, such as low or high risk. Typically, more points means a higher risk of recidivism.

Say a score of 6 points out of 10 on a certain tool is considered “high risk.” In the historical groups studied, perhaps 50 percent of people with a score of 6 points did reoffend.

Thus, one might be inclined to think that a new offender who also scores 6 points is at a 50 percent risk of reoffending. But that would be incorrect.

It may be the case that half of those with a score of 6 in the historical groups studied would later reoffend. However, the tool is unable to select which of the offenders with 6 points will reoffend and which will go on to lead productive lives.
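The points-and-bins logic described above can be sketched in a few lines. The factor names, weights, threshold, and base rates below are all hypothetical, chosen only to make the group-to-individual (G2i) problem concrete: two offenders with identical scores receive an identical group-level estimate, yet the tool says nothing about either individual’s actual future:

```python
# Toy points-based risk tool (all names, weights and thresholds invented).
FACTOR_POINTS = {"prior_arrests": 3, "age_under_25": 2, "unemployed": 1}

def risk_score(factors):
    """Assign points per factor and total them."""
    return sum(FACTOR_POINTS[f] for f in factors)

def risk_bin(score, high_threshold=6):
    """Translate a total into a coarse risk bin."""
    return "high" if score >= high_threshold else "low"

# Historical base rates for each bin (e.g. 50% of the 6-point group
# in past studies reoffended) — a property of the GROUP, not of any
# individual member of it.
BASE_RATES = {"high": 0.50, "low": 0.20}

offender_a = ["prior_arrests", "age_under_25", "unemployed"]  # 6 points
offender_b = ["prior_arrests", "age_under_25", "unemployed"]  # 6 points
# Both land in the same bin with the same 50% group estimate; the tool
# cannot say which of them, individually, will reoffend.
assert risk_bin(risk_score(offender_a)) == risk_bin(risk_score(offender_b)) == "high"
```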

The studies of factors associated with reoffending are not causation studies. They can tell only which factors are correlated with new crimes. Individuals retain some measure of free will to decide to break the law again, or not.

These issues may explain why risk tools often have significant false positive rates. The predictions made by the most popular risk tools for violence and sex offending have been shown to get it wrong for some groups over 50 percent of the time.

A ProPublica investigation found that COMPAS, the tool used in Loomis’ case, is burdened by large error rates. For example, COMPAS failed to predict reoffending in one study at a 37 percent rate. The company that makes COMPAS has disputed the study’s methodology….

There are also a host of thorny issues with risk assessment tools incorporating, either directly or indirectly, sociodemographic variables, such as gender, race and social class. Law professor Anupam Chander has named it the problem of the “racist algorithm.”

Big data may have its allure. But, data-driven tools cannot make the individual predictions that sentencing decisions require. The Supreme Court might helpfully opine on these legal and scientific issues by deciding to hear the Loomis case…(More)”.

Mapping the invisible: Street View cars add air pollution sensors


Environment at Google: “There are 1.3 million miles of natural gas distribution pipelines in the U.S. These pipelines exist pretty much everywhere that people do, and when they leak, the escaping methane — the main ingredient in natural gas — is a potent greenhouse gas, with 84 times the short-term warming effect of carbon dioxide. These leaks can be time-consuming to identify and measure using existing technologies. Utilities are required by law to quickly fix any leaks that are deemed a safety threat, but thousands of others can — and often do — go on leaking for months or years.

To help gas utilities, regulators, and others understand the scale of the challenge and help prioritize the most cost-effective solutions, the Environmental Defense Fund (EDF) worked with Joe von Fischer, a scientist at Colorado State University, to develop technology to detect and measure methane concentrations from a moving vehicle. Initial tests were promising, and EDF decided to expand the effort to more locations.

That’s when the organization reached out to Google. The project needed to scale, and we had the infrastructure to make it happen: computing power, secure data storage, and, most important, a fleet of Street View cars. These vehicles, equipped with high-precision GPS, were already driving around pretty much everywhere, capturing 360-degree photos for Google Maps; maybe they could measure methane while they were at it. The hypothesis, says Karin Tuxen-Bettman of Google Earth Outreach, was that “we had the potential to turn our Street View fleet into an environmental sensing platform.”

Street View cars make at least 2 trips around a given area in order to capture good air quality data. An intake tube on the front bumper collects air samples, which are then processed by a methane analyzer in the trunk. Finally, the data is sent to the Google Cloud for analysis and integration into a map showing the size and location of methane leaks. Since the trial began in 2012, EDF has built methane maps for 11 cities and found more than 5,500 leaks. The results range from one leak for every mile driven (sorry, Bostonians) to one every 200 miles (congrats, Indianapolis, for replacing all those corrosive steel and iron pipes with plastic).

All of us can go on our smartphone and get the weather. But what if you could scroll down and see what the air quality is on the street where you’re walking?…

This promising start inspired the team to take the next step and explore using Street View cars to measure overall air quality. For years, Google has worked on measuring indoor environmental quality across company offices with Aclima, which builds environmental sensor networks. In 2014, we expanded the partnership to the outside world, equipping several more Street View cars with its ‘Environmental Intelligence’ (Ei) mobile platform, including scientific-grade analyzers and arrays of small-scale, low-cost sensors to measure pollutants including particulate matter, NO2, CO2, black carbon, and more. The new project began with a pilot in Denver, and we’ll finish mapping cities in three regions of California by the end of 2016. And today the system is delivering reliable data that corresponds to the U.S. Environmental Protection Agency’s stationary measurement network….

The project began with a few cars, but Aclima’s mobile platform, which has already produced one of the world’s largest data sets on air quality, could also be expanded via deployment on vehicles like buses and mail trucks, on the way to creating a street-level pollution map. This hyper-local data could help people make more informed choices about things like when to let their kids play outside and which changes to advocate for to make their communities healthier….(More)”.

UK government watchdog examining political use of data analytics


“Given the big data revolution, it is understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes,” Elizabeth Denham, the information commissioner, writes on the ICO’s blog. However, “the public have the right to expect” that this takes place in accordance with existing data protection laws, she adds.

Political parties are able to use Facebook to target voters with different messages, tailoring the advert to recipients based on their demographic. In the 2015 UK general election, the Conservative party spent £1.2 million on Facebook campaigns and the Labour party £16,000. It is expected that Labour will vastly increase that spend for the general election on 8 June….

Political parties and third-party companies are allowed to collect data from sites like Facebook and Twitter, which lets them tailor these ads to broadly target different demographics. However, if those ads target identifiable individuals, they run afoul of the law….(More)”

How to increase public support for policy: understanding citizens’ perspectives


Peter van Wijck and Bert Niemeijer at LSE Blog: “To increase public support, it is essential to anticipate what reactions they will have to policy. But how to do that? Our framework combines insights from scenario planning and frame analysis. Scenario planning starts from the premise that we cannot predict the future. We can, however, imagine different plausible scenarios, different plausible future developments. Scenarios can be used to ask a ‘what if’ question. If a certain scenario were to develop, what policy measures would be required?  By the same token, scenarios may be used as test-conditions for policy-measures. Kees van der Heijden calls this ‘wind tunnelling’.

Frame-analysis is about how we interpret the world around us. Frames are mental structures that shape the way we see the world. Based on a frame, an individual perceives societal problems, attributes these problems to causes, and forms ideas on instruments to address the problems. Our central idea is that policy-makers may use citizens’ frames to reflect on their policy frame. Citizens’ frames may, in other words, be used to test conditions in a wind tunnel. The line of reasoning is summarized in the figure.

Policy frames versus citizens’ frames

(Figure: policy framing)

The starting points of the figure are the policy frame and the citizens’ frames. Arrows 1 and 2 indicate that citizens’ reactions depend on both frames. A citizen can be expected to respond positively in the case of frame alignment. Negative responses can be expected if policy-makers do not address “the real problems”, do not attribute problems to “the real causes”, or do not select “adequate instruments”. If frames do not align, policy-makers are faced with the question of how to deal with that misalignment (arrow 3). First, they may reconsider the policy frame (arrow 4): are there reasons to reconsider the definition of problems, the attribution to causes, and/or the selection of instruments? Such “reframing” effectively amounts to the formulation of a new (or adjusted) policy frame. Second, policy-makers may try to influence citizens’ frames (arrow 5). This may change what citizens define as problems, what they consider to be the causes of those problems, and what they consider adequate instruments to deal with them.
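The alignment logic behind arrows 1 and 2 can be sketched as a toy model. The frame components and example values below are hypothetical, purely to make the "frame alignment predicts reaction" idea concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    """A frame: perceived problems, attributed causes, preferred instruments."""
    problems: frozenset
    causes: frozenset
    instruments: frozenset

def predicted_reaction(policy: Frame, citizen: Frame) -> str:
    """Arrows 1 and 2: the citizen's reaction depends on both frames.
    Full alignment -> positive; a mismatch on problems, causes, or
    instruments -> negative."""
    aligned = (policy.problems == citizen.problems
               and policy.causes == citizen.causes
               and policy.instruments == citizen.instruments)
    return "positive" if aligned else "negative"

# Hypothetical example: same problem and instrument, different cause.
policy = Frame(frozenset({"crime"}), frozenset({"poverty"}), frozenset({"prevention"}))
citizen = Frame(frozenset({"crime"}), frozenset({"unemployment"}), frozenset({"prevention"}))
print(predicted_reaction(policy, citizen))  # mismatched causes -> "negative"
```

In this sketch, arrow 4 (reframing) corresponds to constructing a new policy `Frame`, and arrow 5 to a change in the citizen `Frame`.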

Two cases: support for victims and confidence in the judiciary

To apply our framework in practice, we developed a three-step method. First, we reconstruct the policy frame: we investigate what policy-makers see as social problems, what they assume to be the causes of these problems, and what they consider to be appropriate instruments to address them. Second, we reconstruct contrasting citizens’ frames, using focus groups whose contrasting members are selected with a segmentation model. Finally, we engage in a “wind tunnelling” exercise: we present the citizens’ frames to policy-makers and ask them to reflect on how the different groups can be expected to react to the policy measures the policy-makers have selected. In effect, this step is what Schön and Rein called “frame reflection”….(More)”.

Dubai Data Releases Findings of ‘The Dubai Data Economic Impact Report’


Press Release: “The ‘Dubai Data Economic Impact Report’…provides the Dubai Government with insights into the potential economic impacts of opening and sharing data and includes a methodology for more rigorous measurement of the economic impacts of open and shared data, to allow regular assessment of the actual impacts in the future.

The study estimates that the opening and sharing of government and private-sector data will potentially add a total of AED 10.4 billion in Gross Value Added (GVA) to Dubai’s economy annually by 2021. Opening government data alone will result in a GVA impact of AED 6.6 billion annually as of 2021, equivalent to approximately 0.8% to 1.2% of Dubai’s forecast GDP for 2021. Transport, storage, and communications are set to be the largest contributor to this potential GVA of opening government data, accounting for 27.8% (AED 1.85 billion) of the total, followed by public administration (23.6%, AED 1.57 billion); wholesale, retail, restaurants, and hotels (13.7%, AED 908 million); real estate (9.6%, AED 639 million); and professional services (8.9%, AED 588 million). Finance and insurance, meanwhile, is calculated to make up 6.5% (AED 433 million) of the GVA, while mining, manufacturing, and utilities (6%, AED 395 million); construction (3.5%, AED 230 million); and entertainment and arts (0.4%, AED 27 million) account for the remaining proportion.
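As a quick sanity check, the sector figures reported above can be totalled and converted back into percentage shares; the amounts are taken directly from the press release, and the check itself is ours:

```python
# Sector-level GVA contributions from opening government data (AED billions),
# as reported in the press release.
sectors = {
    "transport, storage, communications": 1.85,
    "public administration": 1.57,
    "wholesale, retail, restaurants, hotels": 0.908,
    "real estate": 0.639,
    "professional services": 0.588,
    "finance and insurance": 0.433,
    "mining, manufacturing, utilities": 0.395,
    "construction": 0.230,
    "entertainment and arts": 0.027,
}

total = sum(sectors.values())  # ~6.64, consistent with the stated AED 6.6 bn
shares = {name: 100 * value / total for name, value in sectors.items()}
```

The implied shares match the quoted percentages (e.g. transport, storage, and communications comes out near 27.8%), and the shares sum to 100% by construction.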

This economic impact will be realized through the publication, exchange, use and reuse of Dubai data. The Dubai Data Law of 2015 mandates that data providers publish open data and exchange shared data. It defines open data as any Dubai data which is published and can be downloaded, used and re-used without restrictions by all types of users, while shared data is the data that has been classified as either confidential, sensitive, or secret, and can only be accessed by other government entities or by other authorised persons. The law pertains to local government entities, federal government entities which have any data relating to the emirate, individuals and companies who produce, own, disseminate, or exchange any data relating to the emirate. It aims to realise Dubai’s vision of transforming itself into a smart city, manage Dubai Data in accordance with a clear and specific methodology that is consistent with international best practices, integrate the services provided by federal and local government entities, and optimise the use of the data available to data providers, among other objectives….

The study identifies several stakeholders involved in the use and reuse of open and shared data. These stakeholders – some of whom are described as “data creators” – play an important role in generating the economic impacts. They include: data enrichers, who combine open data with their own sources and/or knowledge; data enablers, who do not profit directly from the data itself but from the platforms and technologies on which it is provided; data developers, who design and build Application Programming Interfaces (APIs); and data aggregators, who collect and pool data, providing it to other stakeholders….(More)”

Updated N.Y.P.D. Anti-Crime System to Ask: ‘How We Doing?’


It was a policing invention with a futuristic sounding name — CompStat — when the New York Police Department introduced it as a management system for fighting crime in an era of much higher violence in the 1990s. Police departments around the country, and the world, adapted its system of mapping muggings, robberies and other crimes; measuring police activity; and holding local commanders accountable.

Now, a quarter-century later, it is getting a broad reimagining and being brought into the mobile age. Moving away from simple stats and figures, CompStat is getting touchy-feely. It’s going to ask New Yorkers — via thousands of questions on their phones — “How are you feeling?” and “How are we, the police, doing?”

Whether this new approach will be mimicked elsewhere is still unknown, but as is the case with almost all new tactics in the N.Y.P.D. — the largest municipal police force in the United States by far — it will be closely watched. Nor is it clear if New Yorkers will embrace this approach, reject it as intrusive or simply be annoyed by it.

The system, using location technology, sends out short sets of questions to smartphones along three themes: Do you feel safe in your neighborhood? Do you trust the police? Are you confident in the New York Police Department?

The questions stream out every day, around the clock, on 50,000 different smartphone applications and present themselves on screens as eight-second surveys.

The department believes it will get a more diverse measure of community satisfaction, and allow it to further drive down crime. For now, Police Commissioner James P. O’Neill is calling the tool a “sentiment meter,” though he is open to suggestions for a better name….(More)”.