Information Sharing as a Dimension of Smartness: Understanding Benefits and Challenges in Two Megacities


Paper by J. Ramon Gil-Garcia, Theresa A. Pardo, and Manuel De Tuya: “Cities around the world are facing increasingly complex problems.

These problems frequently require collaboration and information sharing across agency boundaries.

In our view, information sharing can be seen as an important dimension of what has recently been called smartness in cities, enabling improved decision making and day-to-day operations in urban settings. Unfortunately, what many city managers are learning is that there are important challenges to sharing information both within their city and with others.

Based on nonemergency service integration initiatives in New York City and Mexico City, this article examines important benefits from and challenges to information sharing in the context of what the participants characterize as smart city initiatives, particularly in large metropolitan areas.

The research question guiding this study is as follows: To what extent do previous findings about information sharing hold in the context of city initiatives, particularly in megacities?

The results provide evidence of how specific characteristics of cities and megalopolises affect the benefits and challenges of information sharing. For instance, cities seem to have more managerial flexibility than other jurisdictions such as state governments.

In addition, megalopolises have most of the technical skills and financial resources needed for information sharing and, therefore, these challenges are not as relevant as they are in other local governments….(More)”.

Applying crowdsourcing techniques in urban planning: A bibliometric analysis of research and practice prospects


Paper by Pinchao Liao et al. in Cities: “Urban planning requires more public involvement and larger group participation to achieve scientific and democratic decision making. Crowdsourcing is a novel approach to gathering information, encouraging innovation and facilitating group decision-making. Unfortunately, although previous research has explored the utility of crowdsourcing for urban planning theoretically, real-world applications and empirical studies using practical data remain rare. This study aims to identify the prospects for implementing crowdsourcing in urban planning through a bibliometric analysis of current research.

First, a database and keyword lists based on peer-reviewed journal articles were developed. Second, semantic analysis was applied to quantify co-occurrence frequencies of various terms in the articles based on the keyword lists, and in turn a semantic network was built.

Then, cluster analysis was conducted to identify major and correlated research topics, and bursting key terms were analyzed and explained chronologically. Lastly, future research and practical trends were discussed.
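The pipeline the abstract describes (keyword lists → term co-occurrence → semantic network → clusters) can be sketched in a few lines of Python. The toy keyword sets and the `min_count` threshold below are invented for illustration and are not the authors' actual corpus or tooling:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical per-article keyword sets (stand-in for the real corpus).
articles = [
    {"crowdsourcing", "urban planning", "participation"},
    {"crowdsourcing", "urban planning", "gis"},
    {"crowdsourcing", "participation", "transportation"},
    {"public health", "environment", "transportation"},
]

def cooccurrence(articles):
    """Count how often each pair of keywords appears in the same article."""
    pairs = Counter()
    for kws in articles:
        for a, b in combinations(sorted(kws), 2):
            pairs[(a, b)] += 1
    return pairs

def semantic_network(pairs, min_count=2):
    """Keep only edges whose co-occurrence count meets the threshold."""
    graph = defaultdict(set)
    for (a, b), n in pairs.items():
        if n >= min_count:
            graph[a].add(b)
            graph[b].add(a)
    return graph

def clusters(graph):
    """Connected components as a crude stand-in for cluster analysis."""
    seen, out = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(graph[cur] - comp)
        seen |= comp
        out.append(comp)
    return out

pairs = cooccurrence(articles)
graph = semantic_network(pairs, min_count=2)
print(pairs[("crowdsourcing", "urban planning")])  # 2 articles pair these terms
print(clusters(graph))
```

In practice, bibliometric studies typically run co-word analysis in dedicated tools such as VOSviewer or CiteSpace rather than hand-rolled code; the sketch only mirrors the logic of the three steps.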

The major contribution of this study is identifying crowdsourcing as a novel urban planning method, one that can strengthen government capacity by involving public participation, i.e., by turning governments into task givers. Regarding future patterns, the application of crowdsourcing in urban planning is expected to expand to transportation, public health and environmental issues. The findings also indicate that the use of crowdsourcing requires governments to adjust their urban planning mechanisms….(More)”.

Techno-optimism and policy-pessimism in the public sector big data debate


Paper by Simon Vydra and Bram Klievink: “Despite great potential, high hopes and big promises, the actual impact of big data on the public sector is not always as transformative as the literature would suggest. In this paper, we ascribe this predicament to an overly strong emphasis the current literature places on technical-rational factors at the expense of political decision-making factors. We express these two different emphases as two archetypical narratives and use those to illustrate that some political decision-making factors should be taken seriously by critiquing some of the core ‘techno-optimist’ tenets from a more ‘policy-pessimist’ angle.

In the conclusion we have these two narratives meet ‘eye-to-eye’, facilitating a more systematized interrogation of big data promises and shortcomings in further research, paying appropriate attention to both technical-rational and political decision-making factors. We finish by offering a realist rejoinder of these two narratives, allowing for more context-specific scrutiny and balancing both technical-rational and political decision-making concerns, resulting in more realistic expectations about using big data for policymaking in practice….(More)”.

From Theory to Practice: Open Government Data, Accountability, and Service Delivery


Report by Michael Christopher Jelenic: “Open data and open government data have recently attracted much attention as a means to innovate, add value, and improve outcomes in a variety of sectors, public and private. Although some of the benefits of open data initiatives have been assessed in the past, particularly their economic and financial returns, it is often more difficult to evaluate their social and political impacts. In the public sector, a murky theory of change has emerged that links the use of open government data with greater government accountability as well as improved service delivery in key sectors, including health and education, among others. In the absence of cross-country empirical research on this topic, this paper asks the following: Based on the evidence available, to what extent and for what reasons is the use of open government data associated with higher levels of accountability and improved service delivery in developing countries?

To answer this question, the paper constructs a unique data set that operationalizes open government data, government accountability, service delivery, as well as other intervening and control variables. Relying on data from 25 countries in Sub-Saharan Africa, the paper finds a number of significant associations between open government data, accountability, and service delivery. However, the findings suggest differentiated effects of open government data across the health and education sectors, as well as with respect to service provision and service delivery outcomes. Although this early research has limitations and does not attempt to establish a purely causal relationship between the variables, it provides initial empirical support for claims about the efficacy of open government data for improving accountability and service delivery….(More)”
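As a toy illustration of the kind of association the report tests (not its actual data set, variables, or model, which include intervening and control variables), a simple Pearson correlation on invented country-level indices can be computed in pure Python:

```python
from math import sqrt

# Invented country-level scores for illustration only: an open-government-data
# index and an accountability index. These bear no relation to the report's
# 25-country Sub-Saharan Africa data set.
ogd = [0.2, 0.4, 0.5, 0.7, 0.9]
accountability = [0.3, 0.35, 0.5, 0.65, 0.8]

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(ogd, accountability), 3))
```

A correlation like this, even a strong one, is only an association; as the report itself notes, it does not establish a causal relationship between open government data and accountability.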

Inaccurate Statistical Discrimination


NBER paper by J. Aislinn Bohren, Kareem Haggag, Alex Imas, Devin G. Pope: “Discrimination has been widely studied in economics and other disciplines. In addition to identifying evidence of discrimination, economists often categorize the source of discrimination as either taste-based or statistical. Categorizing discrimination in this way can be valuable for policy design and welfare analysis. We argue that a further categorization is important and needed. Specifically, in many situations economic agents may have inaccurate beliefs about the expected productivity or performance of a social group. This motivates our proposed distinction between accurate (based on correct beliefs) and inaccurate (based on incorrect beliefs) statistical discrimination. We do a thorough review of the discrimination literature and argue that this distinction is rarely discussed. Using an online experiment, we illustrate how to identify accurate versus inaccurate statistical discrimination. We show that ignoring this distinction – as is often the case in the discrimination literature – can lead to erroneous interpretations of the motives and implications of discriminatory behavior. In particular, when not explicitly accounted for, inaccurate statistical discrimination can be mistaken for taste-based discrimination, accurate statistical discrimination, or a combination of the two….(More)”.

Can we nudge farmers into saving water? Evidence from a randomised experiment


Paper by Sylvain Chabé-Ferret, Philippe Le Coent, Arnaud Reynaud, Julie Subervie and Daniel Lepercq: “We test whether social comparison nudges can promote water-saving behaviour among farmers as a complement to traditional CAP measures. We conducted a randomised controlled trial among 200 farmers equipped with irrigation smart meters in South-West France. Treated farmers received weekly information on individual and group water consumption over four months. Our results rule out medium to large effect sizes of the nudge. They suggest, however, that the nudge was effective at reducing the consumption of those who irrigate the most, although it also appears to have reduced the proportion of farmers who do not consume water at all….(More)”.

The Geopolitics of Information


Paper by Eric Rosenbach and Katherine Mansted: “Information is now the world’s most consequential and contested geopolitical resource. The world’s most profitable businesses have asserted for years that data is the “new oil.” Political campaigns—and foreign intelligence operatives—have shown over the past two American presidential elections that data-driven social media is the key to shaping public opinion. Leading scientists and technologists understand that good datasets, not just algorithms, will give them a competitive edge.

Data-driven innovation is not only disrupting economies and societies; it is reshaping relations between nations. The pursuit of information power—involving states’ ability to use information to influence, decide, create and communicate—is causing states to rewrite their terms of engagement with markets and citizens, and to redefine national interests and strategic priorities. In short, information power is altering the nature and behavior of the fundamental building block of international relations, the state, with potentially seismic consequences.

Authoritarian governments recognize the strategic importance of information and over the past five years have operationalized powerful domestic and international information strategies. They are cauterizing their domestic information environments and shutting off their citizens from global information flows, while weaponizing information to attack and destabilize democracies. In particular, China and Russia believe that strategic competition in the 21st century is characterized by a zero-sum contest for control of data, as well as the technology and talent needed to convert data into useful information.

Democracies remain fundamentally unprepared for strategic competition in the Information Age. For the United States in particular, as the importance of information as a geopolitical resource has waxed, its information dominance has waned. Since the end of the Cold War, America’s supremacy in information technologies had seemed unassailable—not least because of its central role in creating the Internet and its overall economic primacy. Democracies have also considered any type of information strategy to be largely unnecessary: government involvement in the domestic information environment feels Orwellian, while democracies believed that their “inherently benign” foreign policy didn’t need extensive influence operations.

However, to compete and thrive in the 21st century, democracies, and the United States in particular, must develop new national security and economic strategies that address the geopolitics of information. In the 20th century, market capitalist democracies geared infrastructure, energy, trade, and even social policy to protect and advance that era’s key source of power—manufacturing. In this century, democracies must better account for information geopolitics across all dimensions of domestic policy and national strategy….(More)”.

Social media data reveal where visitors to nature locations provide potential benefits or threats to biodiversity


University of Helsinki: “In a new article published in the journal Science of the Total Environment, a team of researchers assessed global patterns of visitation rates, attractiveness, and pressure across more than 12,000 Important Bird and Biodiversity Areas (IBAs), which are sites of international significance for nature conservation, using geolocated data mined from social media (Twitter and Flickr).

The study found that Important Bird and Biodiversity Areas located in Europe and Asia, and in temperate biomes, had the highest density of social media users. Results also showed that sites of importance for congregatory species, which were also more accessible, more densely populated and provided more tourism facilities, received higher visitation than did sites richer in bird species.

“Resources in biodiversity conservation are woefully inadequate and novel data sources from social media provide openly available user-generated information about human-nature interactions, at an unprecedented spatio-temporal scale”, says Dr Anna Hausmann from the University of Helsinki, a conservation scientist leading the study. “Our group has been exploring and validating data retrieved from social media to understand people's preferences for experiencing nature in national parks at a local, national and continental scale”, she continues, “in this study, we expand our analyses to a global level”. …

“Social media content and metadata contain useful information for understanding human-nature interactions in space and time”, says Prof. Tuuli Toivonen, another co-author of the paper and the leader of the Digital Geography Lab at the University of Helsinki. “Social media data can also be used to cross-validate and enrich data collected by conservation organizations”, she continues. The study found that the 17 percent of IBAs that experts assessed to be under greater human disturbance also had a higher density of social media users….(More)”.
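The study's headline metric, the density of social media users around a site, can be sketched roughly as follows. The site centroids, radii, and geotagged posts below are hypothetical; the actual study works with real IBA boundaries and Twitter/Flickr data rather than simple circles:

```python
from math import radians, sin, cos, asin, sqrt, pi

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    R = 6371.0  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical site centroids (lat, lon, radius_km) and geotagged posts (user, lat, lon).
sites = {"site_a": (60.2, 24.9, 10.0), "site_b": (-1.3, 36.8, 10.0)}
posts = [
    ("user1", 60.21, 24.95),
    ("user1", 60.25, 24.80),
    ("user2", 60.19, 24.91),
    ("user3", -1.31, 36.82),
]

def user_density(sites, posts):
    """Unique users posting within each site's radius, per 1,000 km^2."""
    out = {}
    for name, (lat, lon, r) in sites.items():
        users = {u for u, plat, plon in posts
                 if haversine_km(lat, lon, plat, plon) <= r}
        area = pi * r * r  # circular site area, a simplifying assumption
        out[name] = len(users) / area * 1000
    return out

print(user_density(sites, posts))
```

Counting unique users rather than raw posts avoids one prolific account inflating a site's apparent visitation, which is one reason density of users (not posts) is the more robust measure.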

Open Data and the Private Sector


Chapter by Joel Gurin, Carla Bonini and Stefaan Verhulst in State of Open Data: “The open data movement launched a decade ago with a focus on transparency, good governance, and citizen participation. As other chapters in this collection have documented in detail, those critical uses of open data have remained paramount and are continuing to grow in importance at a time of fake news and increased secrecy. But the value of open data extends beyond transparency and accountability – open data is also an important resource for business and economic growth.

The past several years have seen an increased focus on the value of open data to the private sector. In 2012, the Open Data Institute (ODI) was founded in the United Kingdom (UK) and backed with GBP 10 million by the UK government to maximise the value of open data in business and government. A year later, McKinsey released a report suggesting open data could help unlock USD 3 to 5 trillion in economic value annually. At around the same time, Monsanto acquired the Climate Corporation, a digital agriculture company that leverages open data to inform farmers, for approximately USD 1.1 billion. In 2014, the GovLab launched the Open Data 500, the first national study of businesses using open government data (now in six countries), and, in 2015, Open Data for Development (OD4D) launched the Open Data Impact Map, which today contains more than 1,100 examples of private sector companies using open data. The potential business applications of open data continue to be a priority for many governments around the world as they plan and develop their data programmes.

The use of open data has become part of the broader business practice of using data and data science to inform business decisions, ranging from launching new products and services to optimising processes and outsmarting the competition. In this chapter, we take stock of the state of open data and the private sector by analysing how the private sector both leverages and contributes to the open data ecosystem….(More)”.

Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law


Paper by Chelsea Barabas: “Data-driven decision-making regimes, often branded as “artificial intelligence,” are rapidly proliferating across the US criminal justice system as a means of predicting and managing the risk of crime and addressing accusations of discriminatory practices. These data regimes have come under increased scrutiny, as critics point out the myriad ways that they can reproduce or even amplify pre-existing biases in the criminal justice system. This essay examines contemporary debates regarding the use of “artificial intelligence” as a vehicle for criminal justice reform by closely examining two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: 1) the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions, and 2) the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency and validity in these systems.

The essay argues that attempts to render AI-branded tools more accurate by addressing narrow notions of “bias” miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The article concludes by calling for an abolitionist understanding of the role and function of the carceral state in order to fundamentally reformulate the questions we ask, the way we characterize existing data, and how we identify and fill gaps in the existing data regimes of the carceral state….(More)”