Building Trust for Inter-Organizational Data Sharing: The Case of the MLDE


Paper by Heather McKay, Sara Haviland, and Suzanne Michael: “There is increasing interest in sharing data, once siloed in separate agencies, across agencies and even between states. Driving this is a need to better understand how people experience education and work, and their pathways through each. A data-sharing approach offers many possible advantages, allowing states to leverage pre-existing data systems to conduct increasingly sophisticated and complete analyses. However, information sharing across state organizations presents a series of complex challenges, one of which is the central role trust plays in building successful data-sharing systems. Trust building between organizations is therefore crucial to ensuring project success.

This brief examines the process of building trust within the context of the development and implementation of the Multistate Longitudinal Data Exchange (MLDE). The brief is based on research and evaluation activities conducted by Rutgers’ Education & Employment Research Center (EERC) over the past five years, which included 40 interviews with state leaders and the Western Interstate Commission for Higher Education (WICHE) staff, observations of user group meetings, surveys, and MLDE document analysis. It is one in a series of MLDE briefs developed by EERC….(More)”.

Leveraging Telecom Data to Aid Humanitarian Efforts


Data Collaborative Case Study by Michelle Winowatan, Andrew J. Zahuranec, Andrew Young, and Stefaan Verhulst: “Following the 2015 earthquake in Nepal, Flowminder, a data analytics nonprofit, and NCell, a mobile operator in Nepal, formed a data collaborative. Using call detail records (CDR, a type of mobile operator data) provided by NCell, Flowminder estimated the number of people displaced by the earthquake and their location. The result of the analysis was provided to various humanitarian agencies responding to the crisis in Nepal to make humanitarian aid delivery more efficient and targeted.

Data Collaboratives Model: Based on our typology of data collaborative practice areas, the initiative follows the trusted intermediary model of data collaboration, specifically a third-party analytics approach. Third-party analytics projects involve trusted intermediaries — such as Flowminder — who access private-sector data, conduct targeted analysis, and share insights with public or civil sector partners without sharing the underlying data. This approach enables public interest uses of private-sector data while retaining strict access control. It brings outside data expertise that would likely not otherwise be available through direct bilateral collaboration between data holders and users….(More)”.
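To make the third-party analytics pattern concrete, the sketch below shows one simple way displacement might be estimated from call detail records: compare each subscriber's most-used cell tower before and after the earthquake, and count changes by origin. This is only an illustrative sketch; Flowminder's actual methodology is far more sophisticated, and all subscriber records and tower names here are invented.

```python
# Illustrative sketch only: toy CDR-based displacement estimate.
# All records and tower names are hypothetical.
from collections import Counter
from datetime import date

# Hypothetical call detail records: (subscriber_id, call_date, cell_tower)
cdrs = [
    ("A", date(2015, 4, 20), "kathmandu_1"),
    ("A", date(2015, 4, 22), "kathmandu_1"),
    ("A", date(2015, 4, 28), "pokhara_3"),   # appears elsewhere after the quake
    ("B", date(2015, 4, 21), "gorkha_2"),
    ("B", date(2015, 4, 29), "gorkha_2"),    # stayed put
]

EVENT = date(2015, 4, 25)  # date of the 2015 Nepal earthquake

def modal_tower(towers):
    """Most frequently seen tower: a crude proxy for 'home' location."""
    return Counter(towers).most_common(1)[0][0]

def estimate_displacement(cdrs, event):
    """Count subscribers whose modal tower changed after the event, by origin."""
    before, after = {}, {}
    for sub, day, tower in cdrs:
        (before if day < event else after).setdefault(sub, []).append(tower)
    displaced = Counter()
    for sub in before.keys() & after.keys():  # subscribers seen in both periods
        origin = modal_tower(before[sub])
        if origin != modal_tower(after[sub]):
            displaced[origin] += 1
    return displaced

print(estimate_displacement(cdrs, EVENT))  # → Counter({'kathmandu_1': 1})
```

Note that only the aggregated counts per origin area would leave the intermediary; the raw records stay with the operator and analyst, which is what the trusted-intermediary model is designed to guarantee.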

Improving data access democratizes and diversifies science


Research article by Abhishek Nagaraj, Esther Shears, and Mathijs de Vaan: “Data access is critical to empirical research, but past work on open access is largely restricted to the life sciences and has not directly analyzed the impact of data access restrictions. We analyze the impact of improved data access on the quantity, quality, and diversity of scientific research. We focus on the effects of a shift in the accessibility of satellite imagery data from Landsat, a NASA program that provides valuable remote-sensing data. Our results suggest that improved access to scientific data can lead to a large increase in the quantity and quality of scientific research. Further, better data access disproportionately enables the entry of scientists with fewer resources, and it promotes diversity of scientific research….(More)”

TraceTogether


Case Notes by Mitchell B. Weiss and Sarah Mehta: “By April 7, 2020, over 1.4 million people worldwide had contracted the novel coronavirus (COVID-19). Governments raced to curb the spread of COVID-19 by scaling up testing, quarantining those infected, and tracing their possible contacts. It had taken Singapore’s Government Technology Agency (GovTech) and Ministry of Health (MOH) all of eight weeks to build TraceTogether, the world’s first nationwide Bluetooth-based contact tracing system, and deploy it in an attempt to slow the spread of COVID-19. From late January to mid-March 2020, GovTech’s Jason Bay and his team raced to create a technology that would supplement the work of Singapore’s human contact tracers. Days after its launch, Singapore’s foreign minister announced plans to open source the technology. Now, in early April, TraceTogether was a beta for the world. Whether the system would really help in Singapore, and whether other countries should adopt it, remained wide-open questions….(More)”.

Going Beyond the Smart City? Implementing Technopolitical Platforms for Urban Democracy in Madrid and Barcelona


Paper by Adrian Smith & Pedro Prieto Martín: “Digital platforms for urban democracy are analyzed in Madrid and Barcelona. These platforms permit citizens to debate urban issues with other citizens; to propose developments, plans, and policies for city authorities; and to influence how city budgets are spent. Contrasting with neoliberal assumptions about Smart Citizenship, the technopolitics discourse underpinning these developments recognizes that the technologies facilitating participation have themselves to be developed democratically. That is, technopolitical platforms are built and operate as open, commons-based processes for learning, reflection, and adaptation. These features prove vital to platform implementation consistent with aspirations for citizen engagement and activism….(More)”.

Monitoring Corruption: Can Top-down Monitoring Crowd-Out Grassroots Participation?


Paper by Robert M. Gonzalez, Matthew Harvey, and Foteini Tzachrista: “Empirical evidence on the effectiveness of grassroots monitoring is mixed. This paper proposes a previously unexplored mechanism that may explain this result. We argue that the presence of credible and effective top-down monitoring alternatives can undermine citizen participation in grassroots monitoring efforts. Building on Olken’s (2009) road-building field experiment in Indonesia, we find a large and robust effect of the participation interventions on missing expenditures in villages without an audit in place. However, this effect vanishes as soon as an audit is simultaneously implemented in the village. We find evidence of crowding-out effects: in government audit villages, individuals are less likely to attend, talk, and actively participate in accountability meetings. They are also significantly less likely to voice general problems and corruption-related problems, and to take serious actions to address these problems. Despite policies promoting joint implementation of top-down and bottom-up interventions, this paper shows that top-down monitoring can undermine rather than complement grassroots efforts….(More)”.

The AI Powered State: What can we learn from China’s approach to public sector innovation?


Essay collection edited by Nesta: “China is striding ahead of the rest of the world in terms of its investment in artificial intelligence (AI), rate of experimentation and adoption, and breadth of applications. In 2017, China announced its aim of becoming the world leader in AI technology by 2030. AI innovation is now a key national priority, with central and local government spending on AI estimated to be in the tens of billions of dollars.

While Europe and the US are also following AI strategies designed to transform the public sector, there has been surprisingly little analysis of what practical lessons can be learnt from China’s use of AI in public services. Given China’s rapid progress in this area, it is important for the rest of the world to pay attention to developments in China if it wants to keep pace.

This essay collection finds that examining China’s experience of public sector innovation offers valuable insights for policymakers. Not everything is applicable to a western context – there are social, political and ethical concerns that arise from China’s use of new technologies in public services and governance – but there is still much that can be learned from its experience while also acknowledging what should be criticized and avoided….(More)”.

The Computermen


Podcast Episode by Jill Lepore: “In 1966, just as the foundations of the Internet were being imagined, the federal government considered building a National Data Center. It would be a centralized federal facility to hold computer records from each federal agency, in the same way that the Library of Congress holds books and the National Archives holds manuscripts. Proponents argued that it would help regulate and compile the vast quantities of data the government was collecting. Quickly, though, fears about privacy, government conspiracies, and government ineptitude buried the idea. But now, that National Data Center looks like a missed opportunity to create rules about data and privacy before the Internet took off. And in the absence of government action, corporations have made those rules themselves….(More)”.

Panopticon Reborn: Social Credit as Regulation for the Age of AI


Paper by Kevin Werbach: “Technology scholars, policy-makers, and executives in Europe and the United States disagree violently about what the digitally connected world should look like. They agree on what it shouldn’t: the Orwellian panopticon of China’s Social Credit System (SCS). SCS is a government-led initiative to promote data-driven compliance with law and social values, using databases, analytics, blacklists, and software applications. In the West, it is widely viewed as a diabolical effort to crush any spark of resistance to the dictates of the Chinese Communist Party (CCP) and its corporate emissaries. This picture is, if not wholly incorrect, decidedly incomplete. SCS is the world’s most advanced prototype of a regime of algorithmic regulation. It is a sophisticated and comprehensive effort not only to expand algorithmic control, but also to restrain it. Understanding China’s system is crucial for resolving the great challenges we face in the emerging era of relentless data aggregation, ubiquitous analytics, and algorithmic control….(More)”.

The “Social” Side of Big Data: Teaching BD Analytics to Political Science Students


Case report by Giampiero Giacomello and Oltion Preka: “In an increasingly technology-dependent world, it is not surprising that STEM (Science, Technology, Engineering, and Mathematics) graduates are in high demand. This state of affairs, however, has made the public overlook the fact that not only are computing and artificial intelligence naturally interdisciplinary, but a huge portion of generated data comes from human–computer interactions and is thus social in character and nature. Hence, social science practitioners should be in demand too, but this does not seem to be the case. One of the reasons for such a situation is that political and social science departments worldwide tend to remain in their “comfort zone” and see their disciplines quite traditionally, but by doing so they cut themselves off from many positions today. The authors believed that these conditions should and could be changed, and thus over a few years created a specifically tailored course for students in Political Science. This paper examines the experience of the last year of such a program, which, after several tweaks and adjustments, is now fully operational. The results and students’ appreciation are quite remarkable. Hence the authors considered the experience worth sharing, so that colleagues in social and political science departments may feel encouraged to follow and replicate such an example….(More)”