The State of Open Data Policy Repository


The State of Open Data Policy Repository is a collection of recent policy developments surrounding open data, data reuse, and data collaboration around the world. 

A refinement of the compilation of policies launched at the Open Data Policy Summit last year, the State of Open Data Policy Online Repository is an interactive resource that tracks recent legislation, directives, and proposals affecting open data and data collaboration worldwide. It captures which data collaboration issues policymakers are currently focused on and where the momentum for data innovation is heading in countries around the world.

Users can filter policies according to region, country, focus, and type of data sharing. The review has so far surfaced approximately 60 examples of recent legislative acts, proposals, directives, and other policy documents, from which the Open Data Policy Lab draws findings about the need to promote more innovative policy frameworks.
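
As a purely illustrative sketch of that kind of faceted filtering, the snippet below assumes a hypothetical CSV export of the repository with columns named region, country, focus, and data_sharing_type; the actual repository is browsed interactively and its schema may differ.

```python
# Hypothetical example only: the file name and column names are assumptions,
# not the repository's actual schema.
import pandas as pd

policies = pd.read_csv("open_data_policies.csv")  # assumed export of ~60 entries

# e.g. European policies whose stated focus touches on data collaboration
subset = policies[
    (policies["region"] == "Europe")
    & (policies["focus"].str.contains("data collaboration", case=False, na=False))
]
print(subset[["country", "focus", "data_sharing_type"]].to_string(index=False))
```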

This collection shows that, despite increased interest in the third-wave conception of open data, policy development remains nascent. It is primarily concerned with open data repositories at the expense of alternative forms of collaboration. Most policies listed focus on releasing government data, and many nations still lack open data rules or a mechanism to put such policies into practice.

This work reveals a pressing need for institutions to create frameworks that can guide data professionals, since inaction risks both enabling the misuse of data and squandering opportunities to use it…(More)”.

Computational Social Science for the Public Good: Towards a Taxonomy of Governance and Policy Challenges


Chapter by Stefaan G. Verhulst: “Computational Social Science (CSS) has grown exponentially as the process of datafication and computation has increased. This expansion, however, has yet to translate into effective actions to strengthen the public good in the form of policy insights and interventions. This chapter presents 20 limiting factors in how data is accessed and analysed in the field of CSS. The challenges are grouped into the following six categories based on their area of direct impact: Data Ecosystem, Data Governance, Research Design, Computational Structures and Processes, the Scientific Ecosystem, and Societal Impact. Through this chapter, we seek to construct a taxonomy of CSS governance and policy challenges. By first identifying the problems, we can then move to effectively address them through research, funding, and governance agendas that drive stronger outcomes…(More)”. Full Book: Handbook of Computational Social Science for Policy

Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe


Report by Derya Ozkul: “The EU’s Artificial Intelligence Act proposal categorises AI uses in immigration, asylum and border control as high risk, but new technologies are already used in many aspects of migration and asylum ‘management’, often in ways that go beyond what is commonly imagined. To reflect on the AI Act proposal, we first need to understand what the current uses are, but this information is not always publicly available.

The new report by the Algorithmic Fairness for Asylum Seekers and Refugees (AFAR) project shows the multitude of uses of new technologies across Europe at the national and the EU levels. In particular, the report explores in detail the use of forecasting tools, risk assessment and triaging systems, processing of short- and long-term residency and citizenship applications, document verification, speech and dialect recognition, distribution of welfare benefits, matching tools, mobile phone data extraction and electronic monitoring, across Europe. It highlights the need for transparency and thorough training of decision-makers, as well as the inclusion of migrants’ interests in the design, decision, and implementation stages…(More)”.

The Health of Democracies During the Pandemic: Results from a Randomized Survey Experiment


Paper by Marcella Alsan et al.: “Concerns have been raised about the “demise of democracy”, possibly accelerated by pandemic-related restrictions. Using a survey experiment involving 8,206 respondents from five Western democracies, we find that subjects randomly exposed to information regarding civil liberties infringements undertaken by China and South Korea to contain COVID-19 became less willing to sacrifice rights and more worried about their long-term erosion. However, our treatment did not increase support for democratic procedures more generally, despite our prior evidence that pandemic-related health risks diminished such support. These results suggest that the start of the COVID-19 crisis was a particularly vulnerable time for democracies…(More)”.

Global Renewables Watch


About: “The Global Renewables Watch is a first-of-its-kind living atlas intended to map and measure all utility-scale solar and wind installations on Earth using artificial intelligence (AI) and satellite imagery, allowing users to evaluate clean energy transition progress and track trends over time. It also provides unique spatial data on land use trends to help achieve the dual aims of protecting the environment and increasing renewable energy capacity….(More)”

The Smartness Mandate


Book by Orit Halpern and Robert Mitchell: “Smart phones. Smart cars. Smart homes. Smart cities. The imperative to make our world ever smarter in the face of increasingly complex challenges raises several questions: What is this “smartness mandate”? How has it emerged, and what does it say about our evolving way of understanding—and managing—reality? How have we come to see the planet and its denizens first and foremost as data-collecting instruments?

In The Smartness Mandate, Orit Halpern and Robert Mitchell radically suggest that “smartness” is not primarily a technology, but rather an epistemology. Through this lens, they offer a critical exploration of the practices, technologies, and subjects that such an understanding relies upon—above all, artificial intelligence and machine learning. The authors approach these not simply as techniques for solving problems of calculation, but rather as modes of managing life (human and other) in terms of neo-Darwinian evolution, distributed intelligences, and “resilience,” all of which have serious implications for society, politics, and the environment.

The smartness mandate constitutes a new form of planetary governance, and Halpern and Mitchell aim to map the logic of this seemingly inexorable and now naturalized demand to compute, illuminate the genealogy of how we arrived here, and point to alternative imaginaries of the possibilities and potentials of smart technologies and infrastructures…(More)”.

Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data


Paper by Daniel J. Solove: “Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation (GDPR), sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, biometric data, and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use non-sensitive data as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most non-sensitive data can give rise to inferences about sensitive data. In many privacy laws, data that can give rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive — as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake — they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm.

To be effective, privacy law must focus on use, harm, and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be based on the use of personal data and proportionate to the harm and risk involved with those uses…(More)”.

The Doctor Who Wasn’t There: Technology, History, and the Limits of Telehealth


Book by Jeremy A. Greene: “The Doctor Who Wasn’t There traces the long arc of enthusiasm for—and skepticism of—electronic media in health and medicine. Over the past century, a series of new technologies promised to democratize access to healthcare. From the humble telephone to the connected smartphone, from FM radio to wireless wearables, from cable television to the “electronic brains” of networked mainframe computers: each new platform has promised a radical reformation of the healthcare landscape. With equal attention to the history of technology, the history of medicine, and the politics and economies of American healthcare, physician and historian Jeremy A. Greene explores the role that electronic media play, for better and for worse, in the past, present, and future of our health.

Today’s telehealth devices are far more sophisticated than the hook-and-ringer telephones of the 1920s, the radios that broadcasted health data in the 1940s, the closed-circuit televisions that enabled telemedicine in the 1950s, or the online systems that created electronic medical records in the 1960s. But the ethical, economic, and logistical concerns they raise are prefigured in the past, as are the gaps between what was promised and what was delivered. Each of these platforms also produced subtle transformations in health and healthcare that we have learned to forget, displaced by promises of ever newer forms of communication that took their place. 

Illuminating the social and technical contexts in which electronic medicine has been conceived and put into practice, Greene’s history shows the urgent stakes, then and now, for those who would seek in new media the means to build a more equitable future for American healthcare….(More)”.

Who owns the map? Data sovereignty and government spatial data collection, use, and dissemination


Paper by Peter A. Johnson and Teresa Scassa: “Maps, created through the collection, assembly, and analysis of spatial data, are used to support government planning and decision-making. Traditionally, spatial data used to create maps are collected, controlled, and disseminated by government, although over time, this role has shifted. This shift has been driven by the availability of alternate sources of data collected by private sector companies, and data contributed by volunteers to open mapping platforms, such as OpenStreetMap. In theorizing this shift, we provide examples of how governments use data sovereignty as a tool to shape spatial data collection, use, and sharing. We frame four models of how governments may navigate shifting spatial data sovereignty regimes: first, with government retaining complete control over data collection; second, with government contracting a third party to provide specific data collection services, but with data ownership and dissemination responsibilities resting with government; third, with government purchasing data under terms of access set by third-party data collectors, who disseminate data to several parties; and finally, with government retreating from or relinquishing data sovereignty altogether. Within this rapidly changing landscape of data providers, we propose that governments must consider how to address data sovereignty concerns to retain their ability to control data use in the public interest…(More)”.

How Smart Are the Robots Getting?


Cade Metz at The New York Times: “…These are not systems that anyone can properly evaluate with the Turing test — or any other simple method. Their end goal is not conversation.

Researchers at Google and DeepMind, which is owned by Google’s parent company, are developing tests meant to evaluate chatbots and systems like DALL-E, to judge what they do well, where they lack reason and common sense, and more. One test shows videos to artificial intelligence systems and asks them to explain what has happened. After watching someone tinker with an electric shaver, for instance, the A.I. must explain why the shaver did not turn on.

These tests feel like academic exercises — much like the Turing test. We need something that is more practical, that can really tell us what these systems do well and what they cannot, how they will replace human labor in the near term and how they will not.

We could also use a change in attitude. “We need a paradigm shift — where we no longer judge intelligence by comparing machines to human behavior,” said Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle….

At the same time, there are many ways these bots are superior to you and me. They do not get tired. They do not let emotion cloud what they are trying to do. They can instantly draw on far larger amounts of information. And they can generate text, images and other media at speeds and volumes we humans never could.

Their skills will also improve considerably in the coming years.

Researchers can rapidly hone these systems by feeding them more and more data. The most advanced systems, like ChatGPT, require months of training, but over those months, they can develop skills they did not exhibit in the past.

“We have found a set of techniques that scale effortlessly,” said Raia Hadsell, senior director of research and robotics at DeepMind. “We have a simple, powerful approach that continues to get better and better.”

The exponential improvement we have seen in these chatbots over the past few years will not last forever. The gains may soon level out. But even then, multimodal systems will continue to improve — and master increasingly complex skills involving images, sounds and computer code. And computer scientists will combine these bots with systems that can do things they cannot. ChatGPT failed Turing’s chess test. But we knew in 1997 that a computer could beat the best humans at chess. Plug ChatGPT into a chess program, and the hole is filled.
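
To make that “plug it in” point concrete, here is a minimal, hypothetical sketch: a thin routing layer hands chess positions to a dedicated engine (using the python-chess library and a locally installed Stockfish binary) and passes everything else to a conversational model, represented here by a placeholder function rather than any real chatbot API.

```python
# Illustrative sketch only: ask_chatbot() is a stand-in for a real chatbot,
# and the routing rule is deliberately simplistic.
# Requires the python-chess package and a Stockfish binary on the PATH.
from typing import Optional

import chess
import chess.engine


def ask_chatbot(prompt: str) -> str:
    """Placeholder for a call to a conversational model (hypothetical)."""
    return f"[chatbot answer to: {prompt}]"


def answer(prompt: str, fen: Optional[str] = None) -> str:
    if fen:  # the question concerns a concrete chess position: use the engine
        board = chess.Board(fen)
        with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
            best = engine.play(board, chess.engine.Limit(time=0.5))
        return f"Best move: {board.san(best.move)}"
    return ask_chatbot(prompt)  # everything else goes to the language model


print(answer("What's a strong opening move for White?", fen=chess.STARTING_FEN))
```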

In the months and years to come, these bots will help you find information on the internet. They will explain concepts in ways you can understand. If you like, they will even write your tweets, blog posts and term papers.

They will tabulate your monthly expenses in your spreadsheets. They will visit real estate websites and find houses in your price range. They will produce online avatars that look and sound like humans. They will make mini-movies, complete with music and dialogue…

Certainly, these bots will change the world. But the onus is on you to be wary of what these systems say and do, to edit what they give you, to approach everything you see online with skepticism. Researchers know how to give these systems a wide range of skills, but they do not yet know how to give them reason or common sense or a sense of truth.

That still lies with you…(More)”.