The Co-Creation Compass: From Research to Action


Policy Brief by Jill Dixon et al.: “Modern public administrations face a wider range of challenges than in the past, from designing effective social services that help vulnerable citizens, to regulating data sharing between banks and fintech startups to ensure competition and growth, to mainstreaming gender policies effectively across the departments of a large public administration.

These very different goals have one thing in common. To be solved, they require collaboration with other entities – citizens, companies and other public administrations and departments. The buy-in of these entities is the factor determining success or failure in achieving the goals. To help resolve this problem, social scientists, researchers and students of public administration have devised several novel tools, some of which draw heavily on the most advanced management thinking of the last decade.

First and foremost is co-creation – an awkward-sounding word for a relatively simple idea: the notion that better services can be designed and delivered by listening to users, by creating feedback loops in which their success (or failure) can be studied, by iterating incremental improvements through frequent small-scale experimentation that delivers large-scale learnings, and by ultimately involving users themselves in designing how these services can be made most effective and best delivered.

Co-creation tools and methods provide a structured manner for involving users, thereby maximising the probability of satisfaction, buy-in and adoption. As such, co-creation is not a digital tool; it is a governance tool. There is little doubt that working with citizens in re-designing the online service for school registration will boost the usefulness and effectiveness of the service. And failing to do so will result in yet another digital service struggling to gain adoption….(More)”

Data Is Power: Washington Needs to Craft New Rules for the Digital Age


Matthew Slaughter and David McCormick at Foreign Affairs: “…Working with all willing and like-minded nations, it should seek a structure for data that maximizes its immense economic potential without sacrificing privacy and individual liberty. This framework should take the form of a treaty that has two main parts.

First would be a set of binding principles that would foster the cross-border flow of data in the most data-intensive sectors—such as energy, transportation, and health care. One set of principles concerns how to value data and determine where it was generated. Just as traditional trade regimes require goods and services to be priced and their origins defined, so, too, must this framework create a taxonomy to classify data flows by value and source. Another set of principles would set forth the privacy standards that governments and companies would have to follow to use data. (Anonymizing data, made easier by advances in encryption and quantum computing, will be critical to this step.) A final principle, which would be conditional on achieving the other two, would be to promote as much cross-border and open flow of data as possible. Consistent with the long-established value of free trade, the parties should, for example, agree to not levy taxes on data flows—and diligently enforce that rule. And they would be wise to ensure that any negative impacts of open data flows, such as job losses or reduced wages, are offset through strong programs to help affected workers adapt to the digital economy.

Such standards would benefit every sector they applied to. Envision, for example, dozens of nations with data-sharing arrangements for autonomous vehicles, oncology treatments, and clean-tech batteries. Relative to their experience in today’s Balkanized world, researchers would be able to discover more data-driven innovations—and in more countries, rather than just in those that already have a large presence in these industries.

The second part of the framework would be free-trade agreements regulating the capital goods, intermediate inputs, and final goods and services of the targeted sectors, all in an effort to maximize the gains that might arise from data-driven innovations. Thus would the traditional forces of comparative advantage and global competition help bring new self-driving vehicles, new lifesaving chemotherapy compounds, and new sources of renewable energy to participating countries around the world. 

There is already a powerful example of such agreements. In 1996, dozens of countries accounting for nearly 95 percent of world trade in information technology ratified the Information Technology Agreement, a multilateral trade deal under the WTO. The agreement ultimately eliminated all tariffs for hundreds of IT-related capital goods, intermediate inputs, and final products—from machine tools to motherboards to personal computers. The agreement proved to be an important impetus for the subsequent wave of the IT revolution, a competitive spur that led to productivity gains for firms and price declines for consumers….(More)”.

Citizen science is booming during the pandemic


Sigal Samuel at Vox: “…The pandemic has driven a huge increase in participation in citizen science, where people without specialized training collect data out in the world or perform simple analyses of data online to help out scientists.

Stuck at home with time on their hands, millions of amateurs around the world are gathering information on everything from birds to plants to Covid-19 at the request of institutional researchers. And while quarantine is mostly a nightmare for us, it’s been a great accelerant for science.

Early in the pandemic, a firehose of data started gushing forth on citizen science platforms like Zooniverse and SciStarter, where scientists ask the public to analyze their data online. It’s a form of crowdsourcing that has the added bonus of giving volunteers a real sense of community; each project has a discussion forum where participants can pose questions to each other (and often to the scientists behind the projects) and forge friendly connections.

“There’s a wonderful project called Rainfall Rescue that’s transcribing historical weather records. It’s a climate change project to understand how weather has changed over the past few centuries,” Laura Trouille, vice president of citizen science at the Adler Planetarium in Chicago and co-lead of Zooniverse, told me. “They uploaded a dataset of 10,000 weather logs that needed transcribing — and that was completed in one day!”

Some Zooniverse projects, like Snapshot Safari, ask participants to classify animals in images from wildlife cameras. That project saw classifications go from 25,000 to 200,000 per day in the initial days of lockdown. And across all its projects, Zooniverse reported that 200,000 participants contributed more than 5 million classifications of images in one week alone — the equivalent of 48 years of research. Although participation has slowed a bit since the spring, it’s still four times what it was pre-pandemic.

Many people are particularly eager to help tackle Covid-19, and scientists have harnessed their energy. Carnegie Mellon University’s Roni Rosenfeld set up a platform where volunteers can help artificial intelligence predict the spread of the coronavirus, even if they know nothing about AI. Researchers at the University of Washington invited people to contribute to Covid-19 drug discovery using a computer game called Foldit; they experimented with designing proteins that could attach to the virus that causes Covid-19 and prevent it from entering cells….(More)”.

Towards intellectual freedom in an AI Ethics Global Community


Paper by Christoph Ebell et al.: “The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate as well as academic research settings involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors, and biases, and individual researchers’ work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession guiding a technological future that makes life better for all….(More)”.

Leave No Migrant Behind: The 2030 Agenda and Data Disaggregation


Guide by the International Organization for Migration (IOM): “To date, disaggregation of global development data by migratory status remains low. Migrants are largely invisible in official SDG data. As the global community approaches 2030, very little is known about the impact of the 2030 Agenda on migrants. Despite a growing focus worldwide on data disaggregation, namely the breaking down of data into smaller sub-categories, there is a lack of practical guidance on the topic that can be tailored to address individual needs and capacities of countries.

Developed by IOM’s Global Migration Data Analysis Centre (GMDAC), the guide titled ‘Leave No Migrant Behind: The 2030 Agenda and Data Disaggregation’ centres on nine SDGs, focusing on hunger, education, and gender equality, among others. The document is the first of its kind, in that it seeks to address a range of different categorization interests and needs related to international migrants and suggests practical steps that practitioners can tailor to best fit their context…The guide also highlights the key role disaggregation plays in understanding the many positive links between migration and the SDGs, including migrants’ contributions to the 2030 Agenda.

The guide outlines key steps for actors to plan and implement initiatives by looking at sex, gender, age and disability, in addition to migratory status. These steps include undertaking awareness-raising, identifying priority indicators, conducting data mapping, and more…. Read more about the importance of data disaggregation for SDG indicators here….(More)”

What Is Mobility Data? Where Is It Used?


Brief by Andrew J. Zahuranec, Stefaan Verhulst, Andrew Young, Aditi Ramesh, and Brennan Lake: “Mobility data is data about the geographic location of a device passively produced through normal activity. Throughout the pandemic, public health experts and public officials have used mobility data to understand patterns of COVID-19’s spread and the impact of disease control measures. However, privacy advocates and others have questioned the need for this data and raised concerns about the capacity of such data-driven tools to facilitate surveillance, improper data use, and other exploitative practices.
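
The definition above is compact; as a rough illustration only (the field names and values below are assumptions for this digest, not a schema taken from the brief), a single passively generated mobility record typically looks something like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MobilityRecord:
    """One passively generated location observation from a device.
    Fields are illustrative; providers differ in what they actually share."""
    device_id: str       # pseudonymous identifier, not a name or phone number
    timestamp: datetime  # when the location was observed
    latitude: float
    longitude: float
    accuracy_m: float    # reported positional accuracy in metres

# Hypothetical example record
record = MobilityRecord(
    device_id="a3f9c2e1",
    timestamp=datetime(2021, 4, 12, 8, 30, tzinfo=timezone.utc),
    latitude=40.7411,
    longitude=-73.9897,
    accuracy_m=12.0,
)
print(record)
```

Aggregating many such records over time and across devices is what yields the movement patterns that public health analysts worked with during the pandemic.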

In April, The GovLab, Cuebiq, and the Open Data Institute released The Use of Mobility Data for Responding to the COVID-19 Pandemic, which relied on several case studies to look at the opportunities, risks, and challenges associated with mobility data. Today, we hope to supplement that report with a new resource: a brief on what mobility data is and the different types of data it can include. The piece is a one-pager to allow decision-makers to easily read it. It provides real-world examples from the report to illustrate how different data types can be used in a responsible way…..(More)”.

Socially Responsible Data Labeling


Blog By Hamed Alemohammad at Radiant Earth Foundation: “Labeling satellite imagery is the process of applying tags to scenes to provide context or confirm information. These labeled training datasets form the basis for machine learning (ML) algorithms. The labeling undertaking (in many cases) requires humans to meticulously and manually assign captions to the data, allowing the model to learn patterns and estimate them for other observations.

For a wide range of Earth observation applications, training data labels can be generated by annotating satellite imagery. Images can be classified to identify the entire image as a class (e.g., water body) or for specific objects within the satellite image. However, annotation tasks can only identify features observable in the imagery. For example, with Sentinel-2 imagery at the 10-meter spatial resolution, one cannot detect the more detailed features of interest, such as crop types, but would be able to distinguish large croplands from other land cover classes.

Human error in labeling is inevitable and results in uncertainties and errors in the final label. As a result, it’s best practice to examine images multiple times and then assign a majority or consensus label. In general, significant human resources and financial investment are needed to annotate imagery at large scales.
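
As a minimal sketch of the consensus step described above (not Radiant Earth’s actual pipeline; the function, example labels, and agreement threshold are assumptions for illustration), a majority vote over independent annotators’ labels might look like this:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.5):
    """Return the majority label for one image, or None when agreement
    falls below the threshold (flagging the image for expert review).

    annotations: labels assigned by independent annotators,
                 e.g. ["water body", "water body", "cropland"].
    """
    if not annotations:
        return None
    label, votes = Counter(annotations).most_common(1)[0]
    agreement = votes / len(annotations)
    return label if agreement >= min_agreement else None

# Three annotators looked at the same image chip:
print(consensus_label(["water body", "water body", "cropland"]))  # water body
print(consensus_label(["cropland", "bare soil", "shrubland"]))    # None -> needs review
```

Even a simple rule like this makes the cost trade-off visible: every additional annotator pass improves label quality but multiplies the human effort mentioned above.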

In 2018, we identified the need for a geographically diverse land cover classification training dataset that required human annotation and validation of labels. We proposed to Schmidt Futures a project to generate such a dataset to advance land cover classification globally. In this blog post, we discuss what we’ve learned developing LandCoverNet, including the keys to generating good quality labels in a socially responsible manner….(More)”.

How we mapped billions of trees in West Africa using satellites, supercomputers and AI


Martin Brandt and Kjeld Rasmussen in The Conversation: “The possibility that vegetation cover in semi-arid and arid areas was retreating has long been an issue of international concern. In the 1930s it was first theorized that the Sahara was expanding and woody vegetation was on the retreat. In the 1970s, spurred by the “Sahel drought”, focus was on the threat of “desertification”, caused by human overuse and/or climate change. In recent decades, the potential impact of climate change on the vegetation has been the main concern, along with the feedback of vegetation on the climate, associated with the role of the vegetation in the global carbon cycle.

Using high-resolution satellite data and machine-learning techniques at supercomputing facilities, we have now been able to map billions of individual trees and shrubs in West Africa. The goal is to better understand the real state of vegetation coverage and evolution in arid and semi-arid areas.

Finding a shrub in the desert – from space

Since the 1970s, satellite data have been used extensively to map and monitor vegetation in semi-arid areas worldwide. Images are available in “high” spatial resolution (with NASA’s satellites Landsat MSS and TM, and ESA’s satellites Spot and Sentinel) and “medium or low” spatial resolution (NOAA AVHRR and MODIS).

To accurately analyse vegetation cover at continental or global scale, it is necessary to use the highest-resolution images available – with a resolution of 1 metre or less – and up until now the costs of acquiring and analysing the data have been prohibitive. Consequently, most studies have relied on moderate- to low-resolution data. This has not allowed for the identification of individual trees, and therefore these studies only yield aggregate estimates of vegetation cover and productivity, mixing herbaceous and woody vegetation.

In a new study covering a large part of the semi-arid Sahara-Sahel-Sudanian zone of West Africa, published in Nature in October 2020, an international group of researchers was able to overcome these limitations. By combining an immense amount of high-resolution satellite data, advanced computing capacities, machine-learning techniques and extensive field data gathered over decades, we were able to identify individual trees and shrubs with a crown area of more than 3 m2 with great accuracy. The result is a database of 1.8 billion trees in the region studied, available to all interested….(More)”

Supercomputing, machine learning, satellite data and field assessments make it possible to map billions of individual trees in West Africa. Martin Brandt, Author provided
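
As a hedged sketch of one small step in such a workflow (this is not the authors’ published method; the 0.5 m pixel size, the binary segmentation-mask input, and the function below are illustrative assumptions), filtering detected crowns by the 3 m² area threshold mentioned above could look like this:

```python
import numpy as np
from scipy import ndimage

def crown_areas(mask, pixel_size_m=0.5, min_area_m2=3.0):
    """Label connected crowns in a binary segmentation mask, convert their
    pixel counts to square metres, and keep crowns at or above the minimum
    area (3 m^2 in the study). Returns a list of crown areas in m^2."""
    labeled, n = ndimage.label(mask)                     # connected components = candidate crowns
    pixel_area = pixel_size_m ** 2                       # ground area of one pixel in m^2
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))  # pixels per crown
    return [float(s) * pixel_area for s in sizes if float(s) * pixel_area >= min_area_m2]

# Toy mask at an assumed 0.5 m resolution: only the 16-pixel blob (4 m^2)
# passes the 3 m^2 threshold; the 4-pixel blob (1 m^2) is discarded.
mask = np.zeros((20, 20), dtype=int)
mask[2:6, 2:6] = 1
mask[10:12, 10:12] = 1
print(crown_areas(mask))  # [4.0]
```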

Regulating Personal Data : Data Models and Digital Services Trade


Report by Martina Francesca Ferracane and Erik van der Marel: “While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second model based on conditional transfers and processing, and a third model based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data.

The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model they adhere to for the two components of data regulation (i.e. cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade compared to countries with different regulatory data models. The results show that sharing the open data model for cross-border data transfers is positively associated with trade in digital services, while sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country-pairs sharing the limited model, instead, exhibit a double whammy: they show negative trade correlations across both components of data regulation. Robustness checks control for restrictions in digital services, the quality of digital infrastructure, as well as for the use of alternative data sources….(More)”.
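
As a purely illustrative sketch of the kind of gravity specification summarised above (the toy data, variable names, and plain OLS estimator are assumptions; the study’s actual controls, fixed effects, and estimator may well differ), one could regress log bilateral digital services trade on dummies indicating that a country pair shares each data model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-pair data: bilateral digital services trade, standard
# gravity controls, and dummies set to 1 when both countries follow the same
# data model. Pairs with different models have all three dummies at zero.
df = pd.DataFrame({
    "digital_trade":    [120.0, 310.0, 85.0, 95.0, 40.0, 22.0, 60.0, 150.0],
    "log_distance":     [7.1, 6.5, 8.2, 7.8, 8.9, 9.0, 8.0, 7.3],
    "log_gdp_pair":     [55.2, 57.9, 52.1, 53.4, 48.3, 46.0, 50.5, 54.8],
    "same_open":        [1, 1, 0, 0, 0, 0, 0, 0],
    "same_conditional": [0, 0, 1, 1, 0, 0, 0, 0],
    "same_limited":     [0, 0, 0, 0, 1, 1, 0, 0],
})

# Gravity-style regression: log trade on shared-model dummies plus controls.
model = smf.ols(
    "np.log(digital_trade) ~ log_distance + log_gdp_pair"
    " + same_open + same_conditional + same_limited",
    data=df,
).fit()
print(model.params)
```

In the study’s findings, sharing the open model (for cross-border transfers) or the conditional model (for domestic processing) is positively associated with digital services trade, while sharing the limited model is negatively associated; on toy data like this the coefficients are meaningless, and the sketch only shows the shape of the specification.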

The Locus Charter


Press Release: “A coalition of location data practitioners has developed an ethics charter to promote responsible use of location technology. The Locus Charter, facilitated by The Benchmark Initiative and EthicalGEO, is a proposed set of common international principles that can guide responsible practice when using location data, including through safeguarding privacy, protecting the vulnerable, and addressing any harmful impacts of bias in data.

The Benchmark Initiative and EthicalGEO are inviting individuals, businesses, and government agencies from around the world to join The Locus Charter community and help to shape equitable and sustainable practice around the use of location data. Member organisations include the American Geographical Society and Britain’s mapping agency, Ordnance Survey.

Location data is currently at the heart of the debate around digital privacy. Tech giants Apple and Facebook are in conflict over how much apps should be able to track users. Recent research shows personal information can be inferred from location data collected from smartphones, and that anonymisation can often be reversed to reveal people’s identities. The New York Times has unveiled a largely hidden trade in location data about individual people, collected from smartphones. As phones and other devices generate more detailed location data, these challenges grow…

The Locus Charter aims to restore public trust in location technology, in order to enable its transformative power to improve public health, enhance our response to the Covid-19 pandemic, fight climate change, protect the environment and more….(More)”.