France asks its citizens how to meet its climate-change targets


The Economist on “An experiment in consultative democracy”: “A nurse, a roofer, an electrician, a former fireman, a lycée pupil, a photographer, a teacher, a marketing manager, an entrepreneur and a civil servant. Sitting on red velvet benches in a domed art-deco amphitheatre in Paris, they and 140 colleagues are part of an unusual democratic experiment in a famously centralised country. Their mission: to draw up measures to reduce French greenhouse-gas emissions by at least 40% by 2030, in line with an EU target that is otherwise in danger of being missed (and which the European Commission now wants to tighten). Six months ago, none of them had met. Now, they have just one month left to show that they can reinvent the French democratic process—and help save the planet. “It’s our moment,” Sylvain, one of the delegates, tells his colleagues from the podium. “We have the chance to propose something historic.”

On March 6th the “citizens’ climate convention” was due to begin its penultimate three-day sitting, the sixth since it began work last October. The convention is made up of a representative sample of the French population, selected by randomly generated telephone numbers. President Emmanuel Macron devised it in an attempt to calm the country after the gilets jaunes (yellow vests) crisis of 2018. In response to the demand for less top-down decision-making, he first launched what he grandly called a “great national debate”, which took place a year ago. He also pledged the creation of a citizens’ assembly. It is designed to focus on precisely the conundrum that provoked the original protests against a rise in the carbon tax on motor fuel: how to make green policy palatable, efficient and fair…(More)”.

Is Your Data Being Collected? These Signs Will Tell You Where


Flavie Halais at Wired: “Alphabet’s Sidewalk Labs is testing icons that provide “digital transparency” when information is collected in public spaces….

As cities incorporate digital technologies into their landscapes, they face the challenge of informing people of the many sensors, cameras, and other smart technologies that surround them. Few people have the patience to read through the lengthy privacy notice on a website or smartphone app. So how can a city let them know how they’re being monitored?

Sidewalk Labs, the Google sister company that applies technology to urban problems, is taking a shot. Through a project called Digital Transparency in the Public Realm, or DTPR, the company is demonstrating a set of icons, to be displayed in public spaces, that shows where and what kinds of data are being collected. The icons are being tested as part of Sidewalk Labs’ flagship project in Toronto, where it plans to redevelop a 12-acre stretch of the city’s waterfront. The signs would be displayed at each location where data would be collected—streets, parks, businesses, and courtyards.

Data collection is a core feature of the project, called Sidewalk Toronto, and the source of much of the controversy surrounding it. In 2017, Waterfront Toronto, the organization in charge of administering the redevelopment of the city’s eastern waterfront, awarded Sidewalk Labs the contract to develop the waterfront site. The project has ambitious goals: It says it could create 44,000 direct jobs by 2040 and has the potential to be the largest “climate-positive” community—removing more CO2 from the atmosphere than it produces—in North America. It will make use of new urban technology like modular street pavers and underground freight delivery. Sensors, cameras, and Wi-Fi hotspots will monitor and control traffic flows, building temperature, and crosswalk signals.

All that monitoring raises inevitable concerns about privacy, which Sidewalk aims to address—at least partly—by posting signs in the places where data is being collected.

The signs display a set of icons in the form of stackable hexagons, derived in part from a set of design rules developed by Google in 2014. Some describe the purpose for collecting the data (mobility, energy efficiency, or waste management, for example). Others refer to the type of data that’s collected, such as photos, air quality, or sound. When the data is identifiable, meaning it can be associated with a person, the hexagon is yellow. When the information is stripped of personal identifiers, the hexagon is blue…(More)”.
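To make the taxonomy above concrete, here is a minimal sketch of how a DTPR-style sign could be represented in software. It is not Sidewalk Labs' actual schema; the field names, labels and structure are assumptions based only on the description above (purpose icons, data-type icons, and a yellow or blue hexagon keyed to identifiability).

```python
# Minimal, hypothetical model of a DTPR-style sign: each hexagon pairs a
# purpose or data type with a colour derived from whether the data is
# personally identifiable (yellow) or de-identified (blue).
from dataclasses import dataclass


@dataclass
class Hexagon:
    label: str          # e.g. "mobility" (purpose) or "photos" (data type)
    kind: str           # "purpose" or "data_type"
    identifiable: bool  # True if the data can be associated with a person

    @property
    def colour(self) -> str:
        # Yellow signals identifiable data, blue signals de-identified data
        return "yellow" if self.identifiable else "blue"


sign = [
    Hexagon("mobility", "purpose", identifiable=False),
    Hexagon("photos", "data_type", identifiable=True),
    Hexagon("air quality", "data_type", identifiable=False),
]

for hexagon in sign:
    print(f"{hexagon.label:12s} ({hexagon.kind}) -> {hexagon.colour}")
```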

Good process is vital for good government


Andrea Siodmok and Matthew Taylor at the RSA: “…‘Bad’ process is time-wasting and energy-sapping. It can reinforce barriers to collaboration, solidify hierarchies and hamper adaptiveness.

‘Good process’ energises people, creates spaces for different ideas to emerge, builds trust and collective capacity.

The bad and good could be distinguished along several dimensions. Here are some:

Bad process:

  • Routine/happens because it happens            
  • Limited preparation and follow through         
  • Little or no facilitation            
  • Reinforces hierarchies, excludes key voices  
  • Rigid accountability focussed on blame           
  • Always formal and mandated           
  • Low trust/transactional       

Good process:

  • Mission/goal oriented – happens because it makes a difference
  • Sees process as part of a flow of change – clear accountability
  • Facilitated by people with necessary skills and techniques 
  • Inclusive, what matters is the quality of contributions not their source
  • Collective accountability focussed on learning 
  • Mixes formal and informal settings and methods, often voluntary
  • Trust enhancing/collaborative

Why is bad process so prevalent and good process so rare?

Because bad process is often the default. In the short term, bad process is easier, less resource-intensive, and less risky than good process.

Bringing people together in inclusive processes

Bringing key actors together in inclusive processes helps us both understand the system that maintains the status quo and build a joint sense of mission for a new one.

It also helps people start to identify and organise around key opportunities for change. 

One of the most positive developments to have occurred in and around Whitehall in recent years is the emergence of informal, system-spanning networks of public officials animated by shared values and goals, such as One Team Gov and a whole host of bottom-up networks on topics as diverse as wellbeing, inclusion, and climate change….(More)”.

How Singapore sends daily WhatsApp updates on coronavirus


Medha Basu at GovInsider: “How do you communicate with citizens as a pandemic stirs fear and spreads false news? Singapore has trialled WhatsApp to give daily updates on the Covid-19 virus.

The World Health Organisation’s chief praised Singapore’s reaction to the outbreak. “We are very impressed with the efforts they are making to find every case, follow up with contacts, and stop transmission,” Tedros Adhanom Ghebreyesus said.

Since late January, the government has been providing two to three daily updates on cases via the messaging app. “Fake news is typically propagated through WhatsApp, so messaging with the same interface can help stem this flow,” Sarah Espaldon, Operations Marketing Manager from Singapore’s Open Government Products unit, told GovInsider….

The niche system became newly vital as Covid-19 arrived, with fake news and fear following quickly in a nation that still remembers the fatal SARS outbreak of 2003. The tech had to be upgraded to ensure it could cope with new demand, and get information out rapidly before misinformation could sow discord.

The Open Government Products team used three tools to adapt WhatsApp and create a rapid information-sharing system.

1. AI Translation

Singapore has four official languages – Chinese, English, Malay and Tamil. The government used an AI tool to rapidly translate the material from English, so that every community receives the information as quickly as possible.

An algorithm produces the initial draft of the translation, which is then vetted by civil servants before being sent out on WhatsApp. The AI was trained using text from local government communications, so it can translate references and the names of Singapore government schemes. This project was built by the Ministry of Communications and Information and the Agency for Science, Technology and Research, in collaboration with GovTech.
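The workflow described here follows a machine-draft-plus-human-review pattern. The sketch below is illustrative only, not the actual MCI, A*STAR or GovTech pipeline; the function names, data layout and example text are assumptions.

```python
# Hypothetical sketch: a translation model drafts each official language,
# and each draft waits in a review state until a civil servant approves it
# for the WhatsApp broadcast.
LANGUAGES = ["Chinese", "Malay", "Tamil"]  # English is the source language


def machine_translate(text: str, language: str) -> str:
    # Placeholder for the trained translation model described in the article
    return f"[{language} draft] {text}"


def prepare_update(english_text: str) -> dict:
    """Return per-language drafts awaiting human vetting."""
    return {
        lang: {"draft": machine_translate(english_text, lang), "approved": False}
        for lang in LANGUAGES
    }


def approve(update: dict, language: str, reviewer: str) -> None:
    """Mark a draft as vetted by a named civil servant."""
    update[language]["approved"] = True
    update[language]["reviewer"] = reviewer


update = prepare_update("Five new confirmed cases of COVID-19 reported today.")
approve(update, "Malay", reviewer="duty officer")
ready = [lang for lang, entry in update.items() if entry["approved"]]
print("Ready to send:", ready)
```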

2. Make it easy to sign up

People specify their desired language through an easy sign-up form. Singapore used FormSG, a tool that allows officials to launch a new mailing list in 30 minutes and connect to other government systems. A government-built form ensures that data is end-to-end encrypted and connected to the government cloud.

3. Fast updates

The updates were initially too slow in reaching people. It took four hours to add new subscribers to the recipient list and the system could send only 10 messages per second. “With 500,000 subscribers, it would take almost 14 hours for the last person to get the message,” Espaldon says….(More)”.
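The bottleneck Espaldon describes is easy to verify with back-of-the-envelope arithmetic using the figures quoted in the article:

```python
# Rough check of the original broadcast bottleneck (figures from the article;
# the eventual throughput fix itself is not detailed in the excerpt).
subscribers = 500_000
messages_per_second = 10

seconds_to_reach_all = subscribers / messages_per_second
hours = seconds_to_reach_all / 3600
print(f"At {messages_per_second} msg/s, the last subscriber waits ~{hours:.1f} hours")
# -> roughly 13.9 hours, i.e. the "almost 14 hours" Espaldon describes
```

Any fix has to raise that per-second rate or parallelise the send; the point is that the original throughput made timely updates impossible at this scale.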

Beyond Randomized Controlled Trials


Iqbal Dhaliwal, John Floretta & Sam Friedlander at SSIR: “…In its post-Nobel phase, one of J-PAL’s priorities is to unleash the treasure troves of big digital data in the hands of governments, nonprofits, and private firms. Primary data collection is by far the most time-, money-, and labor-intensive component of the vast majority of experiments that evaluate social policies. Randomized evaluations have been constrained by simple numbers: Some questions are just too big or expensive to answer. Leveraging administrative data has the potential to dramatically expand the types of questions we can ask and the experiments we can run, as well as implement quicker, less expensive, larger, and more reliable RCTs. This is an invaluable opportunity to scale up evidence-informed policymaking massively without dramatically increasing evaluation budgets.
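To make concrete what running an RCT on administrative records involves at its simplest, here is a minimal sketch using simulated data: units in an assumed administrative dataset are randomly assigned to treatment or control, and mean outcomes are compared. It is illustrative only, not J-PAL's code or any specific IDEA project.

```python
# Toy two-arm RCT on simulated "administrative" records: random assignment
# followed by a difference in mean outcomes between treatment and control.
import random
import statistics

random.seed(0)

# Stand-in for administrative records: one outcome value per participant
records = [{"id": i, "outcome": random.gauss(50, 10)} for i in range(1000)]

# Random assignment to treatment or control
for record in records:
    record["treated"] = random.random() < 0.5

treated = [r["outcome"] for r in records if r["treated"]]
control = [r["outcome"] for r in records if not r["treated"]]

# With purely simulated outcomes the estimated effect should hover near zero
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated treatment effect: {effect:.2f}")
```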

Although administrative data hasn’t always been of the highest quality, recent advances have significantly increased the reliability and accuracy of GPS coordinates, biometrics, and digital methods of collection. But despite good intentions, many implementers—governments, businesses, and big NGOs—aren’t currently using the data they already collect on program participants and outcomes to improve anti-poverty programs and policies. This may be because they aren’t aware of its potential, don’t have the in-house technical capacity necessary to create use and privacy guidelines or analyze the data, or don’t have established partnerships with researchers who can collaborate to design innovative programs and run rigorous experiments to determine which are the most impactful. 

At J-PAL, we are leveraging this opportunity through a new global research initiative we are calling the “Innovations in Data and Experiments for Action” Initiative (IDEA). IDEA supports implementers to make their administrative data accessible, analyze it to improve decision-making, and partner with researchers in using this data to design innovative programs, evaluate impact through RCTs, and scale up successful ideas. IDEA will also build the capacity of governments and NGOs to conduct these types of activities with their own data in the future….(More)”.

Open peer-review platform for COVID-19 preprints


Michael A. Johansson & Daniela Saderi in Nature: “The public call for rapid sharing of research data relevant to the COVID-19 outbreak (see go.nature.com/2t1lyp6) is driving an unprecedented surge in (unrefereed) preprints. To help pinpoint the most important research, we have launched Outbreak Science Rapid PREreview, with support from the London-based charity Wellcome. This is an open-source platform for rapid review of preprints related to emerging outbreaks (see https://outbreaksci.prereview.org).

These reviews comprise responses to short, yes-or-no questions, with optional commenting. The questions are designed to capture structured, high-level input on the importance and quality of the research, which can be aggregated across several reviews. Scientists who have ORCID IDs can submit their reviews as they read the preprints (currently limited to the medRxiv, bioRxiv and arXiv repositories). The reviews are open and can be submitted anonymously.
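A minimal sketch of how such structured yes-or-no responses could be aggregated across several rapid reviews of one preprint is shown below; the question wording and data layout are assumptions for illustration, not the platform's actual schema.

```python
# Hypothetical aggregation of structured yes/no answers across rapid reviews.
from collections import Counter

QUESTIONS = [
    "Are the findings novel?",
    "Are the methods sound?",
    "Is the data available?",
]

reviews = [
    {"Are the findings novel?": "yes", "Are the methods sound?": "yes",
     "Is the data available?": "no"},
    {"Are the findings novel?": "yes", "Are the methods sound?": "no",
     "Is the data available?": "no"},
]


def aggregate(reviews):
    """Count yes/no answers per question across all submitted reviews."""
    return {q: Counter(r.get(q) for r in reviews if q in r) for q in QUESTIONS}


for question, tally in aggregate(reviews).items():
    print(f"{question}: {dict(tally)}")
```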

Outbreaks of pathogens such as the SARS-CoV-2 coronavirus that is responsible for COVID-19 move fast and can affect anyone. Research to support outbreak response needs to be fast and open, too, as do mechanisms to review outbreak-related research. Help other scientists, as well as the media, journals and public-health officials, to find the most important COVID-19 preprints now….(More)”.

Invest 5% of research funds in ensuring data are reusable


Barend Mons at Nature: “It is irresponsible to support research but not data stewardship…

Many of the world’s hardest problems can be tackled only with data-intensive, computer-assisted research. And I’d speculate that the vast majority of research data are never published. Huge sums of taxpayer funds go to waste because such data cannot be reused. Policies for data reuse are falling into place, but fixing the situation will require more resources than the scientific community has so far been willing to commit.

In 2013, I was part of a group of Dutch experts from many disciplines that called on our national science funder to support data stewardship. Seven years later, policies that I helped to draft are starting to be put into practice. These require data created by machines and humans to meet the FAIR principles (that is, they are findable, accessible, interoperable and reusable). I now direct an international Global Open FAIR office tasked with helping communities to implement the guidelines, and I am convinced that doing so will require a large cadre of professionals, about one for every 20 researchers.

Even when data are shared, the metadata, expertise, technologies and infrastructure necessary for reuse are lacking. Most published data sets are scattered into ‘supplemental files’ that are often impossible for machines or even humans to find. These and other sloppy data practices keep researchers from building on each other’s work. In cases of disease outbreaks, for instance, this might even cost lives….(More)”.
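For illustration, a FAIR-oriented, machine-readable metadata record might look something like the sketch below; the field names and values are assumptions for this example rather than any particular metadata standard.

```python
# Hypothetical metadata record in the spirit of the FAIR principles:
# findable (persistent identifier), accessible (resolvable URL),
# interoperable (open format), reusable (explicit licence and variable units).
import json

dataset_metadata = {
    "identifier": "https://doi.org/10.xxxx/example",          # findable
    "title": "Example outbreak case counts",
    "access_url": "https://repository.example/datasets/123",  # accessible
    "format": "text/csv",                                      # interoperable
    "license": "CC-BY-4.0",                                    # reusable
    "variables": [{"name": "cases", "unit": "count per day"}],
}

print(json.dumps(dataset_metadata, indent=2))
```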

Facial Recognition Software requires Checks and Balances


David Eaves and Naeha Rashid in Policy Options: “A few weeks ago, members of the Nexus traveller identification program were notified that Canadian Border Services is upgrading its automated system, from iris scanners to facial recognition technology. This is meant to simplify identification and increase efficiency without compromising security. But it also raises profound questions concerning how we discuss and develop public policies around such technology – questions that may not be receiving sufficiently open debate in the rush toward promised greater security.

Analogous to the U.S. Customs and Border Protection (CBP) program Global Entry, Nexus is a joint Canada-US border control system designed for low-risk, pre-approved travellers. Nexus does provide a public good, and there are valid reasons to improve surveillance at airports. Even before 9/11, border surveillance was an accepted annoyance and since then, checkpoint operations have become more vigilant and complex in response to the public demand for safety.

Nexus is one of the first North American government-sponsored services to adopt facial recognition, and as such it could be a pilot program that other services will follow. Left unchecked, the technology will likely become ubiquitous at North American border crossings within the next decade, and it will probably be adopted by governments to solve domestic policy challenges.

Facial recognition software is imperfect and has documented bias, but it will continue to improve and become superior to humans in identifying individuals. Given this, questions arise such as, what policies guide the use of this technology? What policies should inform future government use? In our headlong rush toward enhanced security, we risk replicating the justifications used by the private sector in an attempt to balance effectiveness, efficiency and privacy.

One key question involves citizens’ capacity to consent. Previously, Nexus members submitted to fingerprint and retinal scans – biometric markers that are relatively unique and enable government to verify identity at the border. Facial recognition technology uses visual data and seeks, analyzes, and stores identifying facial information in a database, which is then used to compare with new images and video….(More)”.
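The comparison step described above, in which identifying facial information is stored and new images are matched against it, is typically implemented by comparing numeric embeddings. The toy sketch below illustrates that general idea only; it is not the Nexus or CBP system, and the vectors, threshold and names are invented for illustration.

```python
# Toy face-matching step: each enrolled face is stored as a numeric embedding,
# and a new image's embedding is compared against the stored ones.
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Enrolled travellers: identifier -> face embedding (toy 3-d vectors here;
# real systems use vectors with hundreds of dimensions from a trained model)
enrolled = {
    "traveller-001": [0.9, 0.1, 0.3],
    "traveller-002": [0.2, 0.8, 0.5],
}


def identify(probe_embedding, threshold=0.95):
    """Return the best-matching enrolled identity above the threshold, if any."""
    best_id, best_score = None, 0.0
    for traveller_id, embedding in enrolled.items():
        score = cosine_similarity(probe_embedding, embedding)
        if score > best_score:
            best_id, best_score = traveller_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)


print(identify([0.88, 0.12, 0.31]))
```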

How big data is dividing the public in China’s coronavirus fight – green, yellow, red


Article by Viola Zhou: “On Valentine’s Day, Matt Ma, a 36-year-old lawyer in the eastern Chinese province of Zhejiang, discovered he had been coded “red”. The colour, displayed in a payment app on his smartphone, indicated that he needed to be quarantined at home even though he had no symptoms of the dangerous coronavirus.

Without a green light from the system, Ma could not travel from his ancestral hometown of Lishui to his new home city of Hangzhou, which is now surrounded by checkpoints set up to contain the epidemic.

Ma is one of the millions of people whose movements are being choreographed by the government through software that feeds on troves of data and issues orders that effectively dictate whether they must stay in or can go to work. Their experience represents a slice of China’s desperate attempt to stop the coronavirus by using a mixed bag of cutting-edge technologies and old-fashioned surveillance. It was also a rare real-world test of the use of technology on a large scale to halt the spread of communicable diseases.
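The scoring rules behind the colour codes are opaque and have not been published, so the sketch below is a purely hypothetical toy showing the kind of rule-based logic such a system might apply; the inputs and thresholds are invented for illustration.

```python
# Purely illustrative toy: assign green/yellow/red from a few travel and
# contact flags (the real system's criteria are not public).
def health_code(visited_outbreak_area: bool, close_contact: bool,
                days_since_return: int) -> str:
    if close_contact:
        return "red"      # quarantine required
    if visited_outbreak_area and days_since_return < 14:
        return "yellow"   # restricted movement
    return "green"        # free to travel


print(health_code(visited_outbreak_area=True, close_contact=False,
                  days_since_return=5))
```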

“This kind of massive use of technology is unprecedented,” said Christos Lynteris, a medical anthropologist at the University of St Andrews who has studied epidemics in China.

But Hangzhou’s experiment has also revealed the pitfalls of applying opaque formulas to a large population.

In the city’s case, there are reports of people being marked incorrectly, falling victim to an algorithm that is, by the government’s own admission, not perfect….(More)”.

Who will benefit most from the data economy?


Special Report by The Economist: “The data economy is a work in progress. Its economics still have to be worked out; its infrastructure and its businesses need to be fully built; geopolitical arrangements must be found. But there is one final major tension: between the wealth the data economy will create and how it will be distributed. The data economy—or the “second economy”, as Brian Arthur of the Santa Fe Institute terms it—will make the world a more productive place no matter what, he predicts. But who gets what and how is less clear. “We will move from an economy where the main challenge is to produce more and more efficiently,” says Mr Arthur, “to one where distribution of the wealth produced becomes the biggest issue.”

The data economy as it exists today is already very unequal. It is dominated by a few big platforms. In the most recent quarter, Amazon, Apple, Alphabet, Microsoft and Facebook made a combined profit of $55bn, more than the next five most valuable American tech firms over the past 12 months. This corporate inequality is largely the result of network effects—economic forces that mean size begets size. A firm that can collect a lot of data, for instance, can make better use of artificial intelligence and attract more users, who in turn supply more data. Such firms can also recruit the best data scientists and have the cash to buy the best AI startups.

It is also becoming clear that, as the data economy expands, these sorts of dynamics will increasingly apply to non-tech companies and even countries. In many sectors, the race to become a dominant data platform is on. This is the mission of Compass, a startup, in residential property. It is one goal of Tesla in self-driving cars. And Apple and Google hope to repeat the trick in health care. As for countries, America and China account for 90% of the market capitalisation of the world’s 70 largest platforms (see chart), Africa and Latin America for just 1%. Economies on both continents risk “becoming mere providers of raw data…while having to pay for the digital intelligence produced,” the United Nations Conference on Trade and Development recently warned.

Yet it is the skewed distribution of income between capital and labour that may turn out to be the most pressing problem of the data economy. As it grows, more labour will migrate into the mirror worlds, just as other economic activity will. It is not only that people will do more digitally, but they will perform actual “data work”: generating the digital information needed to train and improve AI services. This can mean simply moving about online and providing feedback, as most people already do. But it will increasingly include more active tasks, such as labelling pictures, driving data-gathering vehicles and perhaps, one day, putting one’s digital twin through its paces. This is the reason why some say AI should actually be called “collective intelligence”: it takes in a lot of human input—something big tech firms hate to admit….(More)”.