How Ireland’s Citizens’ Assembly helped climate action


Blog post by Frances Foley: “…In July 2016, the new government – led by Fine Gael, backed by independents – put forward a bill to establish a national-level Citizens’ Assembly to look at the biggest issues of the day. These included the challenges of an ageing population; the role of fixed-term parliaments; referendums; the 8th Amendment on abortion; and climate change.

Citizens from every region, every socio-economic background, each ethnicity and age group and from right across the spectrum of political opinion convened over the course of two weekends between September and November 2017. The issue seemed daunting in scale and complexity, but the participants had been well-briefed and had at their disposal a line-up of experts, scientists, advocates and other witnesses who would help them make sense of the material. By the end, citizens had produced a radical series of recommendations which went far beyond what any major Irish party was promising, surprising even the initiators of the process….

As expected, the passage of some of the proposals through the Irish party gauntlet has not been smooth. The 8-hour-long debate on increasing the carbon tax, for example, suggests that mixing deliberative and representative democracy still produces conflict and confusion. It is certainly clear that parliaments have to adapt and develop if citizens’ assemblies are ever to find their place in our modern democracies.

But the most encouraging move has been the simple acknowledgement that many of the barriers to implementation lie at the level of governance. The new Climate Action Commission, with a mandate to monitor climate action across government, should act as the governmental guarantor of the vision from the Citizens’ Assembly. Citizens’ proposals have themselves stimulated a review of internal government processes to stop their demands getting mired in party wrangling and government bureaucracy. By their very nature, successful citizens’ assemblies can also provide an alternative vision of how decisions can be made – and in so doing shame political parties and parliaments into improving their decision-making practices.

Does the Irish Citizens’ Assembly constitute a case of rapid transition? In terms of its breadth, scale and vision, the experiment is impressive. But in terms of speed, deliberative processes are often criticised for being slow, unwieldy and costly. The response to this should be to ask what we’re getting: whilst an Assembly is not the most rapid vehicle for change – most serious processes take several months, if not a couple of years – the results, both in specific outcomes and in cultural or political shifts, can be astounding….

With respect to climate change, this harmony between ends and means is particularly significant. The climate crisis is the most severe collective decision-making challenge of our times, one that demands courage, but also careful thought….(More)”.

AI & Global Governance: Robots Will Not Only Wage Future Wars but also Future Peace


Daanish Masood & Martin Waehlisch at the United Nations University: “At the United Nations, we have been exploring completely different scenarios for AI: its potential to be used for the noble purposes of peace and security. This could revolutionize how we prevent and resolve conflicts globally.

Two of the most promising areas are Machine Learning and Natural Language Processing. Machine Learning involves computer algorithms detecting patterns from data to learn how to make predictions and recommendations. Natural Language Processing involves computers learning to understand human languages.

At the UN Secretariat, our chief concern is with how these emerging technologies can be deployed for the good of humanity to de-escalate violence and increase international stability.

This endeavor has admirable precedent. During the Cold War, computer scientists used multilayered simulations to predict the scale and potential outcome of the arms race between the East and the West.

Since then, governments and international agencies have increasingly used computational models and advanced Machine Learning to try to understand recurrent conflict patterns and forecast moments of state fragility.

But two things have transformed the scope for progress in this field.

The first is the sheer volume of data now available from what people say and do online. The second is the game-changing growth in computational capacity that allows us to crunch unprecedented, inconceivable quantities of data with relative speed and ease.

So how can this help the United Nations build peace? Three ways come to mind.

Firstly, overcoming cultural and language barriers. By teaching computers to understand human language and the nuances of dialects, not only can we better link up what people write on social media to local contexts of conflict, we can also more methodically follow what people say on radio and TV. As part of the UN’s early warning efforts, this can help us detect hate speech in a place where the potential for conflict is high. This is crucial because the UN often works in countries where internet coverage is low, and where the spoken languages may not be well understood by many of its international staff.
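To make this concrete, here is a minimal, illustrative sketch of the kind of text classifier that could flag potentially inflammatory phrases in transcribed radio or social-media content. The tiny training set, the labels and the example phrases are all invented for illustration; they are not UN data, and a real early-warning system would need far richer training material and careful local validation.

```python
# Illustrative sketch only: a minimal text classifier of the kind that could flag
# potentially inflammatory phrases in monitored, transcribed content.
# The training phrases and labels below are invented placeholders, not UN data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "we welcome the dialogue between the communities",
    "the market reopened peacefully this morning",
    "those people are vermin and must be driven out",   # hypothetical hate speech
    "take up arms against our neighbours",              # hypothetical incitement
]
train_labels = [0, 0, 1, 1]  # 0 = benign, 1 = potentially inflammatory

# TF-IDF word features feed a simple logistic-regression classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_text = ["they do not belong here and should be removed"]
print(model.predict_proba(new_text))  # [P(benign), P(inflammatory)] for the new phrase
```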

Natural Language Processing algorithms can help to track and improve understanding of local debates, which might well be blind spots for the international community. If we combine such methods with Machine Learning chatbots, the UN could conduct large-scale digital focus groups with thousands in real-time, enabling different demographic segments in a country to voice their views on, say, a proposed peace deal – instantly testing public support, and indicating the chances of sustainability.

Secondly, anticipating the deeper drivers of conflict. We could combine new imaging techniques – whether satellites or drones – with automation. For instance, many parts of the world are experiencing severe groundwater withdrawal and water aquifer depletion. Water scarcity, in turn, drives conflicts and undermines stability in post-conflict environments, where violence around water access becomes more likely, along with large movements of people leaving newly arid areas.

One of the best predictors of water depletion is land subsidence or sinking, which can be measured by satellite and drone imagery. By combining these imaging techniques with Machine Learning, the UN can work in partnership with governments and local communities to anticipate future water conflicts and begin working proactively to reduce their likelihood.
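As a rough sketch of that idea (not an operational model), the example below learns a mapping from remotely sensed land-subsidence rates and rainfall to observed groundwater decline, then flags a hypothetical district where the projected decline looks severe. All numbers and the assumed relationship are synthetic stand-ins.

```python
# Minimal sketch: learn a mapping from remotely sensed subsidence (plus rainfall)
# to groundwater decline, then flag areas of likely future water stress.
# All values below are synthetic; this is not a calibrated hydrological model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
subsidence_mm_yr = rng.uniform(0, 60, 200)      # e.g. from satellite/drone interferometry
rainfall_mm_yr = rng.uniform(100, 900, 200)     # e.g. from meteorological records
# Assumed relationship for illustration: more subsidence and less rain -> faster decline
groundwater_drop_m = 0.05 * subsidence_mm_yr - 0.002 * rainfall_mm_yr + rng.normal(0, 0.3, 200)

X = np.column_stack([subsidence_mm_yr, rainfall_mm_yr])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, groundwater_drop_m)

# Score a hypothetical district and flag it if the projected decline exceeds a threshold
district = np.array([[45.0, 250.0]])            # [subsidence mm/yr, rainfall mm/yr]
projected_drop = model.predict(district)[0]
print(f"projected annual groundwater decline: {projected_drop:.2f} m"
      + (" -> flag for early engagement" if projected_drop > 1.0 else ""))
```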

Thirdly, advancing decision making. In the work of peace and security, it is surprising how many consequential decisions are still made solely on the basis of intuition.

Yet complex decisions often need to navigate conflicting goals and undiscovered options, against a landscape of limited information and political preference. This is where we can use Deep Learning – where a network absorbs huge amounts of public data and tests it against the real-world examples on which it is trained, combined with probabilistic modeling. This mathematical approach can help us to generate models of our uncertain, dynamic world with limited data.
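As a toy illustration of probabilistic modelling under limited data (not the deep-learning system described above), the sketch below performs a simple Beta-Binomial update of the belief that a particular provision of an agreement will hold, using invented counts that stand in for outcomes of comparable past agreements.

```python
# Toy illustration of probabilistic reasoning from scarce evidence: a Beta-Binomial
# update of the belief that a hypothetical agreement provision will hold.
# The prior and the observed counts are invented for illustration.
import numpy as np
from scipy import stats

prior_a, prior_b = 2, 2                  # weakly informative prior belief
observed_held, observed_failed = 7, 3    # hypothetical outcomes from similar provisions

posterior = stats.beta(prior_a + observed_held, prior_b + observed_failed)
samples = posterior.rvs(10_000, random_state=0)

print(f"posterior mean probability the provision holds: {samples.mean():.2f}")
print(f"90% credible interval: {np.percentile(samples, [5, 95]).round(2)}")
```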

With better data, we can eventually make better predictions to guide complex decisions. Future senior peace envoys charged with mediating a conflict would benefit from such advances to stress-test elements of a peace agreement. Of course, human decision-making will remain crucial, but it would be informed by more robust, evidence-driven analytical tools….(More)”.

Introducing the Contractual Wheel of Data Collaboration


Blog by Andrew Young and Stefaan Verhulst: “Earlier this year we launched the Contracts for Data Collaboration (C4DC) initiative — an open collaborative with charter members from The GovLab, UN SDSN Thematic Research Network on Data and Statistics (TReNDS), University of Washington and the World Economic Forum. C4DC seeks to address the inefficiencies of developing contractual agreements for public-private data collaboration by developing and making available a shared repository of relevant contractual clauses taken from existing legal agreements, informing and guiding those seeking to establish a data collaborative. Today TReNDS published “Partnerships Founded on Trust,” a brief capturing some initial findings from the C4DC initiative.

The Contractual Wheel of Data Collaboration [beta] — Stefaan G. Verhulst and Andrew Young, The GovLab

As part of the C4DC effort, and to support Data Stewards in the private sector and decision-makers in the public and civil sectors seeking to establish Data Collaboratives, The GovLab developed the Contractual Wheel of Data Collaboration [beta]. The Wheel seeks to capture key elements involved in data collaboration while demystifying contracts and moving beyond the type of legalese that can create confusion and barriers to experimentation.

The Wheel was developed based on an assessment of existing legal agreements, engagement with The GovLab-facilitated Data Stewards Network, and analysis of the key elements of our Data Collaboratives Methodology. It features 22 legal considerations organized across 6 operational categories that can act as a checklist for the development of a legal agreement between parties participating in a Data Collaborative:…(More)”.

Selling civic engagement: A unique role for the private sector?


Rebecca Winthrop at Brookings: “Much has been written on the worrisome trends in Americans’ faith and participation in our nation’s democracy. According to the World Values Survey, almost 20 percent of millennials in the U.S. think that military rule or an authoritarian dictator is a “fairly good” form of government, and only 29 percent believe that living in a country that is governed democratically is “absolutely important.” In the last year, trust in American democratic institutions has dropped—only 53 percent of Americans view American democracy positively. This decline in faith and participation in our democracy has been ongoing for some time, as noted in the 2005 collection of essays, “Democracy At Risk: How Political Choices Undermine Citizen Participation, and What We Can Do About It.” The essays chart the “erosion of the activities and capacities of citizenship” from voting to broad civic engagement over the past several decades.

While civil society and government have been the actors most commonly addressing this worrisome trend, is there also a constructive role for the private sector to play? After all, compared to other options like military or authoritarian rule, a functioning democracy is much more likely to provide the conditions for free enterprise that business desires. One only has to look to the current events in Venezuela for a quick reminder of this.

Many companies do engage in a range of activities that broadly support civic engagement, from dedicating corporate social responsibility (CSR) dollars to civically-minded community activities to supporting employee volunteerism. These are worthy activities and should certainly continue, but given the crisis of faith in the foundations of our democratic process, the private sector could play a much bigger role in helping support a movement for renewed understanding of and participation in our political process. Many of the private sector’s most powerful tools for doing this lie not inside companies’ CSR portfolios but in their unique expertise in selling things. Every day companies leverage their expertise in influence—from branding to market-segmentation—to get Americans to use their products and services. What if this expertise were harnessed toward promoting civic understanding and engagement?

Companies could play a particularly useful role by tapping new resources to amplify existing good work and build increasing interest in civic engagement. Two ways of doing this could include the below….(More)”

Finding Wisdom in Politically Polarized Crowds


Eamon Duede at Nature Research: “We were seeing that the consumption of ideas seemed deeply related to political alignment, and because our group (Knowledge Lab) is concerned with understanding the social dynamics involved in the production of ideas, we began wondering whether and to what extent the political alignment of individuals contributes to a group’s ability to produce knowledge. A Wikipedia article is full of smuggled content and worked into a narrative by a diverse team of editors. Because those articles constitute knowledge, we were curious to know whether political polarization within those teams had an effect on the quality of that production. So, we decided to braid both strands of research together and look at the way in which individual political alignments and the polarization of the teams they form affect the quality of the work that is produced collaboratively on Wikipedia.

To answer this question, we turned not to the articles themselves, but to the immense history of articles on Wikipedia. Every edit to every article, no matter how insignificant, is documented and saved in Wikipedia’s astonishingly massive archives. And every edit to every article, no matter how insignificant, is evaluated for its relevance or validity by the vast community of editors, both robotic and human. Remarkable teamwork has gone into producing the encyclopedia. Some people edit randomly, simply cleaning typos, adding citations, or contributing graffiti and vandalism (I’ve experimented with this, and it gets painted over very quickly, no matter where you put it). Yet, many people are genuinely purposeful in their work, and contribute specifically to topics on which they have both interest and knowledge. They tend and grow a handful of articles or a few broad topics like gardeners. We walked through the histories of these gardens, looking back at who made contributions here and there, how much they contributed, and where. We thought that editors who make frequent contributions to pages associated with American liberalism would hold left-leaning opinions, and that frequent contributors to pages associated with conservatism would hold opinions on the right. This was a controversial hypothesis, and many in the Wikipedia community felt that perhaps the opposite would be true, with liberals correcting conservative pages and conservatives kindly returning the favor – like weeding or applying pesticide. But a survey we conducted of active Wikipedia editors found that a function over the relative number of bits they contributed to liberal versus conservative pages predicted more than a third of the probability that they identified as such and voted accordingly.

Following this validation, we assigned a political alignment score to hundreds of thousands of editors by looking at where they make contributions, and then examined the polarization within teams of editors that produced hundreds of thousands of Wikipedia articles in the broad topic areas of politics, social issues, and science. We found that when most members of a team have the same political alignment, whether conservative, liberal, or “independent”, the quality of the Wikipedia pages they produce is not as high as that of teams with polarized compositions of editors (Shi et al. 2019).
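The sketch below shows one simple way such a measure could be computed; the study’s exact formulation may differ, and the byte counts here are hypothetical. Each editor receives an alignment score from the balance of their contributions to liberal- versus conservative-aligned pages, and the team’s polarization is summarised by the spread of those scores.

```python
# Hypothetical illustration of an edit-based alignment score and a simple
# dispersion-based polarization measure for one article's editing team.
import numpy as np

# Invented byte counts contributed by five editors of one article
bits_to_liberal_pages = np.array([1200, 300, 4500, 50, 900])
bits_to_conservative_pages = np.array([100, 2800, 400, 3000, 850])

# Alignment in [-1, 1]: -1 = edits only conservative pages, +1 = only liberal pages
alignment = (bits_to_liberal_pages - bits_to_conservative_pages) / (
    bits_to_liberal_pages + bits_to_conservative_pages
)

team_polarization = alignment.std()   # one simple proxy: spread of alignments in the team
print(alignment.round(2), f"team polarization ~ {team_polarization:.2f}")
```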

The United States Senate is increasingly polarized, but largely balanced in its polarization. If the Senate were trying to write a Wikipedia article, would it produce a high-quality article? If the senators were doing so on Wikipedia, following norms of civility and balance inscribed within Wikipedia’s policies and guidelines, committed to the production of knowledge rather than self-promotion, then the answer is probably “yes”. That is a surprising finding. We think that the reason for this is that the policies of Wikipedia work to suppress the kind of rhetoric and sophistry common in everyday discourse, not to mention toxic language and name-calling. Wikipedia’s policies are intolerant of discussion that could distort balanced consideration of the edit and topic under consideration, and, given that these policies shut down discourse that could bias proposed edits, teams with polarized viewpoints have to spend significantly more time discussing and debating the content that is up for consideration for inclusion in an article. These diverse viewpoints seem to bring out points and arguments between team members that sharpen and refine the quality of the content they can collectively agree to. With assumptions and norms of respect and civility, political polarization can be powerful and generative….(More)”

Data Collaboratives as an enabling infrastructure for AI for Good


Blog Post by Stefaan G. Verhulst: “…The value of data collaboratives stems from the fact that the supply of and demand for data are generally widely dispersed — spread across government, the private sector, and civil society — and often poorly matched. This failure (a form of “market failure”) results in tremendous inefficiencies and lost potential. Much data that is released is never used. And much data that is actually needed is never made accessible to those who could productively put it to use.

Data collaboratives, when designed responsibly, are the key to addressing this shortcoming. They draw together otherwise siloed data and a dispersed range of expertise, helping match supply and demand, and ensuring that the correct institutions and individuals are using and analyzing data in ways that maximize the possibility of new, innovative social solutions.

Roadmap for Data Collaboratives

Despite their clear potential, the evidence base for data collaboratives is thin. There is an absence of a systematic, structured framework that can be replicated across projects and geographies, and a lack of clear understanding about what works, what doesn’t, and how best to maximize the potential of data collaboratives.

At the GovLab, we’ve been working to address these shortcomings. For emerging economies considering the use of data collaboratives, whether in pursuit of Artificial Intelligence or other solutions, we present six steps that can be considered in order to create data collaboratives that are more systematic, sustainable, and responsible.

The need for making Data Collaboratives Systematic, Sustainable and Responsible
  • Increase Evidence and Awareness
  • Increase Readiness and Capacity
  • Address Data Supply and Demand Inefficiencies and Uncertainties
  • Establish a New “Data Stewards” Function
  • Develop and Strengthen Policies and Governance Practices for Data Collaboration

Safeguards for human studies can’t cope with big data


Nathaniel Raymond at Nature: “One of the primary documents aiming to protect human research participants was published in the US Federal Register 40 years ago this week. The Belmont Report was commissioned by Congress in the wake of the notorious Tuskegee syphilis study, in which researchers withheld treatment from African American men for years and observed how the disease caused blindness, heart disease, dementia and, in some cases, death.

The Belmont Report lays out core principles now generally required for human research to be considered ethical. Although technically governing only US federally supported research, its influence reverberates across academia and industry globally. Before academics with US government funding can begin research involving humans, their institutional review boards (IRBs) must determine that the studies comply with regulation largely derived from a document that was written more than a decade before the World Wide Web and nearly a quarter of a century before Facebook.

It is past time for a Belmont 2.0. We should not be asking those tasked with protecting human participants to single-handedly identify and contend with the implications of the digital revolution. Technological progress, including machine learning, data analytics and artificial intelligence, has altered the potential risks of research in ways that the authors of the first Belmont report could not have predicted. For example, Muslim cab drivers can be identified from patterns indicating that they stop to pray; the Ugandan government can try to identify gay men from their social-media habits; and researchers can monitor and influence individuals’ behaviour online without enrolling them in a study.

Consider the 2014 Facebook ‘emotional contagion study’, which manipulated users’ exposure to emotional content to evaluate effects on mood. That project, a collaboration with academic researchers, led the US Department of Health and Human Services to launch a long rule-making process that tweaked some regulations governing IRBs.

A broader fix is needed. Right now, data science overlooks risks to human participants by default….(More)”.

Synthetic data: innovation for public good


Blog Post by Catrin Cheung: “What is synthetic data, and how can it be used for public good? ….Synthetic data are artificially generated data that have the look and structure of real data, but do not contain any information on individuals. They also contain more general characteristics that are used to find patterns in the data.

They are modelled on real data, but designed in a way which safeguards the legal, ethical and confidentiality requirements of the original data. Given their resemblance to the original data, synthetic data are useful in a range of situations, for example when data are sensitive or missing. They are used widely as teaching materials, to test code or mathematical models, or as training data for machine learning models….
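A minimal sketch of the underlying idea, using invented records: fit a simple statistical model to the original numeric columns and draw entirely new rows from it, so that no synthetic row corresponds to a real individual. Real synthesis tools, including those mentioned below, are far more sophisticated.

```python
# Minimal sketch of synthetic data generation: model the joint distribution of the
# numeric columns and sample new records from it. The "original" data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
original = pd.DataFrame({
    "age": rng.normal(40, 12, 1000).clip(16, 90),
    "income": rng.lognormal(10, 0.5, 1000),
})

# Fit a multivariate normal to the numeric columns, then sample synthetic rows
mean, cov = original.mean().values, np.cov(original.values, rowvar=False)
synthetic = pd.DataFrame(rng.multivariate_normal(mean, cov, size=1000),
                         columns=original.columns)

print(original.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])   # similar structure, no real individuals
```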

There’s currently a wealth of research emerging from the health sector, as the nature of data published is often sensitive. Public Health England have synthesised cancer data which can be freely accessed online. NHS Scotland are making advances in cutting-edge machine learning methods such as Variational Auto Encoders and Generative Adversarial Networks (GANs).

There is growing interest in this area of research, and its influence extends beyond the statistical community. While the Data Science Campus have also used GANs to generate synthetic data in their latest research, the technique’s power is not limited to data generation. GANs can be trained to construct features almost identical to our own across imagery, music, speech and text. In fact, a GAN was used to create a painting of Edmond de Belamy, which sold for $432,500 in 2018!

Within the ONS, a pilot to create synthetic versions of securely held Labour Force Survey data has been carried out using a package in R called “synthpop”. This synthetic dataset can be shared with approved researchers to de-bug codes, prior to analysis of data held in the Secure Research Service….

Although much progress has been made in this field, one challenge that persists is guaranteeing the accuracy of synthetic data. We must ensure that the statistical properties of the synthetic data match those of the original data.

Additional features, such as the presence of non-numerical data, add to this difficult task. For example, if something is listed as “animal” and can take the possible values “dog”, “cat” or “elephant”, it is difficult to convert this information into a format suitable for precise calculations. Furthermore, given that datasets have different characteristics, there is no straightforward solution that can be applied to all types of data….particular focus was also placed on the use of synthetic data in the field of privacy, following from the challenges and opportunities identified by the National Statistician’s Quality Review of privacy and data confidentiality methods published in December 2018….(More)”.
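Two of the checks implied by these challenges can be sketched as follows, with invented values: a Kolmogorov-Smirnov test comparing a numeric column across the real and synthetic data, and one-hot encoding of the “animal” example so that categorical values can enter numeric comparisons.

```python
# Illustrative checks on synthetic data quality, using invented values:
# (1) compare a numeric column's distribution with a Kolmogorov-Smirnov test;
# (2) one-hot encode a categorical column so category shares can be compared.
import pandas as pd
from scipy.stats import ks_2samp

real = pd.DataFrame({"income": [21_000, 34_000, 28_500, 52_000, 41_000],
                     "animal": ["dog", "cat", "dog", "elephant", "cat"]})
synthetic = pd.DataFrame({"income": [22_500, 31_000, 30_000, 49_500, 43_000],
                          "animal": ["dog", "dog", "cat", "elephant", "cat"]})

# A large p-value means no evidence that the two income distributions differ
stat, p_value = ks_2samp(real["income"], synthetic["income"])
print(f"KS statistic {stat:.2f}, p-value {p_value:.2f}")

# One-hot encoding turns "dog"/"cat"/"elephant" into columns of 0s and 1s
print(pd.get_dummies(real["animal"]).mean())        # category shares in the real data
print(pd.get_dummies(synthetic["animal"]).mean())   # ...and in the synthetic data
```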

Digital Data for Development


LinkedIn: “The World Bank Group and LinkedIn share a commitment to helping workers around the world access opportunities that make good use of their talents and skills. The two organizations have come together to identify new ways that data from LinkedIn can help inform policymakers who seek to boost employment and grow their economies.

This site offers data and automated visuals of industries where LinkedIn data is comprehensive enough to provide an emerging picture. The data complements a wealth of official sources and can offer a more real-time view in some areas, particularly for new, rapidly changing digital and technology industries.

The data shared in the first phase of this collaboration focuses on 100+ countries with at least 100,000 LinkedIn members each, distributed across 148 industries and 50,000 skills categories. In the near term, it will help World Bank Group teams and government partners pinpoint ways that developing countries could stimulate growth and expand opportunity, especially as disruptive technologies reshape the economic landscape. As LinkedIn’s membership and digital platforms continue to grow in developing countries, this collaboration will assess the possibility to expand the sectors and countries covered in the next annual update.

This site offers downloadable data, visualizations, and an expanding body of insights and joint research from the World Bank Group and LinkedIn. The data is being made accessible as a public good, though it will be most useful for policy analysts, economists, and researchers….(More)”.

Statistics Estonia to coordinate data governance


Article by Miriam van der Sangen at CBS: “In 2018, Statistics Estonia launched a new strategy for the period 2018-2022. This strategy addresses the organisation’s aim to produce statistics more quickly while minimising the response burden on both businesses and citizens. Another element in the strategy is addressing the high expectations in Estonian society regarding the use of data. ‘We aim to transform Statistics Estonia into a national data agency,’ says Director General Mägi. ‘This means our role as a producer of official statistics will be enlarged by data governance responsibilities in the public sector. Taking on such responsibilities requires a clear vision of the whole public data ecosystem and also agreement to establish data stewards in most public sector institutions.’…

…the Estonian Parliament passed new legislation that effectively expanded the number of official tasks for Statistics Estonia. Mägi elaborates: ‘Most importantly, we shall be responsible for coordinating data governance. The detailed requirements and conditions of data governance will be specified further in the coming period.’ Under the new Act, Statistics Estonia will also have more possibilities to share data with other parties….

Statistics Estonia is fully committed to producing statistics which are based on big data. Mägi explains: ‘At the moment, we are actively working on two big data projects. One project involves the use of smart electricity meters. In this project, we are looking into ways to visualise business and household electricity consumption information. The second project involves web scraping of prices and enterprise characteristics. This project is still in an initial phase, but we can already see that the use of web scraping can improve the efficiency of our production process. We are aiming to extend the web scraping project by also identifying e-commerce and innovation activities of enterprises.’
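For readers unfamiliar with the technique, the sketch below shows only the parsing step of a price web-scraping pipeline. The HTML snippet and class names are invented stand-ins for a fetched retailer page; a production pipeline such as the one described above would also fetch pages, respect robots.txt, handle retries and track changes in site structure.

```python
# Minimal sketch of the parsing step in a price web-scraping pipeline.
# The HTML and class names are invented stand-ins for a fetched retailer page.
from bs4 import BeautifulSoup

html = """
<div class="product"><span class="name">Milk 1l</span><span class="price">0.89</span></div>
<div class="product"><span class="name">Bread</span><span class="price">1.25</span></div>
"""

soup = BeautifulSoup(html, "html.parser")
observations = [
    (item.select_one(".name").get_text(strip=True),
     float(item.select_one(".price").get_text(strip=True)))
    for item in soup.select(".product")
]
print(observations)   # (product, price) pairs that would feed consumer-price statistics
```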

Yet another ambitious goal for Statistics Estonia lies in the field of data science. ‘Similarly to Statistics Netherlands, we established experimental statistics and data mining activities years ago. Last year, we developed a so-called think-tank service, providing insights from data into all aspects of our lives. Think of birth, education, employment, et cetera. Our key clients are the various ministries, municipalities and the private sector. The main aim in the coming years is to speed up service time thanks to visualisations and data lake solutions.’ …(More)”.