The Delusions of Crowds: Why People Go Mad in Groups


Book by William J. Bernstein: “…Inspired by Charles Mackay’s 19th-century classic Memoirs of Extraordinary Popular Delusions and the Madness of Crowds, Bernstein engages with mass delusion with the same curiosity and passion, but armed with the latest scientific research that explains the biological, evolutionary, and psychosocial roots of human irrationality. Bernstein tells the stories of dramatic religious and financial manias in Western society over the last 500 years—from the Anabaptist Madness that afflicted the Low Countries in the 1530s to the dangerous End-Times beliefs that animate ISIS and pervade today’s polarized America; and from the South Sea Bubble to the Enron scandal and dot-com bubble of recent years. Through Bernstein’s supple prose, the participants are as colorful as their motivation, invariably “the desire to improve one’s well-being in this life or the next.”

As revealing about human nature as they are historically significant, Bernstein’s chronicles reveal the huge cost and alarming implications of mass mania: for example, belief in dispensationalist End-Times has over decades profoundly affected U.S. Middle East policy. Bernstein observes that if we can absorb the history and biology of mass delusion, we can recognize it more readily in our own time, and avoid its frequently dire impact….(More)”.

Building on a year of open data: progress and promise


Jennifer Yokoyama at Microsoft: “…The biggest takeaway from our work this past year – and the one thing I hope any reader of this post will take away – is that data collaboration is a spectrum. From the presence (or absence) of data to how open that data is to the trust level of the collaboration participants, these factors may lead to different configurations and different goals, but all of them can result in more open data and innovative insights and discoveries.

Here are a few other lessons we have learned over the last year:

  1. Principles set the foundation for stakeholder collaboration: When we launched the Open Data Campaign, we adopted five principles that guide our contributions and commitments to trusted data collaborations: Open, Usable, Empowering, Secure and Private. These principles underpin our participation, but importantly, organizations can build on them to establish responsible ways to share and collaborate around their data. The London Data Commission, for example, established a set of data sharing principles for public- and private-sector organizations to ensure alignment and to guide the participating groups in how they share data.
  2. There is value in pilot projects: Traditionally, data collaborations with several stakeholders require time – often including a long runway for building the collaboration, plus the time needed to execute on the project and learn from it. However, our experience shows that short-term projects that experiment with and test data collaborations can provide valuable insights. The London Data Commission did exactly that with the launch of four short-term pilot projects. Due to the success of the pilots, the partners are exploring how they can be expanded.
  3. Open data doesn’t require new data: Identifying data to share does not always mean sharing newly created data; sometimes data that was shared narrowly can be shared more broadly, made more accessible or analyzed for a different purpose. Microsoft’s environmental indicator data is an example of data that was already disclosed in certain venues, but was then made available to the Linux Foundation’s OS-Climate Initiative to be consumed through analytics, thereby extending its reach and impact…

To get started, we suggest that emerging data collaborations make use of the wealth of existing resources. When embarking on data collaborations, we leveraged many of the definitions, toolkits and guides from leading organizations in this space. For example, the Open Data Institute’s Data Ethics Canvas is an extremely useful framework for developing ethical guidance. Additionally, The GovLab’s Open Data Policy Lab and Executive Course on Data Stewardship, both supported by Microsoft, highlight important case studies, governance considerations and frameworks for sharing data. If you want to learn more about the exciting work our partners are doing, check out the latest posts from the Open Data Institute and GovLab…(More)”. See also Open Data Policy Lab.

Artificial Intelligence in Migration: Its Positive and Negative Implications


Article by Priya Dialani: “Research and development in new technologies for migration management are rapidly increasing. To cite a few examples: big data has been used to predict population movements in the Mediterranean, AI lie detectors have been trialled at European borders, and, most recently, the government of Canada has used automated decision-making in immigration and refugee applications. Artificial intelligence is thus helping countries manage international migration.

Every corner of the world is encountering an unprecedented number of challenging migration crises. As increasing numbers of people interact with immigration and refugee determination systems, nations are turning to artificial intelligence. In global immigration, AI is helping countries automate the many decisions made almost daily as people seek to cross borders and find new homes.

AI projects in migration management can help predict the next migration crisis with greater accuracy. Artificial intelligence can forecast the movements of migrating people by taking into account different types of data, such as WiFi positioning and Google Trends. This data can help nations and governments prepare more effectively for mass migration. Governments can use AI algorithms to examine huge datasets and look for potential gaps in their reception facilities, such as the absence of appropriate places for people, or for vulnerable unaccompanied children.

Recognizing such gaps can allow governments to adjust their reception conditions and prepare to comply with their legal obligations under international human rights law (IHRL).

AI applications can also help change the lives of asylum seekers and refugees. Machine learning and optimization algorithms are helping to improve refugee integration. Annie MOORE (Matching Outcome Optimization for Refugee Empowerment) is one such project: it matches refugees to communities where they can find resources and an environment suited to their preferences and needs.
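
Purely as an illustrative sketch, this kind of matching can be framed as an assignment problem: score each refugee-community pairing and pick the assignment that maximizes total predicted integration success. The scores, names and solver below are hypothetical placeholders, not Annie MOORE's actual model or data.

```python
# Toy assignment problem in the spirit of matching refugees to host
# communities. All scores and names are made up for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

families = ["family_A", "family_B", "family_C"]
communities = ["city_1", "city_2", "city_3"]

# Hypothetical predicted "integration success" score for each pairing
# (in practice such scores might come from employment and service data).
scores = np.array([
    [0.8, 0.4, 0.3],
    [0.2, 0.9, 0.5],
    [0.6, 0.3, 0.7],
])

# Pick the one-to-one assignment that maximizes the total score.
rows, cols = linear_sum_assignment(scores, maximize=True)
for r, c in zip(rows, cols):
    print(f"{families[r]} -> {communities[c]} (score {scores[r, c]:.1f})")
```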

Asylum seekers and refugees often lack access to lawyers and legal advice. A UK-based chatbot, DoNotPay, provides free legal advice to asylum seekers using intelligent algorithms. It also provides personalized legal support, including help with the UK asylum application process.

AI technology is helpful not just to governments but also to the international organizations that deal with international migration. Some organizations are already leveraging machine learning in association with biometric technology. The IOM has introduced the Big Data for Migration Alliance, a project that intends to use different technologies in the field of international migration….(More)”.

The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance


Book by Steven Feldstein: “The world is undergoing a profound set of digital disruptions that are changing the nature of how governments counter dissent and assert control over their countries. While increasing numbers of people rely primarily or exclusively on online platforms, authoritarian regimes have concurrently developed a formidable array of technological capabilities to constrain and repress their citizens.

In The Rise of Digital Repression, Steven Feldstein documents how the emergence of advanced digital tools brings new dimensions to political repression. Presenting new field research from Thailand, the Philippines, and Ethiopia, he investigates the goals, motivations, and drivers of these digital tactics. Feldstein further highlights how governments pursue digital strategies based on a range of factors: ongoing levels of repression, political leadership, state capacity, and technological development. The international community, he argues, is already seeing glimpses of what the frontiers of repression look like. For instance, Chinese authorities have brought together mass surveillance, censorship, DNA collection, and artificial intelligence to enforce their directives in Xinjiang. As many of these trends go global, Feldstein shows how this has major implications for democracies and civil society activists around the world.

A compelling synthesis of how anti-democratic leaders harness powerful technology to advance their political objectives, The Rise of Digital Repression concludes by laying out innovative ideas and strategies for civil society and opposition movements to respond to the digital autocratic wave….(More)”.

Learning Policy, Doing Policy: Interactions Between Public Policy Theory, Practice and Teaching


Open Access Book edited by: Trish Mercer, Russell Ayres, Brian Head, and John Wanna: “When it comes to policymaking, public servants have traditionally learned ‘on the job’, with practical experience and tacit knowledge valued over theory-based learning and academic analysis. Yet increasing numbers of public servants are undertaking policy training through postgraduate qualifications and/or short courses.

Learning Policy, Doing Policy explores how policy theory is understood by practitioners and how it influences their practice. The book brings together insights from research, teaching and practice on an issue that has so far been understudied. Contributors include Australian and international policy scholars, and current and former practitioners from government agencies. The first part of the book focuses on theorising, teaching and learning about the policymaking process; the second part outlines how current and former practitioners have employed policy process theory in the form of models or frameworks to guide and analyse policymaking in practice; and the final part examines how policy theory insights can assist policy practitioners.

In exploring how policy process theory is developed, taught and taken into policymaking practice, Learning Policy, Doing Policy draws on the expertise of academics and practitioners, and also ‘pracademics’ who often serve as a bridge between the academy and government. It draws on a range of both conceptual and applied examples. Its themes are highly relevant for both individuals and institutions, and reflect trends towards a stronger professional ethos in the Australian Public Service. This book is a timely resource for policy scholars, teaching academics, students and policy practitioners….(More)”

Citizen assembly takes on Germany’s climate pledges


Martin Kuebler at Deutsche Welle: “A group of 160 German citizens chosen at random from across the country will launch an experiment in participatory democracy this week, aiming to inspire public debate and get the government to follow through with its pledge to reach net-zero CO2 emissions by 2050.

The Bürgerrat Klima, or Citizens’ Assembly on Climate, will follow the example set in the last few years by countries like Ireland, the United Kingdom and France. The concept, intended to directly involve citizens in the climate decisions that will shape their lives in the coming decades, is seen as a way for people to push for stronger climate policies and political action — though the previous experiments abroad have met with varying degrees of success.

Inspired by a 99-person Citizens’ Assembly, the Irish government adopted a series of reforms in its 2019 climate bill aimed at reducing carbon dioxide emissions by 51% before the end of this decade. These included recommendations “to ensure climate change is at the centre of policy-making,” and covered everything from clean tech and power generation to electric vehicles and plans to retrofit older buildings.

But in France, where 150 participants submitted bold proposals that included a ban on domestic flights and making ecocide a crime, lawmakers have been less enthusiastic about taking the measures on board. A new climate and resilience bill, which aims to cut France’s CO2 emissions by 40% over the next decade and is due to be adopted later this year, has incorporated less than half of the group’s ideas. Greenpeace has said the proposed bill would have been “ambitious 15 or 20 years ago.”…(More)”.

Bridging the data-policy gap in Africa


Report by PARIS21 and the Mo Ibrahim Foundation (MIF): “National statistics are an essential component of policymaking: they provide the evidence required to design policies that address the needs of citizens, to monitor results and hold governments to account. Data and policy are closely linked. As Mo Ibrahim puts it: “without data, governments drive blind”. However, there is evidence that the capacity of African governments for data-driven policymaking remains limited by a wide data-policy gap.

What is the data-policy gap?
On the data side, statistical capacity across the continent has improved in recent decades. However, it remains low compared to other world regions and is hindered by several challenges. African national statistical offices (NSOs) often lack adequate financial and human resources as well as the capacity to provide accessible and available data. On the policy side, data literacy as well as a culture of placing data first in policy design and monitoring are still not widespread. Thus, investing in the basic building blocks of national statistics, such as civil registration, is often not a key priority.

At the same time, international development frameworks, such as the United Nations 2030 Agenda for Sustainable Development and the African Union Agenda 2063, require that every signatory country produce and use high-quality, timely and disaggregated data in order to shape development policies that leave no one behind and to fulfil reporting commitments.

Also, the new data ecosystem linked to digital technologies is producing an explosion of data sourced from non-state providers. Within this changing data landscape, African NSOs, like those in many other parts of the world, are confronted with a new data stewardship role. This will add further pressure on the capacity of NSOs and present additional challenges in navigating issues of governance and use…

Recommendations as part of a six-point roadmap for bridging the data-policy gap include:

  1. Creating a statistical capacity strategy to raise funds
  2. Connecting to knowledge banks to hire and retain talent
  3. Building good narratives for better data use
  4. Recognising the power of foundational data
  5. Strengthening statistical laws to harness the data revolution
  6. Encouraging data use in policy design and implementation…(More)”

Why bad times call for good data


Tim Harford in the Financial Times: “Watching the Ever Given wedge itself across the Suez Canal, it would have taken a heart of stone not to laugh. But it was yet another unpleasant reminder that the unseen gears in our global economy can all too easily grind or stick.

From the shutdown of Texas’s plastic polymer manufacturing to a threat to vaccine production from a shortage of giant plastic bags, we keep finding out the hard way that modern life relies on weak links in surprising places.

So where else is infrastructure fragile and taken for granted? I worry about statistical infrastructure — the standards and systems we rely on to collect, store and analyse our data.

Statistical infrastructure sounds less important than a bridge or a power line, but it can mean the difference between life and death for millions. Consider Recovery (Randomised Evaluation of Covid-19 Therapy). Set up in a matter of days by two Oxford academics, Martin Landray and Peter Horby, over the past year Recovery has enlisted hospitals across the UK to run randomised trials of treatments such as the antimalarial drug hydroxychloroquine and the cheap steroid dexamethasone.

With minimal expense and paperwork, it turned the guesses of physicians into simple but rigorous clinical trials. The project quickly found that dexamethasone was highly effective as a treatment for severe Covid-19, thereby saving a million lives.

Recovery relied on data accumulated as hospitals treated patients and updated their records. It wasn’t always easy to reconcile the different sources — some patients were dead according to one database and alive on another. But such data problems are solvable and were solved. A modest amount of forethought about collecting the right data in the right way has produced enormous benefits….
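
Purely as a hypothetical illustration of the kind of reconciliation problem described above (and not the Recovery trial's actual data pipeline), one simple approach is to merge the sources, flag disagreements, and fall back on the most recently updated record:

```python
# Hypothetical sketch: reconciling conflicting patient-status records
# from two sources. Data and rule are illustrative assumptions only.
from datetime import date

hospital_db = {"patient_17": {"status": "alive", "updated": date(2020, 6, 2)}}
registry_db = {"patient_17": {"status": "deceased", "updated": date(2020, 6, 9)}}

def reconcile(pid):
    a, b = hospital_db.get(pid), registry_db.get(pid)
    if a and b and a["status"] != b["status"]:
        # Conflict: keep the most recently updated record and flag it
        # for manual review rather than silently trusting either source.
        latest = max((a, b), key=lambda rec: rec["updated"])
        return {"status": latest["status"], "flag": "conflict_review"}
    return a or b

print(reconcile("patient_17"))  # {'status': 'deceased', 'flag': 'conflict_review'}
```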

But it isn’t just poor countries that have suffered. In the US, data about Covid-19 testing was collected haphazardly by states. This left the federal government flying blind, unable to see where and how quickly the virus was spreading. Eventually volunteers, led by the journalists Robinson Meyer and Alexis Madrigal of the Covid Tracking Project, put together a serviceable data dashboard. “We have come to see the government’s initial failure here as the fault on which the entire catastrophe pivots,” wrote Meyer and Madrigal in The Atlantic. They are right.

What is more striking is that the weakness was there in plain sight. Madrigal recently told me that the government’s plan for dealing with a pandemic assumed that good data would be available — but did not build the systems to create them. It is hard to imagine a starker example of taking good statistical infrastructure for granted….(More)”.

Global inequality remotely sensed


Paper by M. Usman Mirza et al: “Economic inequality is notoriously difficult to quantify as reliable data on household incomes are missing for most of the world. Here, we show that a proxy for inequality based on remotely sensed nighttime light data may help fill this gap. Individual households cannot be remotely sensed. However, as households tend to segregate into richer and poorer neighborhoods, the correlation between light emission and economic thriving shown in earlier studies suggests that spatial variance of remotely sensed light per person might carry a signal of economic inequality.

To test this hypothesis, we quantified Gini coefficients of the spatial variation in average nighttime light emitted per person. We found a significant relationship between the resulting light-based inequality indicator and existing estimates of net income inequality. This correlation between light-based Gini coefficients and traditional estimates exists not only across countries, but also on a smaller spatial scale comparing the 50 states within the United States. The remotely sensed character makes it possible to produce high-resolution global maps of estimated inequality. The inequality proxy is entirely independent from traditional estimates as it is based on observed light emission rather than self-reported household incomes. Both are imperfect estimates of true inequality. However, their independent nature implies that the light-based proxy could be used to constrain uncertainty in traditional estimates. More importantly, the light-based Gini maps may provide an estimate of inequality where previously no data were available at all….(More)”.
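
As a minimal sketch of the approach described above (assuming gridded light-emission and population rasters have already been aligned; the code below stands in for them with random placeholders), one can compute light per person in each cell and take the Gini coefficient of the resulting distribution:

```python
# Minimal sketch of a light-based inequality proxy: Gini coefficient of
# nighttime light per person across grid cells. The arrays are random
# placeholders; the paper's calibration and preprocessing are not shown.
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values."""
    v = np.sort(np.asarray(values, dtype=float))  # ascending order
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * v)) / (n * v.sum()) - (n + 1) / n

rng = np.random.default_rng(0)
light = rng.gamma(shape=2.0, scale=5.0, size=10_000)    # radiance per cell
population = rng.integers(50, 5_000, size=10_000)       # people per cell

light_per_person = light / population
print(f"Light-based Gini: {gini(light_per_person):.3f}")
```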

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
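
Purely as an illustration of this tiered structure (and not a legal classification tool), the four categories can be sketched as a simple lookup from example use cases named in the press release to the kind of obligation each tier carries:

```python
# Illustrative sketch only: the draft regulation's four risk tiers as a
# lookup table. Examples are taken from the press release; the
# "obligation" strings are informal paraphrases, not legal text.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by governments", "manipulative toys"],
        "obligation": "banned",
    },
    "high": {
        "examples": ["CV-sorting software", "credit scoring", "exam scoring"],
        "obligation": "strict requirements before market entry",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency: users must know they face a machine",
    },
    "minimal": {
        "examples": ["spam filters", "AI-enabled video games"],
        "obligation": "no new obligations",
    },
}

def tier_for(use_case: str) -> str:
    """Return the tier and paraphrased obligation for a known example."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{use_case}: {tier} risk -> {info['obligation']}"
    return f"{use_case}: not listed, assess against the regulation"

print(tier_for("credit scoring"))
```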

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.