The Stoplight Battling to End Poverty


Nick Dall at OZY: “Over midafternoon coffees and Fantas, Robyn-Lee Abrahams and Joyce Paulse — employees at my local supermarket in Cape Town, South Africa — tell me how their lives have changed in the past 18 months. “I never dreamed my daughter would go to college,” says Paulse. “But yesterday we went online together and started filling in the forms.”

Abrahams notes how she used to live hand to mouth. “But now I’ve got a savings account, which I haven’t ever touched.” The sacrifice? “I eat less chocolate now.”

Paulse and Abrahams are just two of thousands of beneficiaries of the Poverty Stoplight, a self-evaluation tool that’s now redefining poverty in countries as diverse as Argentina and the U.K.; Mexico and Tanzania; Chile and Papua New Guinea. By getting families to rank their own economic condition red, yellow or green based upon 50 indicators, the Poverty Stoplight gives families the agency to pull themselves out of poverty and offers organizations insight into whether their programs are working.

Social entrepreneur Martín Burt, who founded Fundación Paraguaya 33 years ago to promote entrepreneurship and economic empowerment in Paraguay, developed the first, paper-based prototype of the Poverty Stoplight in 2010 to help the organization’s microfinance clients escape the poverty cycle….Because poverty is multidimensional, “you can have a family with a proper toilet but no savings,” points out Burt. The questionnaire spans six different aspects of people’s lives, including softer indicators such as community involvement, self-confidence and family violence. The survey, a series of 50 multiple-choice questions with visual cues, is aimed at households, not individuals, because “you cannot get a 10-year-old girl out of poverty in isolation,” says Burt. Confidentiality is another critical component….(More)”.
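The tool’s mechanics are simple enough to sketch in code. Below is a minimal, hypothetical model of the Stoplight’s scoring scheme; the dimension and indicator names are illustrative stand-ins, not Fundación Paraguaya’s actual 50-question instrument:

```python
from collections import Counter

# Each indicator belongs to one of six dimensions of a household's life and is
# self-scored red (extreme poverty), yellow (poverty) or green (no poverty).
SCORES = {"red", "yellow", "green"}

def summarize(responses):
    """Tally a household's self-evaluation by color and by dimension.

    `responses` maps (dimension, indicator) -> "red" | "yellow" | "green".
    """
    by_color = Counter(responses.values())
    by_dimension = {}
    for (dimension, _indicator), score in responses.items():
        assert score in SCORES, f"invalid score: {score}"
        by_dimension.setdefault(dimension, Counter())[score] += 1
    return by_color, by_dimension

# A toy household (illustrative dimensions and indicators only).
household = {
    ("Income & Employment", "savings"): "red",
    ("Housing & Infrastructure", "proper toilet"): "green",
    ("Interiority & Motivation", "self-confidence"): "yellow",
}
colors, dims = summarize(household)
print(colors["red"], "of", sum(colors.values()), "indicators are red")
```

Tallying by dimension as well as by color mirrors Burt’s point that poverty is multidimensional: a household can be green on sanitation yet red on savings.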

Walmart wants to track lettuce on the blockchain


Matthew Beedham at TNW: “Walmart is asking all of its leafy greens suppliers to get on blockchain by this time next year.

With instances of E. coli on the rise, particularly in romaine lettuce, Walmart is insisting that its suppliers use blockchain to track and trace products from source to the customer.

Walmart notes that, while health officials at the Centers for Disease Control have already warned Americans to avoid eating lettuce grown in Yuma, Arizona, it’s near impossible for consumers to know where their greens are coming from.

On one hand, this could be a great system for reducing waste. Earlier this year, greengrocers had to throw away produce thought to be infected with E. coli.


It would seem that most producers and suppliers still rely on paper-based ledgers. As a result, tracking down vital information about where a product came from can be very time consuming.

By then it may already be too late: many customers might have purchased and consumed infected produce.

If Walmart’s plans come to fruition, it would allow customers to view the entire supply chain of a product at the point of purchase… (More)”
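To make the contrast with paper-based ledgers concrete, here is a minimal sketch of the general technique such a system implies: an append-only, hash-chained record of custody events for each lot of produce. The field names are assumptions for illustration, not Walmart’s actual schema:

```python
import hashlib
import json
import time

def _hash(record):
    # Deterministic hash of a record; because each record includes the
    # previous record's hash, tampering with history invalidates every
    # later entry.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(chain, actor, event, lot):
    """Append a custody event (harvested, shipped, received...) to the chain."""
    record = {
        "lot": lot,                # e.g. a lot number printed on the bag
        "actor": actor,            # grower, distributor, store...
        "event": event,
        "timestamp": time.time(),
        "prev": chain[-1]["hash"] if chain else None,
    }
    record["hash"] = _hash(record)  # hash computed before the key is added
    chain.append(record)
    return chain

# Trace one (hypothetical) lot of romaine from field to shelf.
chain = []
append_event(chain, "Yuma Farm Co.", "harvested", lot="ROM-2018-0042")
append_event(chain, "Desert Logistics", "shipped", lot="ROM-2018-0042")
append_event(chain, "Walmart #1234", "received", lot="ROM-2018-0042")

# In a recall, the lot's origin is one lookup away, not days of paperwork:
print(chain[0]["actor"])  # Yuma Farm Co.
```

The design choice doing the work here is the `prev` pointer: an auditor can verify the whole history of a lot by re-hashing the chain, which is what makes the record trustworthy enough to act on during a recall.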

Ethics & Algorithms Toolkit


Toolkit: “Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.

GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit to help cities understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them….We developed this because:

  • We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
  • We wanted an approach which governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations (a toy sketch of this scoring step follows below)… (More)”.
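As an illustration of that risk-management framing, the sketch below scores an algorithm on a few weighted risk dimensions and maps the total to a recommended mitigation. The dimensions, weights, and thresholds are hypothetical, not the toolkit’s actual rubric:

```python
# Toy risk assessment in the spirit of the toolkit: score an algorithm on a
# few dimensions, then map the overall level to a recommended mitigation.
# Dimensions, weights, and thresholds here are hypothetical.
RISK_DIMENSIONS = {
    "impact_on_rights": 3,   # weight: decisions affecting liberty or benefits
    "data_bias": 2,          # weight: historical bias in the training data
    "opacity": 1,            # weight: how hard the model is to explain
}

MITIGATIONS = [
    (6, "high: require human review of every decision and a public audit"),
    (3, "medium: publish the model's purpose, inputs, and error rates"),
    (0, "low: document the system and monitor outcomes"),
]

def assess(scores):
    """`scores` maps dimension -> 0 (low) .. 2 (high). Returns (total, advice)."""
    total = sum(RISK_DIMENSIONS[d] * s for d, s in scores.items())
    for threshold, advice in MITIGATIONS:
        if total >= threshold:
            return total, advice

print(assess({"impact_on_rights": 2, "data_bias": 1, "opacity": 0}))
# (8, 'high: require human review of every decision and a public audit')
```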

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single, readily accessible federal source of wage data that covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. It recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”

Causal mechanisms and institutionalisation of open government data in Kenya


Paper by Paul W. Mungai: “Open data—including open government data (OGD)—has become a topic of prominence during the last decade. However, most governments have not realised the desired value streams or outcomes from OGD. The Kenya Open Data Initiative (KODI), a Government of Kenya initiative, is no exception with some moments of success but also sustainability struggles. Therefore, the focus for this paper is to understand the causal mechanisms that either enable or constrain institutionalisation of OGD initiatives. Critical realism is ideally suited as a paradigm to identify such mechanisms, but guides to its operationalisation are few. This study uses the operational approach of Bygstad, Munkvold & Volkoff’s six‐step framework, a hybrid approach that melds concepts from existing critical realism models with the idea of affordances. The findings suggest that data demand and supply mechanisms are critical in institutionalising KODI and that, underpinning basic data‐related affordances, are mechanisms engaging with institutional capacity, formal policy, and political support. It is the absence of such elements in the Kenya case which explains why it has experienced significant delays…(More)”.

The role of corporations in addressing AI’s ethical dilemmas


Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.

Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:

  1. Hire ethicists who work with corporate decisionmakers and software developers
  2. Develop a code of AI ethics that lays out how various issues will be handled
  3. Have an AI review board that regularly addresses corporate ethical questions
  4. Develop AI audit trails that show how various coding decisions have been made (a minimal sketch of such a record follows this list)
  5. Implement AI training programs so staff operationalizes ethical considerations in their daily work, and
  6. Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.
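Of these steps, the audit trail (step 4) is the most directly implementable. At minimum it could be a structured, append-only record of who decided what and why; the schema below is an assumption for illustration, not one proposed in the paper:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One entry in an AI audit trail: who changed what in the system and why.

    A hypothetical schema; a real audit trail would also capture data
    versions, evaluation results, and review-board sign-offs.
    """
    author: str
    component: str          # e.g. "loan-scoring-model"
    decision: str           # what was changed or chosen
    rationale: str          # why, including ethical considerations weighed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

trail = []
trail.append(AIDecisionRecord(
    author="j.doe",
    component="loan-scoring-model",
    decision="dropped ZIP code as a feature",
    rationale="proxy for race; flagged by ethics review board",
))
print(asdict(trail[0])["decision"])  # dropped ZIP code as a feature
```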

Illuminating GDP


Money and Banking: “GDP figures are ‘man-made’ and therefore unreliable,” reported remarks of Li Keqiang (then Communist Party secretary of the northeastern Chinese province of Liaoning), March 12, 2007.

Satellites are great. It is hard to imagine living without them. GPS navigation is just the tip of the iceberg. Taking advantage of the immense amounts of information collected over decades, scientists have been using satellite imagery to study a broad array of questions, ranging from agricultural land use to the impact of climate change to the geographic constraints on cities (see here for a recent survey).

One of the most well-known economic applications of satellite imagery is to use night-time illumination to enhance the accuracy of various reported measures of economic activity. For example, national statisticians in countries with poor information collection systems can employ information from satellites to improve the quality of their nationwide economic data (see here). Even where governments have relatively high-quality statistics at a national level, it remains difficult and costly to determine local or regional levels of activity. For example, while production may occur in one jurisdiction, the income generated may be reported in another. At a sufficiently high resolution, satellite tracking of night-time light emissions can help address this question (see here).

But satellite imagery is not just an additional source of information on economic activity; it is also a neutral one that is less prone to manipulation than standard accounting data. This makes it possible to use information on night-time light to monitor the accuracy of official statistics. And, as we suggest later, the willingness of observers to apply a “satellite correction” could nudge countries to improve their own data reporting systems in line with recognized international standards.
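One simple way to operationalize such a “satellite correction”: estimate the relationship between reported GDP growth and lights growth in countries with credible statistics, then compare a country’s reported growth with what its lights imply. The sketch below uses synthetic numbers purely for illustration; the real estimation in this literature is far more careful:

```python
import numpy as np

# Step 1: fit g_gdp = a + b * g_lights on countries with credible statistics.
# All figures below are synthetic, purely for illustration.
g_lights = np.array([0.02, 0.05, 0.01, 0.08, 0.03])   # lights growth
g_gdp = np.array([0.021, 0.043, 0.012, 0.065, 0.028])  # reported GDP growth
b, a = np.polyfit(g_lights, g_gdp, 1)  # slope (elasticity) and intercept

# Step 2: apply the fitted relationship to a country whose statistics we want
# to check. Reported growth far above the lights-implied figure is a red flag.
reported_growth = 0.07
observed_lights_growth = 0.03
predicted_growth = a + b * observed_lights_growth
print(f"lights-implied growth: {predicted_growth:.3f}")
print(f"reported minus implied: {reported_growth - predicted_growth:+.3f}")
```

A persistent positive gap between reported and lights-implied growth is exactly the kind of signal the “satellite correction” would act on.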

As Luis Martínez inquires in his recent paper, should we trust autocrats’ estimates of GDP? Even in relatively democratic countries, there are prominent examples of statistical manipulation (recall the cases of Greek sovereign debt in 2009 and Argentine inflation in 2014). In the absence of democratic checks on the authorities, Martínez finds even greater tendencies to distort the numbers….(More)”.

Constitutional Democracy and Technology in the age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen in cumulation and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience of the lawless internet, the relationship between technology and law as it has developed in the internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by enforceable rules that carry the legitimacy of the democratic process, that is, by law.

The paper closes with a call for a new culture of incorporating the principles of Democracy, Rule of Law and Human Rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More)”.

The political origins of transparency reform: insights from the Italian case


Paper by Fabrizio Di Mascio, Alessandro Natalini and Federica Cacciatore: “This research contributes to the expanding literature on the determinants of government transparency. It uncovers the dynamics of transparency in the Italian case, which shows an interesting reform trajectory: until the late 1980s no transparency provisions existed; since then, provisions have increased dramatically, spurred by changing patterns of political competition.

The analysis of the Italian case highlights that electoral uncertainty for incumbents is a double-edged sword for institutional reform: on the one hand, it incentivizes the adoption of ever-growing transparency provisions; on the other, it jeopardizes the implementation capacity of public agencies by leading to severe administrative burdens….(More)”.

European science funders ban grantees from publishing in paywalled journals


Martin Enserink at Science: “Frustrated with the slow transition toward open access (OA) in scientific publishing, 11 national funding organizations in Europe turned up the pressure today. As of 2020, the group, which jointly spends about €7.6 billion on research annually, will require every paper it funds to be freely available from the moment of publication. In a statement, the group said it will no longer allow the 6- or 12-month delays that many subscription journals now require before a paper is made OA, and it won’t allow publication in so-called hybrid journals, which charge subscriptions but also make individual papers OA for an extra fee.

The move means grantees from these 11 funders—which include the national funding agencies in the United Kingdom, the Netherlands, and France as well as Italy’s National Institute for Nuclear Physics—will have to forgo publishing in thousands of journals, including high-profile ones such as Nature, Science, Cell, and The Lancet, unless those journals change their business model. “We think this could create a tipping point,” says Marc Schiltz, president of Science Europe, the Brussels-based association of science organizations that helped coordinate the plan. “Really the idea was to make a big, decisive step—not to come up with another statement or an expression of intent.”

The announcement delighted many OA advocates. “This will put increased pressure on publishers and on the consciousness of individual researchers that an ecosystem change is possible,” says Ralf Schimmer, head of Scientific Information Provision at the Max Planck Digital Library in Munich, Germany. Peter Suber, director of the Harvard Library Office for Scholarly Communication, calls the plan “admirably strong.” Many other funders support OA, but only the Bill & Melinda Gates Foundation applies similarly stringent requirements for “immediate OA,” Suber says. The European Commission and the European Research Council support the plan; although they haven’t adopted similar requirements for the research they fund, a statement by EU Commissioner for Research, Science and Innovation Carlos Moedas suggests they may do so in the future and urges the European Parliament and the European Council to endorse the approach….(More)”.