What kind of evidence influences local officials? A great example from Guatemala


Paper by Walter Flores: “Between 2007 and now, we have implemented five different methods for gathering evidence:

1) Surveys of health clinics with random sampling,

2) Surveys using tracers and convenience-based sampling,

3) Life histories of the users of health services,

4) User complaints submitted via text messages,

5) Video and photography documenting service delivery problems.

Each of these methods was deployed for a period of 2-3 years and accompanied by detailed monitoring to track its effects on two outcome variables:

1) the level of community participation in planning, data collection and analysis; and

2) the responsiveness of the authorities to the evidence presented.

Our initial intervention generated evidence by surveying a random sample of health clinics—widely considered to be a highly rigorous method for collecting evidence. As the surveys were long and technically complicated, participation from the community was close to zero. Yet our expectation was that, given its scientific rigor, authorities would be responsive to the evidence we presented. The government instead used technical methodological objections as a pretext to dismiss the service delivery problems we identified. It was clear that such arguments were an excuse and that the authorities did not want to act.

Our next effort was to simplify the survey and involve communities in surveying, analysis, and report writing. However, as the table (Flores, fig. 1) shows, participation was still “minimal,” as was the responsiveness of the authorities. Many community members still struggled to participate and the authorities rejected the evidence as unreliable, again citing methodological concerns. Together with community leaders, we decided to move away from surveys altogether, so authorities could no longer use technical arguments to disregard the evidence.

For our next method, we began collecting the life stories of real patients and users of health services. The decision to adopt this new method was taken together with communities. Community members were trained to identify cases of poor service delivery, interview users, and write down their experiences. These testimonies vividly described the impact of poor health services: children unable to go to school because they needed to attend to sick relatives; sick parents unable to care for young children; breadwinners unable to go to work, leaving families destitute.

This type of evidence changed the meetings between community leaders and authorities considerably, shifting from arguments over data to discussing the struggles real people faced due to nonresponsive services. After a year of responding to individual life-stories, however, authorities started to treat the information presented as “isolated cases” and became less responsive.

We regrouped again with community leaders to reflect on how to further boost community participation and achieve a response from authorities. We agreed that more agile and less burdensome methods for community volunteers to collect and disseminate evidence might increase the response from authorities. After reviewing different options, we agreed to build a complaint system that allowed users to send coded text messages to an open-access platform….(More)”.
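The excerpt does not describe the message format the platform used, but the idea of “coded text messages” can be sketched in a few lines. The clinic-ID format and the complaint codes below are hypothetical illustrations, not the codes actually used in Guatemala:

```python
import re
from collections import Counter

# Hypothetical complaint codes -- the real codes used by the platform
# are not given in the excerpt.
COMPLAINT_CODES = {
    "MED": "medicines unavailable",
    "ABS": "staff absent",
    "FEE": "illegal fee charged",
    "MIS": "mistreatment by staff",
}

def parse_sms(text):
    """Parse a coded SMS like 'C12 MED' into (clinic_id, complaint)."""
    m = re.fullmatch(r"C(\d+)\s+([A-Z]{3})", text.strip())
    if not m or m.group(2) not in COMPLAINT_CODES:
        return None  # malformed message; the sender would be asked to resend
    return int(m.group(1)), COMPLAINT_CODES[m.group(2)]

def tally(messages):
    """Count complaints per (clinic, category) for an open-access dashboard."""
    return Counter(p for p in map(parse_sms, messages) if p is not None)
```

The appeal of such a scheme for community volunteers is exactly what the authors describe: sending “C12 MED” takes seconds and no training in survey methodology, while the aggregated tallies are hard to dismiss on methodological grounds.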

Plunging response rates to household surveys worry policymakers


The Economist: “Response rates to surveys are plummeting all across the rich world. Last year only around 43% of households contacted by the British government responded to the Labour Force Survey (LFS), down from 70% in 2001 (see chart). In America the share of households responding to the Current Population Survey (CPS) has fallen from 94% to 85% over the same period. The rest of Europe and Canada have seen similar trends.

Poor response rates drain budgets, as it takes surveyors more effort to hunt down interviewees. And a growing reluctance to give interviewers information threatens the quality of the data. Politicians often complain about inaccurate election polls. Increasingly misleading economic surveys would be even more disconcerting.

Household surveys derive their power from randomness. Since it is impractical to get every citizen to complete a long questionnaire regularly, statisticians interview what they hope is a representative sample instead. But some types are less likely to respond than others—people who live in flats not houses, for example. A study by Christopher Bollinger of the University of Kentucky and three others matched data from the CPS with social-security records and found that poorer and very rich households were more likely to ignore surveyors than middle-income ones. Survey results will be skewed if the types who do not answer are different from those who do, or if certain types of people are more loth to answer some questions, or more likely to fib….

Statisticians have been experimenting with methods of improving response rates: new ways to ask questions, or shorter questionnaires, for example. Payment raises response rates, and some surveys offer more money for the most reluctant interviewees. But such persistence can have drawbacks. One study found that more frequent attempts to contact interviewees raised the average response rate, but lowered the average quality of answers.

Statisticians have also been exploring supplementary data sources, including administrative data. Such statistics come with two big advantages. One is that administrative data sets can include many more people and observations than is practical in a household survey, giving researchers the statistical power to run more detailed studies. Another is that governments already collect them, so they can offer huge cost savings over household surveys. For instance, Finland’s 2010 census, which was based on administrative records rather than surveys, cost its government just €850,000 ($1.1m) to produce. In contrast, America’s government spent $12.3bn on its 2010 census, roughly 200 times as much on a per-person basis.
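The “roughly 200 times” claim can be sanity-checked with back-of-the-envelope arithmetic. The 2010 populations used below (Finland ~5.4m, United States ~309m) are my assumptions, not figures from the article:

```python
# Per-person census cost comparison (populations are assumed, approximate
# 2010 figures; the dollar costs are from the article).
finland_cost, finland_pop = 1.1e6, 5.4e6    # $1.1m, ~5.4m people
us_cost, us_pop = 12.3e9, 309e6             # $12.3bn, ~309m people

finland_per_person = finland_cost / finland_pop   # about $0.20 per person
us_per_person = us_cost / us_pop                  # about $40 per person
ratio = us_per_person / finland_per_person        # ~195, i.e. "roughly 200x"
```

The ratio comes out near 195, consistent with the article’s rounded figure.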

Recent advances in computing mean that vast data sets are no longer too unwieldy for use by researchers. However, in many rich countries (those in Scandinavia are exceptions), socioeconomic statistics are collected by several agencies, meaning that researchers who want to combine, say, health records with tax data, face formidable bureaucratic and legal challenges.

Governments in English-speaking countries are especially keen to experiment. In January HMRC, the British tax authority, started publishing real-time tax data as an “experimental statistic” to be compared with labour-market data from household surveys. Two-fifths of Canada’s main statistical agency’s programmes are based at least in part on administrative records. Last year, Britain passed the Digital Economy Act, which will give its Office for National Statistics (ONS) the right to requisition data from other departments and from private sources for statistics-and-research purposes. America is exploring using such data as part of its 2020 census.

Administrative data also have their limitations (see article). They are generally not designed to be used in statistical analyses. A data set on income taxes might be representative of the population receiving benefits or earning wages, but not the population as a whole. Most important, some things are not captured in administrative records, such as well-being, informal employment and religious affiliation….(More)”.

Data Activism


Special Issue of Krisis: Journal of Contemporary Philosophy: “Digital data increasingly plays a central role in contemporary politics and public life. Citizen voices are increasingly mediated by proprietary social media platforms and are shaped by algorithmic ranking and re-ordering, but data informs how states act, too. This special issue wants to shift the focus of the conversation. Non-governmental organizations, hackers, and activists of all kinds provide a myriad of ‘alternative’ interventions, interpretations, and imaginaries of what data stands for and what can be done with it.

Jonathan Gray starts off this special issue by suggesting how data can be involved in providing horizons of intelligibility and organising social and political life. Helen Kennedy’s contribution advocates for a focus on emotions and everyday lived experiences with data. Lina Dencik puts forward the notion of ‘surveillance realism’ to explore the pervasiveness of contemporary surveillance and the emergence of alternative imaginaries. Stefan Baack investigates how data are used to facilitate civic engagement. Miren Gutiérrez explores how activists can make use of data infrastructures such as databases, servers, and algorithms. Finally, Leah Horgan and Paul Dourish critically engage with the notion of data activism by looking at everyday data work in a local administration. Further, this issue features an interview with Boris Groys by Thijs Lijster, whose work Über das Neue celebrated its 25th anniversary last year. Lastly, three book reviews illuminate key aspects of datafication. Patricia de Vries reviews Metahavens’ Black Transparency; Niels van Doorn writes on Platform Capitalism by Nick Srnicek and Jan Overwijk comments on The Entrepreneurial Self by Ulrich Bröckling….(More)”.

Public Policy in an AI Economy


NBER Working Paper by Austan Goolsbee: “This paper considers the role of policy in an AI-intensive economy (interpreting AI broadly). It emphasizes the speed of adoption of the technology for the impact on the job market and the implications for inequality across people and across places. It also discusses the challenges of enacting a Universal Basic Income as a response to widespread AI adoption, as well as pricing, privacy, and competition policy, and the question of whether AI could improve policy making itself….(More)”.

Policy experimentation: core concepts, political dynamics, governance and impacts


Article by Dave Huitema, Andrew Jordan, Stefania Munaretto and Mikael Hildén in Policy Sciences: “In the last two decades, many areas of the social sciences have embraced an ‘experimentalist turn’. It is well known for instance that experiments are a key ingredient in the emergence of behavioral economics, but they are also increasingly popular in sociology, political science, planning, and in architecture (see McDermott 2002). It seems that the potential advantages of experiments are better appreciated today than they were in the past.

But the turn towards experimentalism is not without its critics. In her passionate plea for more experimentation in political science for instance, McDermott (2002: 42) observes how many political scientists are hesitant: they are more interested in large-scale multiple regression work, lack training in experimentation, do not see how experiments could fit into a broader research strategy, and alternative movements in political science (such as constructivists and postmodernists) consider that experimental work is not able to capture complexities and nuances. Representing some of these criticisms, Howe (2004) suggests that experimentation is being oversold and highlights various complications, especially the trade-offs that exist between internal and external validity, the fact that causal inferences can be generated using many other research methods, and the difficulty of comparing governance interventions to new medications in medicine….(More)”.

Governance on the Drug Supply Chain via Gcoin Blockchain


Paper by Jen-Hung Tseng et al in the International Journal of Environmental Research and Public Health: “…blockchain was recently introduced to the public to provide an immutable, consensus-based and transparent system in the Fintech field. However, there are ongoing efforts to apply blockchain to other fields where trust and value are essential. In this paper, we suggest the Gcoin blockchain as the base of the data flow of drugs to create transparent drug transaction data. Additionally, the regulation model of the drug supply chain could be shifted from an inspection-and-examination-only model to a surveillance-net model, in which every unit involved in the drug supply chain could participate simultaneously to prevent counterfeit drugs and to protect public health, including patients….(More)”.
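The abstract does not describe Gcoin’s internals, but the property it relies on, that transaction records become tamper-evident once chained, can be illustrated generically. The sketch below is a minimal hash-chained ledger in plain Python, not the actual Gcoin protocol:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_transaction(chain, record):
    """Append a drug-transfer record, linking it to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)
    return chain

def verify(chain):
    """Any retroactive edit to a record breaks every later link."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or \
           b["hash"] != block_hash({"prev": b["prev"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True
```

Each transfer (manufacturer to wholesaler to pharmacy) appends a block, so a regulator can audit the whole supply chain rather than relying on spot inspections, which is the shift from inspection-only to surveillance-net regulation the authors describe.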

How Policymakers Can Foster Algorithmic Accountability


Report by Joshua New and Daniel Castro: “Increased automation with algorithms, particularly through the use of artificial intelligence (AI), offers opportunities for the public and private sectors to complete increasingly complex tasks with a level of productivity and effectiveness far beyond that of humans, generating substantial social and economic benefits in the process. However, many believe an increased use of algorithms will lead to a host of harms, including exacerbating existing biases and inequalities, and have therefore called for new public policies, such as establishing an independent commission to regulate algorithms or requiring companies to explain publicly how their algorithms make decisions. Unfortunately, all of these proposals would lead to less AI use, thereby hindering social and economic progress.

Policymakers should reject these proposals and instead support algorithmic decision-making by promoting policies that ensure its robust development and widespread adoption. Like any new technology, there are strong incentives among both developers and adopters to improve algorithmic decision-making and ensure its applications do not contain flaws, such as bias, that reduce their effectiveness. Thus, rather than establish a master regulatory framework for all algorithms, policymakers should do what they have always done with regard to technology regulation: enact regulation only where it is required, targeting specific harms in particular application areas through dedicated regulatory bodies that are already charged with oversight of that particular sector. To accomplish this, regulators should pursue algorithmic accountability—the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e., the party responsible for deploying the algorithm) can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Adopting this framework would both promote the vast benefits of algorithmic decision-making and minimize harmful outcomes, while also ensuring laws that apply to human decisions can be effectively applied to algorithmic decisions….(More)”.

Using Collaborative Crowdsourcing to Give Voice to Diverse Communities


Dennis Di Lorenzo at Campus Technology: “Universities face many critical challenges — student retention, campus safety, curriculum development priorities, alumni engagement and fundraising, and inclusion of diverse populations. In my role as dean of the New York University School of Professional Studies (NYUSPS) for the past four years, and in my prior 20 years of employment in senior-level positions within the school and at NYU, I have become intimately familiar with the complexities and the nuances of such multifaceted challenges.

For the past two years, one of our top priorities at NYUSPS has been striving to address sensitive issues regarding diversity and inclusion….

To identify and address the issues we saw arising from the shifting dynamics we were encountering in our classrooms, my team initially set about gathering feedback from NYUSPS faculty members and students through roundtable discussions. Though many individuals participated in these, we sensed that some were anxious and unwilling to fully share their experiences. We were able to initiate some productive conversations; however, we found they weren’t getting to the heart of the matter. To provide a sense of anonymity that would allow members of the NYUSPS community to express their concerns more freely, we identified a collaboration tool called POPin and utilized it to conduct a series of crowdsourcing campaigns that commenced with faculty members and then proceeded on to students.

Fostering Vital Conversations

Using POPin’s online discussion tool, we were able to scale an intimate and sensitive conversation up to include more than 4,500 students and 2,100 faculty members from a wide variety of countries, cultural and religious backgrounds, gender and sexual identities, economic classes and life stages. Because the tool’s feedback mechanism is both anonymous and interactive, the scope and quality of the conversations increased dramatically….(More)”.

EU ministers endorse Commission’s plans for research cloud


European Commission: “The European Open Science Cloud, which will support the EU’s global leadership in science by creating a trusted environment for hosting and processing research data, is one important step closer to becoming a reality. Meeting in Brussels today, EU research ministers endorsed the roadmap for its creation. The Conclusions of the Competitiveness Council, proposed by the current Bulgarian Presidency of the Council of the EU, are the result of two years of intense negotiations….

According to Commissioner Moedas, much remains to be done to make the EOSC a reality by 2020, but several important aspects stand out:

  1. the Cloud should be a wide, pan-European federation of existing and emerging excellent infrastructures, which respects the governance and funding mechanisms of its components;
  2. membership in this federation would be voluntary; and
  3. the governance structure would include member state ministries, stakeholders and scientists.

…In another important step for Open Science, the Commission published today the final recommendations of the Open Science Policy Platform. Established in 2016, the platform comprises important stakeholders who advise the Commission on how to further develop and practically implement Open Science policy in order to improve radically the quality and impact of European science….(More)”.

Don’t Fight Regulation. Reprogram It


Article by Alison Kutler and Antonio Sweet: “Businesspeople too often assume that the relationship between government and the private sector is (and should be) adversarial. They imagine two opposing forces, each setting their bounds of control. But if you can envision government and business as platforms that interact with one another, it becomes apparent why the word code applies to both technology and law. A successful business leader works with regulation the way a successful app developer works with another company’s operating system: testing it, providing innovative ways to get results within the system’s constraints, and offering guidance, where possible, to help make the system more efficient, more fair, and more valuable to end-users.

Like the computer language of an operating system, legal and regulatory codes follow rules designed to make them widely recognizable to those who are appropriately trained. As legislators, regulators, and other officials write that code, they seek input from stakeholders through hearings and public-comment filings on proposed rules. Policymakers rely on constituents, public filings, and response analysis the way software designers use beta testers, crash reports, and developer feedback — to debug and validate code before deploying it across the entire system.

Unfortunately, policymakers and business leaders don’t always embrace what software developers know about collaborative innovation. Think about how much less a smartphone could do if its manufacturers never worked closely with people outside of their engineering department. When only a small subset of voices is involved, the final code reflects only the needs of the most vocal groups. As a result, the unengaged are stuck with a system that doesn’t take their needs into account, or worse, disables their product.

Policymakers may also benefit from emulating the kind of interoperability that makes software effective. When enterprise systems are too different from each other, people struggle with unfamiliar systems. They also run into interoperability issues when trying to work across multiple systems. A product development team can devote massive amounts of resources to designing and building something to work perfectly in one operating system domain, only to have it slow down or completely freeze in another…(More)”.