Making the Case for Evidence-Based Decision-Making


Jennifer Brooks in Stanford Social Innovation Review: “After 15 years of building linkages between evidence, policy, and practice in social programs for children and families, I have one thing to say about our efforts to promote evidence-based decision-making: We have failed to capture the hearts and minds of the majority of decision-makers in the United States.

I’ve worked with state and federal leadership, as well as program administrators in the public and nonprofit spheres. Most of them just aren’t with us. They aren’t convinced that the payoffs of evidence-based practice (the method that uses rigorous tests to assess the efficacy of a given intervention) are worth the extra difficulty or expense of implementing those practices.

Why haven’t we gotten more traction for evidence-based decision-making? Three key reasons: 1) we have wasted time debating whether randomized control trials are the optimal approach, rather than building demand for more data-based decision-making; 2) we oversold the availability of evidence-based practices and underestimated what it takes to scale them; and 3) we did all this without ever asking what problems decision-makers are trying to solve.

If we want to gain momentum for evidence-based practice, we need to focus more on figuring out how to implement such approaches on a larger scale, in a way that uses data to improve programs on an ongoing basis….

We must start by understanding and analyzing the problem the decision-maker wants to solve. We need to offer more than lists of evidence-based strategies or interventions. What outcomes do the decision-makers want to achieve? And what do data tell us about why we aren’t getting those outcomes with current methods?…

None of the following ideas is rocket science, nor am I the first person to say them, but they do suggest ways that we can move beyond our current approaches in promoting evidence-based practice.

1. We need better data.

As Michele Jolin pointed out recently, few federal programs have sufficient resources to build or use evidence. There are limited resources for evaluation and other evidence-building activities, which too often are seen as “extras.” Moreover, many programs at the local, state, and national level have minimal information to use for program management and even fewer staff with the skills required to use it effectively…


2. We should attend equally to practices and to the systems in which they sit.

Systems improvements without changes in practice won’t get outcomes, but without systems reforms, evidence-based practices will have difficulty scaling up. …

3. You get what you pay for.

One fear I have is that we don’t actually know whether we can get better outcomes in our public systems without spending more money. And yet cost-savings seem to be what we promise when we sell the idea of evidence-based practice to legislatures and budget directors….

4. We need to hold people accountable for program results and promote ongoing improvement.

There is an inherent tension between using data for accountability and using it for program improvement….(More)”

The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response


Presentation by Patrick Meier at the Emergency Social Data Summit organized by the Red Cross …on “Collaborative Crisis Mapping” (the slides are available here): “What I want to expand on is the notion of a “marketplace for crowdsourcing” that I introduced at the Summit. The idea stems from my experience in the field of conflict early warning, the Ushahidi-Haiti deployment and my observations of the Ushahidi-DC and Ushahidi-Russia initiatives.

The crowd is always there. Paid Search & Rescue (SAR) teams and salaried emergency responders aren’t. Nor can they be on the corners of every street, whether that’s in Port-au-Prince, Haiti, Washington DC or Sukkur, Pakistan. But the real first responders, the disaster affected communities, are always there. Moreover, not all communities are equally affected by a crisis. The challenge is to link those who are most affected with those who are less affected (at least until external help arrives).

This is precisely what PIC Net and the Washington Post did when they partnered to deploy this Ushahidi platform in response to the massive snow storm that paralyzed Washington DC earlier this year. They provided a way for affected residents to map their needs and for those less affected to map the resources they could share to help others. You don’t need to be a disaster response professional to help your neighbor dig out their car.

More recently, friends at Global Voices launched the most ambitious crowdsourcing initiative in Russia in response to the massive forest fires. But they didn’t use this Ushahidi platform to map the fires. Instead, they customized the public map so that those who needed help could find those who wanted to help. In effect, they created an online marketplace to crowdsource crisis response. You don’t need professional certification in disaster response to drive someone’s grandparents to the next town over.

There’s a lot that disaster-affected populations can do (and already do) to help each other out in times of crisis. What may help is to combine the crowdsourcing of crisis information with what I call crowdfeeding in order to create an efficient marketplace for crowdsourcing response. By crowdfeeding, I mean taking crowdsourced information and feeding it right back to the crowd. Surely they need that information as much as, if not more than, external, paid responders who won’t get to the scene for hours or days….(More)”
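Meier’s “marketplace” is, at bottom, a matching problem: pair each reported need with the nearest compatible offer of help before external responders arrive. Below is a minimal, illustrative Python sketch of that matching step; the report fields ("kind", "loc") and the 10 km cutoff are invented for the example, not drawn from any Ushahidi deployment.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_needs_to_offers(needs, offers, max_km=10):
    """Pair each reported need with the nearest unclaimed offer of the same kind."""
    matches = []
    available = list(offers)
    for need in needs:
        candidates = [o for o in available if o["kind"] == need["kind"]]
        if not candidates:
            continue
        best = min(candidates, key=lambda o: haversine_km(need["loc"], o["loc"]))
        if haversine_km(need["loc"], best["loc"]) <= max_km:
            matches.append((need, best))
            available.remove(best)  # each offer is claimed at most once
    return matches

if __name__ == "__main__":
    needs = [{"kind": "shovel", "loc": (38.90, -77.04)}]
    offers = [{"kind": "shovel", "loc": (38.92, -77.03)},
              {"kind": "transport", "loc": (38.88, -77.00)}]
    print(match_needs_to_offers(needs, offers))
```

A real deployment would add deduplication, verification, and a way to feed the matches back to reporters (the “crowdfeeding” step), but the core of the marketplace is this pairing of needs with offers.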

Esri, Waze Partnership: A Growing Trend in Sharing Data for the Benefit of All?


Justine Brown at GovTech: “Esri and Waze announced in mid-October that they’re partnering to help local governments alleviate traffic congestion and analyze congestion patterns. Called the Waze Connected Citizens Program, the initiative — which enables local governments that use the Esri ArcGIS platform to exchange publicly available traffic data with Waze — may represent a growing trend in which citizens and government share data for the benefit of all.

Connecting Esri and Waze data will allow cities to easily share information about the conditions of their roads with drivers, while drivers anonymously report accidents, potholes and other road condition information back to the cities. Local governments can then merge that data into their existing emergency dispatch and street maintenance systems….

Through the Connected Citizens Program, Waze shares two main data sets with its government partners: Jams and Alerts….If there’s a major traffic jam in an unusual area, a traffic management center operator might be prompted to examine that area further. For example, Boston recently used Waze jam data to identify a couple of traffic-prone intersections in the Seaport district….Similarly, if a Waze user reports a crash, that information shows up on the city’s existing ArcGIS map. City personnel can assess the crash and combine the Waze data with its existing data sets, if desired. The city can then, for example, notify emergency responders to address the accident and send out vehicles if necessary….

The Connected Citizens Program could also provide local governments an alternative to IoT investments, because a city can utilize real-time reports from the road rather than investing in sensors and IoT infrastructure. The Kentucky Transportation Cabinet, for instance, uses data from the Connected Citizens Program in several ways, including to monitor and detect automobile accidents on its roadways….(More)”
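To make the data exchange concrete, here is a short, hedged Python sketch of consuming a Waze-style Alerts feed and turning crash reports into a GeoJSON layer that could then be merged into an existing ArcGIS map. The feed URL and field names ("type", "location", "pubMillis", "street") are assumptions for illustration, not the documented Connected Citizens Program schema.

```python
import json

import requests  # third-party HTTP library, assumed available

FEED_URL = "https://example.org/waze-ccp/alerts.json"  # hypothetical endpoint

def fetch_crash_alerts(url=FEED_URL):
    """Fetch a hypothetical Waze-style alerts feed and keep only crash reports."""
    feed = requests.get(url, timeout=30).json()
    crashes = []
    for alert in feed.get("alerts", []):
        if alert.get("type") == "ACCIDENT":  # assumed alert type value
            crashes.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    "coordinates": [alert["location"]["x"], alert["location"]["y"]],
                },
                "properties": {
                    "reported": alert.get("pubMillis"),
                    "street": alert.get("street"),
                },
            })
    return {"type": "FeatureCollection", "features": crashes}

if __name__ == "__main__":
    layer = fetch_crash_alerts()
    # Write a GeoJSON file that a GIS platform such as ArcGIS can ingest as a layer.
    with open("crash_alerts.geojson", "w") as fh:
        json.dump(layer, fh)
```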

Data Literacy – What is it and how can we make it happen?


Introduction by Mark Frank, Johanna Walker, Judie Attard, Alan Tygel to the Special Issue on Data Literacy of The Journal of Community Informatics: “With the advent of the Internet and particularly Open Data, data literacy (the ability of non-specialists to make use of data) is rapidly becoming an essential life skill comparable to other types of literacy. However, it is still poorly defined and there is much to learn about how best to increase data literacy both amongst children and adults. This issue addresses both the definition of data literacy and current efforts on increasing and sustaining it. A feature of the issue is the range of contributors. While there are important contributions from the UK, Canada and other Western countries, these are complemented by several papers from the Global South where there is an emphasis on grounding data literacy in context and relating it to the issues and concerns of communities.” (Full Text: PDF)

See also:

Creating an Understanding of Data Literacy for a Data-driven Society by Annika Wolff, Daniel Gooch, Jose J. Cavero Montaner, Umar Rashid, Gerd Kortuem

Data Literacy defined pro populo: To read this article, please provide a little information by David Crusoe

Data literacy conceptions, community capabilities by Paul Matthews

Urban Data in the primary classroom: bringing data literacy to the UK curriculum by Annika Wolff, Jose J Cavero Montaner, Gerd Kortuem

Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach by Alan Freihof Tygel, Rosana Kirsch

DataBasic: Design Principles, Tools and Activities for Data Literacy Learners by Catherine D’Ignazio, Rahul Bhargava

Perceptions of ICT use in rural Brazil: Factors that impact appropriation among marginalized communities by Paola Prado, J. Alejandro Tirado-Alcaraz, Mauro Araújo Câmara

Graphical Perception of Value Distributions: An Evaluation of Non-Expert Viewers’ Data Literacy by Arkaitz Zubiaga, Brian Mac Namee

How to Hold Algorithms Accountable


Nicholas Diakopoulos and Sorelle Friedler at MIT Technology Review: “Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms….

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, and making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained….(More)”
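One common way to make such an evaluation concrete is to compare the rate of favorable decisions across groups and report a disparate-impact ratio (1.0 means parity). The sketch below is illustrative only; it assumes decisions are available as simple (group, outcome) pairs rather than reflecting any particular system's audit format.

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group rate of favorable decisions.

    decisions: iterable of (group, outcome) pairs, where outcome 1 = favorable.
    """
    totals, positives = Counter(), Counter()
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = selection_rates(sample)
    print(rates)
    print(disparate_impact_ratio(rates))
```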

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s Hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries about protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates describe this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

Social Media’s Globe-Shaking Power


…Over much of the last decade, we have seen progressive social movements powered by the web spring up across the world. There was the Green Revolution in Iran and the Arab Spring in the Middle East and North Africa. In the United States, we saw the Occupy Wall Street movement and the #BlackLivesMatter protests.

Social networks also played a role in electoral politics — first in the ultimately unsuccessful candidacy of Howard Dean in 2003, and then in the election of the first African-American president in 2008.

Yet now those movements look like the prelude to a wider, tech-powered crack up in the global order. In Britain this year, organizing on Facebook played a major role in the once-unthinkable push to get the country to leave the European Union. In the Philippines, Rodrigo Duterte, a firebrand mayor who was vastly outspent by opponents, managed to marshal a huge army of online supporters to help him win the presidency.

The Islamic State has used social networks to recruit jihadists from around the world to fight in Iraq and Syria, as well as to inspire terrorist attacks overseas.

And in the United States, both Bernie Sanders, a socialist who ran for president as a Democrat, and Mr. Trump, who was once reviled by most members of the party he now leads, relied on online movements to shatter the political status quo.

Why is this all happening now? Clay Shirky, a professor at New York University who has studied the effects of social networks, suggested a few reasons.

One is the ubiquity of Facebook, which has reached a truly epic scale. Last month the company reported that about 1.8 billion people now log on to the service every month. Because social networks feed off the various permutations of interactions among people, they become strikingly more powerful as they grow. With about a quarter of the world’s population now on Facebook, the possibilities are staggering.

“When the technology gets boring, that’s when the crazy social effects get interesting,” Mr. Shirky said.

One of those social effects is what Mr. Shirky calls the “shifting of the Overton Window,” a term coined by the researcher Joseph P. Overton to describe the range of subjects that the mainstream media deems publicly acceptable to discuss.

From about the early 1980s until the very recent past, it was usually considered unwise for politicians to court views deemed by most of society to be out of the mainstream, things like overt calls to racial bias (there were exceptions, of course, like the Willie Horton ad). But the internet shifted that window.

“White ethno nationalism was kept at bay because of pluralistic ignorance,” Mr. Shirky said. “Every person who was sitting in their basement yelling at the TV about immigrants or was willing to say white Christians were more American than other kinds of Americans — they didn’t know how many others shared their views.”

Thanks to the internet, now each person with once-maligned views can see that he’s not alone. And when these people find one another, they can do things — create memes, publications and entire online worlds that bolster their worldview, and then break into the mainstream. The groups also become ready targets for political figures like Mr. Trump, who recognize their energy and enthusiasm and tap into it for real-world victories.

Mr. Shirky notes that the Overton Window isn’t just shifting on the right. We see it happening on the left, too. Mr. Sanders campaigned on an anti-Wall Street platform that would have been unthinkable for a Democrat just a decade ago….(More)”

Using Cloud Computing to Untangle How Trees Can Cool Cities


 at CoolGreenScience: “We’ve all used Google Earth — to explore remote destinations around the world or to check out our house from above. But Google Earth Engine is a valuable tool for conservationists and geographers like myself that allows us to tackle some tricky remote-sensing analysis.

After having completed a few smaller spatial science projects in the cloud (mostly on the Google Earth Engine, or GEE, platform), I decided to give it a real workout — by analyzing more than 300 gigabytes of data across 28 United States and seven Chinese cities.

This project was part of a larger study looking at trees in cities. Why trees? Trees provide numerous valuable ecosystem services to communities: benefits associated with air and water quality, energy conservation, cooler air temperatures, and many other environmental and social benefits.

It’s easy to understand the benefits of trees: stand outside on a hot sunny day and you immediately feel cooler in the shade of a tree. But what’s not as obvious as the cooling effect is trees’ ability to remove particulate matter (PM2.5) floating around in the air we breathe. And this is important, as this type of air pollution is implicated in the deaths of ~3 million people per year.

The Conservancy researched the relationship between city air quality and the cooling effects of trees. Results of this study will inform the Global Cities Program initiative on Planting Healthy Air for cities ­­— the objective being to show how much trees can clean and cool, how much it will cost, and so forth….(More)”
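For readers curious what a small Earth Engine analysis of this kind looks like, here is a hedged sketch using the GEE Python API: it estimates mean summer greenness (NDVI) and a rough surface temperature for a buffer around one city. The Landsat collection ID, band names, and scaling are assumptions based on the Collection 1 surface-reflectance asset and may differ from what the Conservancy team actually used.

```python
import ee  # Google Earth Engine Python API

ee.Initialize()  # assumes prior authentication (ee.Authenticate())

# A ~20 km buffer around central Washington, DC (illustrative study area).
region = ee.Geometry.Point([-77.0369, 38.9072]).buffer(20000)

# Cloud-screened summer Landsat 8 surface-reflectance composite
# (collection ID and band names assumed from Collection 1).
composite = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
             .filterBounds(region)
             .filterDate('2016-06-01', '2016-09-01')
             .filter(ee.Filter.lt('CLOUD_COVER', 20))
             .median())

# NDVI as a simple proxy for vegetation / tree canopy.
ndvi = composite.normalizedDifference(['B5', 'B4']).rename('NDVI')

# Band B10 is brightness temperature in Kelvin * 10 in this collection,
# used here as a crude stand-in for land surface temperature (degrees C).
temp_c = composite.select('B10').multiply(0.1).subtract(273.15).rename('temp_c')

# Average both layers over the study area; the heavy lifting runs in the cloud.
stats = ndvi.addBands(temp_c).reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=region,
    scale=30,
    maxPixels=1e9,
)
print(stats.getInfo())
```

Scaling this kind of reduction from one buffer to dozens of city geometries is largely a matter of looping the same request, which is exactly the sort of workload Earth Engine parallelizes on Google's infrastructure rather than on a local machine.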

Is Open Data the Death of FOIA?


Beth Noveck at the Yale Law Journal: “For fifty years, the Freedom of Information Act (FOIA) has been the platinum standard for open government in the United States. The statute is considered the legal bedrock of the public’s right to know about the workings of our government. More than one hundred countries and all fifty states have enacted their own freedom of information laws. At the same time, FOIA’s many limitations have also become evident: a cumbersome process, delays in responses, and redactions that frustrate journalists and other information seekers. Politically motivated nuisance requests bedevil government agencies. With over 700,000 FOIA requests filed every year, the federal government faces the costs of a mounting backlog.

In recent years, however, an entirely different approach to government transparency in line with the era of big data has emerged: open government data. Open government data—generally shortened to open data—has many definitions but is generally considered to be publicly available information that can be universally and readily accessed, used, and redistributed free of charge in digital form. Open data is not limited to statistics, but also includes text such as the United States Federal Register, the daily newspaper of government, which was released as open data in bulk form in 2010.

To understand how significant the open data movement is for FOIA, this Essay discusses the impact of open data on the institutions and functions of government and the ways open data contrasts markedly with FOIA. Open data emphasizes the proactive publication of whole classes of information. Open data includes data about the workings of government but also data collected by the government about the economy and society posted online in a centralized repository for use by the wider public, including academic users seeking information as the basis for original research and commercial users looking to create new products and services. For example, Pixar used open data from the United States Geological Survey to create more realistic detail in scenes from its movie The Good Dinosaur.

By contrast, FOIA promotes ex post publication of information created by the government, especially about its own workings, in response to specific demands by individual requestors. I argue that open data’s more systematic and collaborative approach represents a radical and welcome departure from FOIA because open data concentrates on information as a means to solve problems to the end of improving government effectiveness. Open data is legitimated by the improved outcomes it yields and grounded in a theory of government effectiveness and, as a result, eschews the adversarial and ad hoc FOIA approach. Ultimately, however, each tactic offers important complementary benefits. The proactive information disclosure regime of open data is strengthened by FOIA’s rights of legal enforcement. Together, they stand to become the hallmark of government transparency in the fifty years ahead….(More)”.

Comparing resistance to open data performance measurement


Paper by Gregory Michener and Otavio Ritter in Public Administration: “Much is known about governmental resistance to disclosure laws, less so about multi-stakeholder resistance to open data. This study compares open data initiatives within the primary and secondary school systems of Brazil and the UK, focusing on stakeholder resistance and corresponding policy solutions. The analytical framework is based on the ‘Three-Ps’ of open data resistance to performance metrics, corresponding to professional, political, and privacy-related concerns. Evidence shows that resistance is highly nuanced, as stakeholders alternately serve as both principals and agents. School administrators, for example, are simultaneously principals to service providers and teachers, and at once agents to parents and politicians. Relying on a different systems comparison, in-depth interviews, and newspaper content analyses, we find that similar stakeholders across countries demonstrate strikingly divergent levels of resistance. In overcoming stakeholder resistance – across socioeconomic divides – context-conscientious ‘data-informed’ evaluations may promote greater acceptance than narrowly ‘data-driven’ performance measurements…(More)”