Data to the Rescue: Smart Ways of Doing Good


Nicole Wallace in the Chronicle of Philanthropy: “For a long time, data served one purpose in the nonprofit world: measuring program results. But a growing number of charities are rejecting the idea that data equals evaluation and only evaluation.

Of course, many nonprofits struggle even to build the simplest data system. They have too little money, too few analysts, and convoluted data pipelines. Yet some cutting-edge organizations are putting data to work in new and exciting ways that drive their missions. A prime example: The Polaris Project is identifying criminal networks in the human-trafficking underworld and devising strategies to fight back by analyzing its data storehouse along with public information.

Other charities dive deep into their data to improve services, make smarter decisions, and identify measures that predict success. Some have such an abundance of information that they’re even pruning their collection efforts to allow for more sophisticated analysis.

The groups highlighted here are among the best nationally. In their work, we get a sneak peek at how the data revolution might one day achieve its promise.

House Calls: Living Goods

Living Goods launched in eastern Africa in 2007 with an innovative plan to tackle health issues in poor families and reduce deaths among children. The charity provides loans, training, and inventory to locals in Uganda and Kenya — mostly women — to start businesses selling vitamins, medicine, and other health products to friends and neighbors.

Founder Chuck Slaughter copied the Avon model and its army of housewives-turned-sales agents. But in recent years, Living Goods has embraced a 21st-century data system that makes its entrepreneurs better health practitioners. Armed with smartphones, they confidently diagnose and treat major illnesses. At the same time, they collect information that helps the charity track health across communities and plot strategy….

Unraveling Webs of Wickedness: Polaris Project

Calls and texts to the Polaris Project’s national human-trafficking hotline are often heartbreaking, terrifying, or both.

Relatives fear that something terrible has happened to a missing loved one. Trafficking survivors suffering from their ordeal need support. The most harrowing calls are from victims in danger and pleading for help.

Last year more than 5,500 potential cases of exploitation for labor or commercial sex were reported to the hotline. Since it got its start in 2007, the total is more than 24,000.

As it helps victims and survivors get the assistance they need, the Polaris Project, a Washington nonprofit, is turning those phone calls and texts into an enormous storehouse of information about the shadowy world of trafficking. By analyzing this data and connecting it with public sources, the nonprofit is drawing detailed pictures of how trafficking networks operate. That knowledge, in turn, shapes the group’s prevention efforts, its policy work, and even law-enforcement investigations….

Too Much Information: Year Up

Year Up has a problem that many nonprofits can’t begin to imagine: It collects too much data about its program. “Predictive analytics really start to stink it up when you put too much in,” says Garrett Yursza Warfield, the group’s director of evaluation.

What Mr. Warfield describes as the “everything and the kitchen sink” problem started soon after Year Up began gathering data. The group, which fights poverty by helping low-income young adults land entry-level professional jobs, first got serious about measuring its work nearly a decade ago. Though challenged at first to round up even basic information, the group over time began tracking virtually everything it could: the percentage of young people who finish the program, their satisfaction, their paths after graduation through college or work, and much more.

Now the nonprofit is diving deeper into its data to figure out which measures can predict whether a young person is likely to succeed in the program. And halfway through this review, it’s already identified and eliminated measures that it’s found matter little. A small example: Surveys of participants early in the program asked them to rate their proficiency at various office skills. Those self-evaluations, Mr. Warfield’s team concluded, were meaningless: How can novice professionals accurately judge their Excel spreadsheet skills until they’re out in the working world?…
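What Year Up describes is, in data terms, feature selection: rank candidate measures by how well they predict an outcome (say, program completion) and prune the ones that carry little signal. The sketch below illustrates the idea with entirely hypothetical data, measure names, and thresholds; it is not Year Up's actual model or data.

```python
import numpy as np

# Hypothetical participant data: three candidate measures per person.
rng = np.random.default_rng(0)
n = 500
measures = {
    "attendance_rate":   rng.uniform(0.5, 1.0, n),
    "self_rated_excel":  rng.uniform(1.0, 5.0, n),   # self-evaluation; pure noise here
    "mentor_engagement": rng.uniform(0.0, 10.0, n),
}

# Hypothetical outcome: completion depends on attendance and mentoring,
# but not on self-rated skills, mirroring the kind of result described above.
logit = 4 * measures["attendance_rate"] + 0.3 * measures["mentor_engagement"] - 4
completed = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

# Rank each measure by its correlation with completion; flag weak ones.
THRESHOLD = 0.1  # arbitrary cutoff, for illustration only
for name, values in measures.items():
    r = np.corrcoef(values, completed.astype(float))[0, 1]
    verdict = "keep" if abs(r) >= THRESHOLD else "candidate to prune"
    print(f"{name:18s} r = {r:+.2f}  -> {verdict}")
```

A real evaluation would use richer models and validate on held-out data, but the principle is the same: measures that do not improve prediction are candidates for elimination.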

On the Wild Side: Wilderness Society

…Without room to roam, wild animals and plants breed among themselves and risk losing genetic diversity. They also fall prey to disease. And that’s in the best of times. As wildlife adapt to climate change, the chance to migrate becomes vital to survival itself.

National parks and other large protected areas are part of the answer, but they’re not enough if wildlife can’t move between them, says Travis Belote, lead ecologist at the Wilderness Society.

“Nature needs to be able to shuffle around,” he says.

Enter the organization’s Wildness Index. It’s a national map that shows the parts of the country most touched by human activity as well as wilderness areas best suited for wildlife. Mr. Belote and his colleagues created the index by combining data on land use, population density, road location and size, water flows, and many other factors. It’s an important tool to help the nonprofit prioritize the locations it fights to protect.
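The mechanics of such an index are easy to illustrate. In the toy sketch below, the layers, values, and weights are invented for illustration and are not the Wilderness Society's actual data or methodology: each input layer is normalized to a 0-1 scale, human-pressure layers are inverted so that higher means wilder, and a weighted sum yields a wildness score per map cell.

```python
import numpy as np

def normalize(layer):
    """Rescale a layer to the 0-1 range."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo)

# Hypothetical 4x4 grid-cell layers; real inputs would be rasters of
# land use, population density, road location and size, water flows, etc.
population = np.array([[0, 2, 50, 80],
                       [0, 1, 30, 90],
                       [0, 0, 10, 60],
                       [0, 0,  5, 40]], dtype=float)
road_density = np.array([[0.0, 0.1, 0.8, 1.0],
                         [0.0, 0.1, 0.6, 1.0],
                         [0.0, 0.0, 0.3, 0.9],
                         [0.0, 0.0, 0.2, 0.7]])

# Human-pressure layers are inverted so that high values mean "wilder";
# the weights are purely illustrative.
weights = {"population": 0.6, "roads": 0.4}
wildness = (weights["population"] * (1 - normalize(population))
            + weights["roads"]    * (1 - normalize(road_density)))

print(np.round(wildness, 2))  # 1.0 = wildest cells, 0.0 = most human-modified
```

The resulting grid is exactly the kind of map that can be overlaid with wildlife-corridor data to decide where protection matters most.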

In Idaho, for example, the nonprofit compares the index with information about known wildlife corridors and federal lands that are unprotected but meet the criteria for conservation designation. The project’s goal: determine which areas in the High Divide — a wild stretch that connects Greater Yellowstone with other protected areas — the charity should advocate to legally protect….(More)”

Selected Readings on Data and Humanitarian Response


By Prianka Srinivasan and Stefaan G. Verhulst *

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data and humanitarian response was originally published in 2016.

Data, when used well and in a trusted manner, allows humanitarian organizations to innovate in how they respond to emergency events: better coordination of post-disaster relief efforts, the ability to harness local knowledge for more targeted relief strategies, and tools to predict and monitor disasters in real time. Consequently, in recent years both multinational groups and community-based advocates have begun to integrate data collection and evaluation strategies into their humanitarian operations in order to respond to emergencies better and more quickly. However, this movement also poses challenges. Compared to the private sector, humanitarian organizations are often less equipped to analyze and manage big data, which poses risks to the security of victims’ data. Furthermore, the complex power dynamics within humanitarian spaces may be exacerbated by the introduction of new technologies and big data collection mechanisms. Below we share:

  • Selected Reading List (summaries and hyperlinks)
  • Annotated Selected Reading List
  • Additional Readings

Selected Reading List (summaries in alphabetical order)

Data and Humanitarian Response

Risks of Using Big Data in Humanitarian Contexts

Annotated Selected Reading List (in alphabetical order)

Karlsrud, John. “Peacekeeping 4.0: Harnessing the Potential of Big Data, Social Media, and Cyber Technologies.” Cyberspace and International Relations, 2013. http://bit.ly/235Qb3e

  • This chapter from the book “Cyberspace and International Relations” suggests that advances in big data give humanitarian organizations unprecedented opportunities to prevent and mitigate natural disasters and humanitarian crises. However, the sheer amount of unstructured data necessitates effective “data mining” strategies if multinational organizations are to make the most of this data.
  • By profiling civil-society organizations that use big data in their peacekeeping efforts, Karlsrud suggests that these community-focused initiatives are leading the movement toward analyzing and using big data in countries vulnerable to crisis.
  • The chapter concludes by offering ten recommendations to UN peacekeeping forces to best realize the potential of big data and new technology in supporting their operations.

Mancini, Francesco. “New Technology and the Prevention of Violence and Conflict.” International Peace Institute, 2013. http://bit.ly/1ltLfNV

  • This report from the International Peace Institute looks at five case studies to assess how information and communications technologies (ICTs) can help prevent humanitarian conflicts and violence. The findings suggest that context has a significant impact on the ability of these ICTs to prevent conflict, and that any strategy must take into account the specific contingencies of the region to be successful.
  • The report suggests seven lessons gleaned from the five case studies, among them:
    • New technologies are just one of many tools to combat violence. Consequently, organizations must investigate complementary strategies to prevent conflicts, not simply rely on ICTs.
    • Not every community or social group will have the same relationship to technology, and their ability to adopt new technologies are similarly influenced by their context. Therefore, a detailed needs assessment must take place before any new technologies are implemented.
    • New technologies may be co-opted by violent groups seeking to maintain conflict in the region. Consequently, humanitarian groups must be sensitive to existing political actors and be aware of possible negative consequences these new technologies may spark.
    • Local input is integral to supporting conflict prevention measures, and collaboration and awareness-raising with communities are needed to ensure new technologies are sustainable and effective.
    • Information shared within civil society has great potential for developing early-warning systems. This horizontal distribution of information can also help communities hold local leaders accountable.

Meier, Patrick. “Digital Humanitarians: How Big Data Is Changing the Face of Humanitarian Response.” CRC Press, 2015. http://amzn.to/1RQ4ozc

  • This book traces the emergence of “Digital Humanitarians”—people who harness new digital tools and technologies to support humanitarian action. Meier suggests that this has created a “nervous system” to connect people from disparate parts of the world, revolutionizing the way we respond to humanitarian crises.
  • Meier argues that such technology is reconfiguring the structure of the humanitarian space: victims are no longer simply passive recipients of aid but can contribute alongside other global citizens. This, he suggests, in turn makes us more humane and engaged people.

Robertson, Andrew, and Steve Olson. “Using Data Sharing to Improve Coordination in Peacebuilding.” United States Institute of Peace, 2012. http://bit.ly/235QuLm

  • This report functions as an overview of a roundtable workshop on Technology, Science and Peace Building held at the United States Institute of Peace. The workshop aimed to investigate how data-sharing techniques can be developed for use in peace building or conflict management.
  • Four main themes emerged from discussions during the workshop:
    • “Data sharing requires working across a technology-culture divide”—Data sharing needs the foundation of a strong relationship, which can depend on sociocultural, rather than technological, factors.
    • “Information sharing requires building and maintaining trust”—These relationships are often built on trust, which can include both technological and social perspectives.
    • “Information sharing requires linking civilian-military policy discussions to technology”—Even when sophisticated data-sharing technologies exist, continuous engagement between different stakeholders is necessary. Therefore, procedures used to maintain civil-military engagement should be broadened to include technology.
    • “Collaboration software needs to be aligned with user needs”—technology providers need to keep in mind the needs of their users, in this case peacebuilders, to ensure sustainability.

United Nations Independent Expert Advisory Group on a Data Revolution for Sustainable Development. “A World That Counts: Mobilising the Data Revolution for Sustainable Development.” 2014. https://bit.ly/2Cb3lXq

  • This report focuses on the potential benefits and risks data holds for sustainable development. Included in this is a strategic framework for using and managing data for humanitarian purposes. It describes a need for a multinational consensus to be developed to ensure data is shared effectively and efficiently.
  • It suggests that “people who are counted”—i.e., those who are included in data collection processes—have better development outcomes and a better chance for humanitarian response in emergency or conflict situations.

Whipkey, Katie, and Andrej Verity. “Guidance for Incorporating Big Data into Humanitarian Operations.” Digital Humanitarian Network, 2015. http://bit.ly/1Y2BMkQ

  • This report, produced by the Digital Humanitarian Network, provides an overview of big data and how humanitarian organizations can integrate it into their response. It primarily functions as a guide, providing concise outlines of what big data is and how it can benefit humanitarian groups.
  • The report puts forward four main benefits acquired through the use of big data by humanitarian organizations: 1) the ability to leverage real-time information; 2) the ability to make more informed decisions; 3) the ability to learn new insights; 4) the ability for organizations to be more prepared.
  • It goes on to assess the challenges big data poses for humanitarian organizations, among them: 1) geography and unequal access to technology across regions; 2) the potential for user error when processing data; 3) limited technology; 4) questionable validity of data; 5) underdeveloped policies and ethics relating to data management; and 6) limitations relating to staff knowledge.

Risks of Using Big Data in Humanitarian Contexts

Crawford, Kate, and Megan Finn. “The Limits of Crisis Data: Analytical and Ethical Challenges of Using Social and Mobile Data to Understand Disasters.” GeoJournal 80.4 (2015). http://bit.ly/1X0F7AI

  • Crawford & Finn present a critical analysis of the use of big data in disaster management, taking a more skeptical tone to the data revolution facing humanitarian response.
  • They argue that though social and mobile data analysis can yield important insights and tools in crisis events, it also presents a number of limitations which can lead to oversights being made by researchers or humanitarian response teams.
  • Crawford & Finn explore the ethical concerns the use of big data in disaster events introduces, including issues of power, privacy, and consent.
  • The paper concludes by recommending that critical data studies, such as those presented in the paper, be integrated into crisis event research in order to analyze some of the assumptions which underlie mobile and social data.

Jacobsen, Katja Lindskov. “Making Design Safe for Citizens: A Hidden History of Humanitarian Experimentation.” Citizenship Studies 14.1 (2010): 89-103. http://bit.ly/1YaRTwG

  • This paper explores the phenomenon of “humanitarian experimentation,” in which victims of disaster or conflict become the subjects of experiments that test technologies before they are deployed in wider civilian populations.
  • By analyzing the use of iris recognition technology during the repatriation of Afghan refugees from Pakistan between 2002 and 2007, Jacobsen suggests that this “humanitarian experimentation” compromised the security of already vulnerable refugees in order to better deliver biometric products to the rest of the world.

Responsible Data Forum. “Responsible Data Reflection Stories: An Overview.” http://bit.ly/1Rszrz1

  • This piece from the Responsible Data Forum is primarily a compilation of “war stories” recounting some of the challenges of using big data for social good. Drawing on these crowdsourced cases, the Forum presents an overview with key recommendations for overcoming challenges associated with big data in humanitarian organizations.
  • It finds that most of these challenges occur when organizations are ill-equipped to manage data and new technologies, or are unaware of how different groups interact in digital spaces in different ways.

Sandvik, Kristin Bergtora. “The Humanitarian Cyberspace: Shrinking Space or an Expanding Frontier?” Third World Quarterly 37.1 (2016): 17-32. http://bit.ly/1PIiACK

  • This paper analyzes the shift toward technology-driven humanitarian work, which increasingly takes place online in cyberspace and is reshaping the definition and application of aid. This shift has occurred alongside what many suggest is a shrinking of the humanitarian space.
  • Sandvik offers three interpretations of this phenomenon:
    • First, traditional threats remain in the humanitarian space, which are both modified and reinforced by technology.
    • Second, new threats are introduced by the increasing use of technology in humanitarianism, and consequently the humanitarian space may be broadening, not shrinking.
    • Finally, if the shrinking humanitarian space theory holds, cyberspace offers one example of this, where the increasing use of digital technology to manage disasters leads to a contraction of space through the proliferation of remote services.

Additional Readings on Data and Humanitarian Response

* Thanks to: Kristin B. Sandvik; Zara Rahman; Jennifer Schulte; Sean McDonald; Paul Currion; Dinorah Cantú-Pedraza and the Responsible Data listserv for valuable input.

Elements of a New Ethical Framework for Big Data Research


The Berkman Center is pleased to announce the publication of a new paper from the Privacy Tools for Sharing Research Data project team. In this paper, Effy Vayena, Urs Gasser, Alexandra Wood, and David O’Brien from the Berkman Center, with Micah Altman from MIT Libraries, outline elements of a new ethical framework for big data research.

Emerging large-scale data sources hold tremendous potential for new scientific research into human biology, behaviors, and relationships. At the same time, big data research presents privacy and ethical challenges that the current regulatory framework is ill-suited to address. In light of the immense value of large-scale research data, the central question moving forward is not whether such data should be made available for research, but rather how the benefits can be captured in a way that respects fundamental principles of ethics and privacy.

The authors argue that a framework with the following elements would support big data utilization and help harness the value of big data in a sustainable and trust-building manner:

  • Oversight should aim to provide universal coverage of human subjects research, regardless of funding source, across all stages of the information lifecycle.

  • New definitions and standards should be developed based on a modern understanding of privacy science and the expectations of research subjects.

  • Researchers and review boards should be encouraged to incorporate systematic risk-benefit assessments and new procedural and technological solutions from the wide range of interventions that are available.

  • Oversight mechanisms and the safeguards implemented should be tailored to the intended uses, benefits, threats, harms, and vulnerabilities associated with a specific research activity.

Development of a new ethical framework with these elements should be the product of a dynamic multistakeholder process that is designed to capture the latest scientific understanding of privacy, analytical methods, available safeguards, community and social norms, and best practices for research ethics as they evolve over time.

The full paper is available for download through the Washington and Lee Law Review Online as part of a collection of papers featured at the Future of Privacy Forum workshop Beyond IRBs: Designing Ethical Review Processes for Big Data Research held on December 10, 2015, in Washington, DC….(More)”

The Function of—and Need for—Institutional Review Boards


Review of The Censor’s Hand: The Misregulation of Human-Subject Research (Carl E. Schneider, The MIT Press): “Scientific research can be a laborious and frustrating process even before it gets started—especially when it involves living human subjects. Universities and other research institutions maintain Institutional Review Boards that scrutinize research proposals and their methodologies, consent and privacy procedures, and so on. Similarly intensive reviews are required when the intention is to use human tissue—if, say, tissue from diagnostic cancer biopsies could potentially be used to gauge the prevalence of some other illness across the population. These procedures can generate absurdities. A doctor who wanted to know which television characters children recognized, for example, was advised to seek ethics committee approval, and told that he needed to do a pilot study as a precursor.

Today’s IRB system is the response to a historic problem: academic researchers’ tendency to behave abominably when left unmonitored. Nazi medical and pseudomedical experiments provide an obvious and well-known reference, but such horrors are not found only in totalitarian regimes. The Tuskegee syphilis study, for example, deliberately left black men untreated over the course of decades so researchers could study the natural course of the disease. On a much smaller but equally disturbing scale is the case of Dan Markingson, a 26-year-old University of Michigan graduate. Suffering from psychotic illness, Markingson was coercively enrolled in a study of antipsychotics to which he could not consent, and concerns about his deteriorating condition were ignored. In 2004, he was found dead, having almost decapitated himself with a box cutter.

Many thoughtful ethicists are aware of the imperfections of IRBs. They have worried publicly for some time that the IRB system, or parts of it, may claim an authority with which even many bioethicists are uncomfortable, and hinder science for no particularly good reason. Does the system need re-tuning, a total re-build, or something even more drastic?

When it comes to IRBs, Carl E. Schneider, a professor of law and internal medicine at the University of Michigan, belongs to the abolitionist camp. In The Censor’s Hand: The Misregulation of Human-Subject Research, he presents the case against the IRB system plainly. It is a case that rests on seven related charges.

IRBs, Schneider posits, cannot be shown to do good, with regulators able to produce “no direct evidence that IRBs prevented harm”; that an IRB at least went through the motions of reviewing the trial in which Markingson died might be cited as evidence of this. On top of that, he claims, IRBs sometimes cause harm, at least insofar as they slow down medical innovation. They are built to err on the side of caution, since “research on humans” can cover a vast range of activities and disciplines, and they struggle to take this range into proper account. Correspondingly, they “lack a legible and convincing ethics”; the autonomy of IRBs means that they come to different decisions on identical cases. (In one case, an IRB thought that providing supplemental vitamin A in a study was so dangerous that it should not be allowed; another thought that withholding it in the same study was so dangerous that it should not be allowed.) IRBs have unrealistically high expectations of their members, who are often fairly ad hoc groupings with no obvious relevant expertise. They overemphasize informed consent, with the unintended consequence that cramming every possible eventuality into a consent form makes it utterly incomprehensible. Finally, Schneider argues, IRBs corrode free expression by restricting what researchers can do and how they can do it….(More)”

Revolutionizing Innovation: Users, Communities, and Open Innovation


Book edited by Dietmar Harhoff and Karim R. Lakhani: “The last two decades have witnessed an extraordinary growth of new models of managing and organizing the innovation process that emphasizes users over producers. Large parts of the knowledge economy now routinely rely on users, communities, and open innovation approaches to solve important technological and organizational problems. This view of innovation, pioneered by the economist Eric von Hippel, counters the dominant paradigm, which cast the profit-seeking incentives of firms as the main driver of technical change. In a series of influential writings, von Hippel and colleagues found empirical evidence that flatly contradicted the producer-centered model of innovation. Since then, the study of user-driven innovation has continued and expanded, with further empirical exploration of a distributed model of innovation that includes communities and platforms in a variety of contexts and with the development of theory to explain the economic underpinnings of this still emerging paradigm. This volume provides a comprehensive and multidisciplinary view of the field of user and open innovation, reflecting advances in the field over the last several decades.

The contributors—including many colleagues of Eric von Hippel—offer both theoretical and empirical perspectives from such diverse fields as economics, the history of science and technology, law, management, and policy. The empirical contexts for their studies range from household goods to financial services. After discussing the fundamentals of user innovation, the contributors cover communities and innovation; legal aspects of user and community innovation; new roles for user innovators; user interactions with firms; and user innovation in practice, describing experiments, toolkits, and crowdsourcing, and crowdfunding…(More)”

Research and Evaluation of Participatory Budgeting in the U.S. and Canada


Public Agenda: “Communities across the country are experimenting with participatory budgeting (PB), a democratic process in which residents decide together how to spend part of a public budget. Learning more about how these community efforts are implemented and with what results will help improve and expand successful forms of participatory budgeting across the U.S. and Canada.

Public Agenda is supporting local evaluation efforts and sharing research on participatory budgeting. Specifically, we are:

  • Building a community of practice among PB evaluators and researchers.
  • Working with evaluators and researchers to make data and research findings comparable across communities that use participatory budgeting.
  • Developing key metrics and research tools to help evaluate participatory budgeting (download these documents here).
  • Publishing a “Year in Participatory Budgeting Research” review based on data, findings, experiences and challenges from sites in the U.S. and Canada.
  • Conducting original, independent research on elected officials’ views of and experiences with participatory budgeting.
  • Convening the North American Participatory Budgeting Research Board.

…Below, you will find evaluation tools and resources we developed in close collaboration with PB evaluators and researchers in the U.S. and Canada. We also included the local evaluation reports from communities around the U.S. and Canada using PB in budget decisions.

To be the first to hear about new PB resources and news, join our email list. We also invite you to email us to join our listserv and participate in discussion about evaluation and research of participatory budgeting in the U.S. and Canada.

New to PB and looking to introduce it to your community? You should start here instead! Once your PB effort is under way, come back to this page for tools to evaluate how you’re doing.

15 Key Metrics for Evaluating Participatory Budgeting: A Toolkit for Evaluators and Implementers

Evaluation is a critical component of any PB effort. Systematic and formal evaluation can help people who introduce, implement, participate in or otherwise have a stake in PB understand how participatory budgeting is growing, what its reach is, and how it’s impacting the community and beyond.

We developed the 15 Key Metrics for Evaluating Participatory Budgeting toolkit for people interested in evaluating PB efforts in their communities. It is meant to encourage and support common research goals across PB sites and to meaningfully inform local and national discussions about PB in the U.S. and Canada. It is the first iteration of such a toolkit and is especially focused on providing practical and realistic guidance for the evaluation of new and relatively new PB processes.

Anyone involved in public engagement or participation efforts other than participatory budgeting may also be interested in reviewing the toolkit for research and evaluation ideas.

The toolkit requires registration before you can download it.

The toolkit includes the following sections:

15 Key Metrics for Evaluating Participatory Budgeting: 15 indicators (“metrics”) that capture important elements of each community-based PB process and the PB movement in North America overall. Click here for a brief description of these metrics….(More)”

Design for policy and public services


The Centre for Public Impact: “Traditional approaches to policymaking have left policymakers and citizens looking for alternative solutions. Despite the best of intentions, the standard model of dispassionate expert analysis and subsequent implementation by a professional bureaucracy has, generally, led to siloed solutions and outcomes for citizens that fall short of what might be possible.

The discipline of design may well provide an answer to this problem by offering a collection of methods which allow civil servants to generate insights based on citizens’ needs, aspirations and behaviours. In doing so, it changes the view of citizens from anonymous entities to complex humans with equally complex needs. The potential of this new approach is already becoming clear – just ask the medical teams and patients at Norway’s Oslo University Hospital. Women with a heightened risk of developing breast cancer had previously been forced to wait up to three months before receiving an appointment for examination and diagnosis. A redesign reduced this wait to just three days.

In-depth user research identified the principal issues and pinpointed the lack of information about the referral process as a critical problem. The designers also interviewed 40 hospital employees of all levels to find out about their daily schedules and processes. Governments have always drawn inspiration from fields such as sociology and economics. Design methods are not (yet) part of the policymaking canon, but such examples help explain why this may be about to change….(More)”

Citizen Science and the Flint Water Crisis


The Wilson Center’s Commons Lab: “In April 2014, the city of Flint, Michigan, decided to switch its water supply source from the Detroit water system to a cheaper alternative, the Flint River. But in exchange for the cheaper price tag, Flint residents paid a greater price: one of the worst public health crises of the past decade.

Despite concerns from Flint citizens about the quality of the water, the Michigan Department of Environmental Quality repeatedly attributed the problem to the plumbing system. It was LeeAnne Walters, a 37-year-old mother of four, who, after noticing physical and behavioral changes in her children and herself, set off a chain of events that exposed the national scandal. Eventually, with the support of Dr. Marc Edwards, an environmental engineering professor at Virginia Tech (VT), Walters discovered lead concentration levels of 13,200 parts per billion in her water, 880 times the maximum concentration allowed by law and more than twice the level the Environmental Protection Agency considers to be hazardous waste.

Citizen science emerged as an important tool in combating the Flint water crisis. Alarmed by the government’s neglect and the health issues spreading across Flint, Edwards and Walters began the Flint Water Study, a collaboration between Flint residents and a research team from VT. Using citizen science, the VT researchers provided Flint residents with kits to sample and test their homes’ drinking water and then analyzed the results to unearth the truth behind Flint’s water quality.
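For context on what such an analysis involves: under the EPA’s Lead and Copper Rule, compliance is judged by the 90th percentile of lead levels across tap-water samples, measured against a 15 parts-per-billion action level. A minimal sketch of that calculation, using made-up sample values rather than the Flint Water Study’s actual data, looks like this:

```python
import numpy as np

ACTION_LEVEL_PPB = 15  # EPA Lead and Copper Rule action level for lead

# Hypothetical lead readings (ppb) from resident-collected sample kits.
samples = np.array([1, 2, 2, 3, 5, 6, 8, 12, 20, 45, 104, 397])

p90 = np.percentile(samples, 90)
print(f"90th percentile: {p90:.1f} ppb")
if p90 > ACTION_LEVEL_PPB:
    print("Exceeds the federal action level -> system-wide action required.")
```

The statistical machinery is simple; what the citizen-science model supplied was the hard part, a sufficiently large and honest set of samples.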

The citizen-driven project illustrates the capacity for nonprofessional scientists to use science in order to address problems that directly affect themselves and their community. While the VT team needed the Flint residents to provide water samples, the Flint residents in turn needed the VT team to conduct the analysis. In short, both parties achieved mutually beneficial results and the partnership helped expose the scandal. Surprisingly, the “traditional” problems associated with citizen science, including the inability to mobilize the local constituent base and the lack of collaboration between citizens and professional scientists, were not the obstacles in Flint….(More)”

Privacy as a Public Good


Joshua A.T. Fairfield & Christoph Engel in Duke Law Journal: “Privacy is commonly studied as a private good: my personal data is mine to protect and control, and yours is yours. This conception of privacy misses an important component of the policy problem. An individual who is careless with data exposes not only extensive information about herself, but about others as well. The negative externalities imposed on nonconsenting outsiders by such carelessness can be productively studied in terms of welfare economics. If all relevant individuals maximize private benefit, and expect all other relevant individuals to do the same, neoclassical economic theory predicts that society will achieve a suboptimal level of privacy. This prediction holds even if all individuals cherish privacy with the same intensity. As the theoretical literature would have it, the struggle for privacy is destined to become a tragedy.

But according to the experimental public-goods literature, there is hope. Like in real life, people in experiments cooperate in groups at rates well above those predicted by neoclassical theory. Groups can be aided in their struggle to produce public goods by institutions, such as communication, framing, or sanction. With these institutions, communities can manage public goods without heavy-handed government intervention. Legal scholarship has not fully engaged this problem in these terms. In this Article, we explain why privacy has aspects of a public good, and we draw lessons from both the theoretical and the empirical literature on public goods to inform the policy discourse on privacy…(More)”
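The public-goods logic the authors invoke can be made concrete with the standard linear public-goods game from the experimental literature. In the sketch below, the endowment, multiplier, and group size are illustrative parameters, not figures from the article: each player keeps whatever she does not contribute, while contributions are pooled, multiplied, and shared equally. Because the marginal per-capita return is below one, a self-interested player contributes nothing, even though universal contribution leaves everyone better off, which is the structure of the privacy tragedy described above.

```python
def payoff(my_contribution, others_total, endowment=10, multiplier=1.6, n_players=4):
    """Linear public-goods game: contributions are pooled, multiplied,
    and shared equally among all players; the remainder is kept privately."""
    pool = my_contribution + others_total
    return (endowment - my_contribution) + multiplier * pool / n_players

# Marginal per-capita return is multiplier / n_players = 0.4 < 1,
# so each unit contributed costs me 1 but returns only 0.4 to me:
print(payoff(0, others_total=30))   # free-ride while the other 3 cooperate: 22.0
print(payoff(10, others_total=30))  # cooperate along with them: 16.0
print(payoff(0, others_total=0))    # everyone free-rides: 10.0

# Mutual cooperation (16 each) beats mutual free-riding (10 each),
# yet free-riding is individually optimal: the tragedy.
```

Mapped to privacy, a “contribution” is careful handling of data that also exposes others, and the institutions the authors highlight (communication, framing, sanctions) are what experimentally raise contribution rates above the zero-contribution prediction.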

See also:

Privacy, Public Goods, and the Tragedy of the Trust Commons: A Response to Professors Fairfield and Engel, Dennis D. Hirsch

Response to Privacy as a Public Good, Priscilla M. Regan

Encryption and Evolving Technology: Implications for U.S. Law Enforcement Investigations


Kristin Finklea at the Congressional Research Service: “Because modern-day criminals are constantly developing new tools and techniques to facilitate their illicit activities, law enforcement is challenged with leveraging its tools and authorities to keep pace. For instance, interconnectivity and technological innovation have not only fostered international business and communication, they have also helped criminals carry out their operations. At times, these same technological advances have presented unique hurdles for law enforcement and officials charged with combating malicious actors.

Technology as a barrier for law enforcement is by no means a new issue in U.S. policing. In the 1990s, for instance, there were concerns about digital and wireless communications potentially hampering law enforcement in carrying out court-authorized surveillance. To help combat these challenges, Congress passed the Communications Assistance for Law Enforcement Act (CALEA; P.L. 103-414), which among other things, required telecommunications carriers to assist law enforcement in executing authorized electronic surveillance.

The technology boundary has received renewed attention as companies have implemented advanced security for their products—particularly their mobile devices. In some cases, enhanced encryption measures have been put in place with the result that companies such as Apple and Google cannot unlock devices for anyone under any circumstances, not even law enforcement.

Law enforcement has concerns over certain technological changes, and there are fears that officials may be unable to keep pace with technological advances and conduct electronic surveillance if they cannot access certain information. Originally, the “going dark” debate centered on law enforcement’s ability to intercept real-time communications. More recent technology changes have potentially impacted law enforcement’s ability to access not only communications but stored data as well….(More)”