Stefaan Verhulst
Sarah Telford and Stefaan G. Verhulst at Understanding Risk Forum: “….In creating the policy, OCHA partnered with the NYU Governance Lab (GovLab) and Leiden University to understand the policy and privacy landscape, best practices of partner organizations, and how to assess the data it manages in terms of potential harm to people.
We seek to share our findings with the UR community to get feedback and start a conversation around the risks of using certain types of data in humanitarian and development efforts and in understanding risk.
What is High-Risk Data?
High-risk data is generally understood as data that includes attributes about individuals. This is commonly referred to as PII or personally identifiable information. Data can also create risk when it identifies communities or demographics within a group and ties them to a place (e.g., women of a certain age group in a specific location). The risk comes when this type of data is collected and shared without proper authorization from the individual or the organization acting as the data steward; or when the data is being used for purposes other than what was initially stated during collection.
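To make the quasi-identifier risk concrete: even with names stripped, a combination of attributes such as gender, age band, and location can single people out when too few records share it. The sketch below — our illustration, not part of OCHA's policy; the field names and the k=5 threshold are assumptions — flags attribute combinations shared by fewer than k records, the basic check behind k-anonymity.

```python
from collections import Counter

def flag_small_groups(records, quasi_identifiers, k=5):
    """Flag quasi-identifier combinations shared by fewer than k records.

    Small groups (e.g., women of one age band in one district) can be
    re-identified even after names and IDs have been removed.
    """
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return {combo: count for combo, count in groups.items() if count < k}

# Illustrative records -- the schema is a made-up example.
survey = [
    {"gender": "F", "age_band": "18-25", "district": "North"},
    {"gender": "F", "age_band": "18-25", "district": "North"},
    {"gender": "M", "age_band": "26-35", "district": "South"},
]

for combo, count in flag_small_groups(survey, ["gender", "age_band", "district"]).items():
    print(f"Only {count} record(s) match {combo}: high re-identification risk")
```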
The potential harms of inappropriately collecting, storing or sharing personal data can affect individuals and communities that may feel exploited or vulnerable as a result of how data is used. This became apparent during the Ebola outbreak of 2014, when a number of data projects were implemented without appropriate risk management measures. One notable example was the collection and use of aggregated call data records (CDRs) to monitor the spread of Ebola, which not only had limited success in controlling the virus, but also compromised the personal information of those in Ebola-affected countries. (See Ebola: A Big Data Disaster).
A Data-Risk Framework
Regardless of an organization’s data requirements, it is useful to think through the potential risks and harms associated with its collection, storage and use. Together with the Harvard Humanitarian Initiative, we have developed a four-step data-risk process: assessment, data inventory, identification of risks and harms, and counter-measures.
- Assessment – The first step is to understand the context within which the data is being generated and shared. The key questions to ask include: What is the anticipated benefit of using the data? Who has access to the data? What constitutes actionable information for a potential perpetrator? What could trigger inappropriate use of the data?
- Data Inventory – The second step is to take inventory of the data and how it is being stored. Key questions include: Where is the data – is it stored locally or hosted by a third party? Where could the data be housed later? Who might gain access to the data in the future? How will we know – is data access being monitored?
- Risks and Harms – The next step is to identify potential ways in which risk might materialize. Thinking through various risk-producing scenarios will help prepare staff for incidents. Examples of risks include: your organization’s data being correlated with other data sources to expose individuals; your organization’s raw data being publicly released; and/or your organization’s data system being maliciously breached.
- Counter-Measures – The final step is to determine what measures would prevent risk from materializing. Methods and tools include developing data handling policies, implementing access controls to the data, and training staff on how to use data responsibly….(More)
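The four steps lend themselves to a structured checklist that an organization could walk through for each dataset. Below is a minimal sketch of one way to encode them — our own illustration, not a tool described by the authors; the step names and questions are taken from the excerpt above.

```python
# The four-step data-risk process, encoded as (step, questions) pairs.
DATA_RISK_PROCESS = [
    ("Assessment", [
        "What is the anticipated benefit of using the data?",
        "Who has access to the data?",
        "What constitutes actionable information for a potential perpetrator?",
        "What could trigger inappropriate use of the data?",
    ]),
    ("Data Inventory", [
        "Where is the data stored -- locally or with a third party?",
        "Where could the data be housed later?",
        "Who might gain access to the data in the future?",
        "Is data access being monitored?",
    ]),
    ("Risks and Harms", [
        "Could the data be correlated with other sources to expose individuals?",
        "What happens if the raw data is publicly released?",
        "What happens if the data system is maliciously breached?",
    ]),
    ("Counter-Measures", [
        "Are data handling policies in place?",
        "Are access controls to the data implemented?",
        "Are staff trained to use data responsibly?",
    ]),
]

def run_review():
    """Walk a reviewer through every step and record the answers."""
    answers = {}
    for step, questions in DATA_RISK_PROCESS:
        print(f"\n== {step} ==")
        answers[step] = [(q, input(f"{q}\n> ")) for q in questions]
    return answers
```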
Latest White House report on Big Data charts pathways for fairness and opportunity but also cautions against re-encoding bias and discrimination into algorithmic systems: “Advertisements tailored to reflect previous purchasing decisions; targeted job postings based on your degree and social networks; reams of data informing predictions around college admissions and financial aid. Need a loan? There’s an app for that.
As technology advances and our economic, social, and civic lives become increasingly digital, we are faced with ethical questions of great consequence. Big data and associated technologies create enormous new opportunities to revisit assumptions and instead make data-driven decisions. Properly harnessed, big data can be a tool for overcoming longstanding bias and rooting out discrimination.
The era of big data is also full of risk. The algorithmic systems that turn data into information are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them. Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity.
Because technological innovation presents both great opportunity and great risk, the White House has released several reports on “big data” intended to prompt conversation and advance these important issues. The topics of previous reports on data analytics included privacy, prices in the marketplace, and consumer protection laws. Today, we are announcing the latest report on big data, one centered on algorithmic systems, opportunity, and civil rights.
The first big data report warned of “the potential of encoding discrimination in automated decisions”—that is, discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.” A commitment to understanding these risks and harnessing technology for good prompted us to specifically examine the intersection between big data and civil rights.
Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.
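The report does not prescribe a particular method, but one standard check for the kind of disparity it discusses is the disparate impact ratio (the "four-fifths rule" used in US employment law). The sketch below, with made-up lending data, is our illustration of that technique, not the report's methodology.

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Selection rate of the protected group divided by the reference group's.

    A ratio below 0.8 is commonly treated as a red flag ("four-fifths rule").
    outcomes: 0/1 decisions (e.g., loan approved); groups: group label per case.
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Illustrative, made-up lending decisions for two groups.
outcomes = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]
groups   = ["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"]
ratio = disparate_impact_ratio(outcomes, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 -- well below the 0.8 flag
```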
The purpose of the report is not to offer remedies to the issues it raises, but rather to identify these issues and prompt conversation, research—and action—among technologists, academics, policy makers, and citizens alike.
The report includes a number of recommendations for advancing work in this nascent field of data and ethics. These include investing in research; broadening and diversifying technical leadership; expanding cross-training and literacy on data discrimination; bolstering accountability; and creating standards for use within both the government and the private sector. It also calls on computer and data science programs and professionals to promote fairness and opportunity as part of an overall commitment to the responsible and ethical use of data.
Big data is here to stay; the question is how it will be used: to advance civil rights and opportunity, or to undermine them….(More)”
Andrew Russell & Lee Vinsel at AEON: “The trajectory of ‘innovation’ from core, valued practice to slogan of dystopian societies is not entirely surprising, at a certain level. There is a formulaic feel: a term gains popularity because it resonates with the zeitgeist, reaches buzzword status, then suffers from overexposure and cooptation. Right now, the formula has brought society to a question: after ‘innovation’ has been exposed as hucksterism, is there a better way to characterise relationships between society and technology?
There are three basic ways to answer that question. First, it is crucial to understand that technology is not innovation. Innovation is only a small piece of what happens with technology. This preoccupation with novelty is unfortunate because it fails to account for technologies in widespread use, and it obscures how many of the things around us are quite old. In his book, Shock of the Old (2007), the historian David Edgerton examines technology-in-use. He finds that common objects, like the electric fan and many parts of the automobile, have been virtually unchanged for a century or more. When we take this broader perspective, we can tell different stories with drastically different geographical, chronological, and sociological emphases. The stalest innovation stories focus on well-to-do white guys sitting in garages in a small region of California, but human beings in the Global South live with technologies too. Which ones? Where do they come from? How are they produced, used, repaired? Yes, novel objects preoccupy the privileged, and can generate huge profits. But the most remarkable tales of cunning, effort, and care that people direct toward technologies exist far beyond the same old anecdotes about invention and innovation.
Second, by dropping innovation, we can recognise the essential role of basic infrastructures. ‘Infrastructure’ is a most unglamorous term, the type of word that would have vanished from our lexicon long ago if it didn’t point to something of immense social importance. Remarkably, in 2015 ‘infrastructure’ came to the fore of conversations in many walks of American life. In the wake of a fatal Amtrak crash near Philadelphia, President Obama wrestled with Congress to pass an infrastructure bill that Republicans had been blocking but that Congress finally approved in December 2015. ‘Infrastructure’ also became the focus of scholarly communities in history and anthropology, even appearing 78 times on the programme of the annual meeting of the American Anthropological Association. Artists, journalists, and even comedians joined the fray, most memorably with John Oliver’s hilarious sketch starring Edward Norton and Steve Buscemi in a trailer for an imaginary blockbuster on the dullest of subjects. By early 2016, the New York Review of Books brought the ‘earnest and passive word’ to the attention of its readers, with a depressing essay titled ‘A Country Breaking Down’.
Despite recurring fantasies about the end of work, the central fact of our industrial civilisation is labour, most of which falls far outside the realm of innovation
The best of these conversations about infrastructure move away from narrow technical matters to engage deeper moral implications. Infrastructure failures – train crashes, bridge failures, urban flooding, and so on – are manifestations of and allegories for America’s dysfunctional political system, its frayed social safety net, and its enduring fascination with flashy, shiny, trivial things. But, especially in some corners of the academic world, a focus on the material structures of everyday life can take a bizarre turn, as exemplified in work that grants ‘agency’ to material things or wraps commodity fetishism in the language of high cultural theory, slick marketing, and design. For example, Bloomsbury’s ‘Object Lessons’ series features biographies of and philosophical reflections on human-built things, like the golf ball. What a shame it would be if American society matured to the point where the shallowness of the innovation concept became clear, but the most prominent response was an equally superficial fascination with golf balls, refrigerators, and remote controls.
Third, focusing on infrastructure or on old, existing things rather than novel ones reminds us of the absolute centrality of the work that goes into keeping the entire world going…
We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, ‘What is at stake if we move scholarship away from innovation and toward maintenance?’ Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovation’s shadow.
One important topic of conversation is the danger of moving too triumphantly from innovation to maintenance. There is no point in preserving the practice of hero-worship while merely changing the cast of heroes, without confronting some of the deeper problems underlying the innovation obsession. One of the most significant problems is the male-dominated culture of technology, manifest in recent embarrassments such as the flagrant misogyny in the ‘#GamerGate’ row a couple of years ago, as well as the persistent pay gap between men and women doing the same work.
There is an urgent need to reckon more squarely and honestly with our machines and ourselves. Ultimately, emphasising maintenance involves moving from buzzwords to values, and from means to ends. In formal economic terms, ‘innovation’ involves the diffusion of new things and practices. The term is completely agnostic about whether these things and practices are good. Crack cocaine, for example, was a highly innovative product in the 1980s, which involved a great deal of entrepreneurship (called ‘dealing’) and generated lots of revenue. Innovation! Entrepreneurship! Perhaps this point is cynical, but it draws our attention to a perverse reality: contemporary discourse treats innovation as a positive value in itself, when it is not.
Entire societies have come to talk about innovation as if it were an inherently desirable value, like love, fraternity, courage, beauty, dignity, or responsibility. Innovation-speak worships at the altar of change, but it rarely asks who benefits, to what end? A focus on maintenance provides opportunities to ask questions about what we really want out of technologies. What do we really care about? What kind of society do we want to live in? Will this help get us there? We must shift from means, including the technologies that underpin our everyday actions, to ends, including the many kinds of social beneficence and improvement that technology can offer. Our increasingly unequal and fearful world would be grateful….(More)”
Tanja Aitamurto and Hélène Landemore in Policy & Internet: “This article examines the emergence of democratic deliberation in a crowdsourced law reform process. The empirical context of the study is a crowdsourced legislative reform in Finland, initiated by the Finnish government. The findings suggest that online exchanges in the crowdsourced process qualify as democratic deliberation according to the classical definition. We introduce the term “crowdsourced deliberation” to mean an open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants in the context of an attempt by government or another organization to open up the policymaking or lawmaking process. The article helps to characterize the nature of crowdsourced policymaking and to understand its possibilities as a practice for implementing open government principles. We aim to make a contribution to the literature on crowdsourcing in policymaking, participatory and deliberative democracy and, specifically, the newly emerging subfield in deliberative democracy that focuses on “deliberative systems.”…(More)”
Stephen Davenport at OGP Blog: “Government reformers and development practitioners in the open government space are experiencing the heady times associated with a newly-defined agenda. The opportunity for innovation and positive change can at times feel boundless. Yet, working in a nascent field also means a relative lack of “proven” tools and solutions (to the extent that such things ever exist in development).
More research on the potential for open government initiatives to improve lives is well underway. However, keeping up with the rapidly evolving landscape of ongoing research, emerging hypotheses, and high-priority knowledge gaps has been a challenge, even as investment in open government activities has accelerated. This becomes increasingly important as we gather to talk progress at the OGP Africa Regional Meeting 2016 and GIFT consultations in Cape Town next week (May 4-6).
Who’s doing what?
To advance the state of play, a new report commissioned by the World Bank, “Open Government Impact and Outcomes: Mapping the Landscape of Ongoing Research”, categorizes and takes stock of existing research. The report represents the first output of a newly-formed consortium that aims to generate practical, evidence-based guidance for open government stakeholders, building on and complementing the work of organizations across the academic-practitioner spectrum.
The mapping exercise led to the creation of an interactive platform with detailed information on how to find out more about each of the research projects covered, organized by a new typology for open government interventions. The inventory is limited in scope given practical and other considerations: it includes only projects that are currently underway and that are relatively large and international in nature. It is meant to be a forward-looking overview rather than a literature review.
Charting a course: How can the World Bank add value?
The scope for increasing the open government knowledge base remains vast. The report suggests that, given its role as a lender, convener, and policy advisor, the World Bank is well positioned to complement and support existing research in a number of ways, such as:
- Taking a demand-driven approach, focusing on specific areas where it can identify lessons for stakeholders seeking to turn open government enthusiasm into tangible results.
- Linking researchers with governments and practitioners to study specific areas of interest (in particular, access to information and social accountability interventions).
- Evaluating the impact of open government reforms against baseline data that may not be public yet, but that are accessible to the World Bank.
- Contributing to a better understanding of the role and impact of ICTs through work like the recently published study that examines the relationship between digital citizen engagement and government responsiveness.
- Ensuring that World Bank loans and projects are conceived as opportunities for knowledge generation, while incorporating the most relevant and up-to-date evidence on what works in different contexts.
- Leveraging its involvement in the Open Government Partnership to help stakeholders make evidence-based reform commitments….(More)
Mark Zastrow at Nature: “After a magnitude-7.8 earthquake struck Ecuador’s Pacific coast on 16 April, a new ally joined the international relief effort: a citizen-science network called Zooniverse.
On 25 April, Zooniverse launched a website that asks volunteers to analyse rapidly-snapped satellite imagery of the disaster, which led to more than 650 reported deaths and 16,000 injuries. The aim is to help relief workers on the ground to find the most heavily damaged regions and identify which roads are passable.
Several crisis-mapping programmes with thousands of volunteers already exist — but it can take days to train satellites on the damaged region and to transmit data to humanitarian organizations, and results have not always proven useful. The Ecuador quake marked the first live public test for an effort dubbed the Planetary Response Network (PRN), which promises to be both more nimble than previous efforts, and to use more rigorous machine-learning algorithms to evaluate the quality of crowd-sourced analyses.
The network relies on imagery from the satellite company Planet Labs in San Francisco, California, which uses an array of shoebox-sized satellites to map the planet. In order to speed up the crowd-sourced process, it uses the Zooniverse platform to distribute the tasks of spotting features in satellite images. Machine-learning algorithms employed by a team at the University of Oxford, UK, then classify the reliability of each volunteer’s analysis and weight their contributions accordingly.
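The article doesn't detail the Oxford team's algorithm, but the core idea — weighting each volunteer's answers by an estimate of their reliability — can be sketched simply. The following is a deliberately simplified illustration assuming per-volunteer accuracy scores (e.g., derived from gold-standard images); the actual PRN pipeline is more sophisticated.

```python
from collections import defaultdict

def weighted_label(votes, reliability, default_weight=0.5):
    """Combine volunteer votes on one image, weighting by estimated reliability.

    votes: list of (volunteer_id, label) pairs for a single image.
    reliability: volunteer_id -> estimated accuracy in [0, 1], e.g. learned
                 from performance on gold-standard images.
    Returns the label with the highest total weight.
    """
    totals = defaultdict(float)
    for volunteer, label in votes:
        totals[label] += reliability.get(volunteer, default_weight)
    return max(totals, key=totals.get)

# Illustrative example: three volunteers assess damage in one satellite tile.
votes = [("v1", "severe"), ("v2", "moderate"), ("v3", "severe")]
reliability = {"v1": 0.9, "v2": 0.6, "v3": 0.4}
print(weighted_label(votes, reliability))  # "severe": 0.9 + 0.4 outweighs 0.6
```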
Rapid-fire data
Within two hours of the Ecuador test project going live with a first set of 1,300 images, each photo had been checked at least 20 times. “It was one of the fastest responses I’ve seen,” says Brooke Simmons, an astronomer at the University of California, San Diego, who leads the image processing. Steven Reece, who heads the Oxford team’s machine-learning effort, says that results — a “heat map” of damage with possible road blockages — were ready in another two hours.
In all, more than 2,800 Zooniverse users contributed to analysing roughly 25,000 square kilometres of imagery centred around the coastal cities of Pedernales and Bahia de Caraquez. That is where the London-based relief organization Rescue Global — which requested the analysis the day after the earthquake — currently has relief teams on the ground, including search dogs and medical units….(More)”
Kendra L. Smith and Lindsey Collins at Planetizen: “Over the past decade, crowdsourcing has grown to significance through crowdfunding, crowd collaboration, crowd voting, and crowd labor. The idea behind crowdsourcing is simple: decentralize decision-making by utilizing large groups of people to assist with solving problems, generating ideas, funding, generating data, and making decisions. We have seen crowdsourcing used in both the private and public sectors. In a previous article, “Empowered Design, By ‘the Crowd,'” we discuss the significant role crowdsourcing can play in urban planning through citizen engagement.
Crowdsourcing in the public sector represents a more inclusive form of governance that incorporates a multi-stakeholder approach; it goes beyond regular forms of community engagement and allows citizens to participate in decision-making. When citizens help inform decision-making, new opportunities are created for cities—opportunities that are beginning to unfold for planners. However, despite its obvious utility, planners underutilize crowdsourcing. A key reason for its underuse is a lack of credibility and accountability in crowdsourcing endeavors.
Crowdsourcing credibility speaks to the capacity to trust a source and discern whether information is, indeed, true. While it can be difficult to know if any information is definitively true, indicators of fact or truth include where information was collected, how information was collected, and how rigorously it was fact-checked or peer reviewed. However, in the digital universe of today, individuals can make a habit of posting inaccurate, salacious, malicious, and flat-out false information. The realities of contemporary media make it more difficult to trust crowdsourced information for decision-making, especially for the public sector, where the use of inaccurate information can impact the lives of many and the trajectory of a city. As a result, there is a need to establish accountability measures to enhance crowdsourcing in urban planning.
Establishing Accountability Measures
For urban planners considering crowdsourcing, establishing a system of accountability measures might seem like more effort than it is worth. However, that is simply not true. Recent evidence shows that participation in traditional community engagement (e.g., town halls, forums, city council meetings) is lower than ever. Current engagement also tends to focus on problems in the community rather than the development of the community. Crowdsourcing offers new opportunities for ongoing and sustainable engagement with the community. It can be simple as well.
The following four methods can be used separately or together (we hope they are used together) to help establish accountability and credibility in the crowdsourcing process:
- Agenda setting
- Growing a crowdsourcing community
- Facilitators/subject matter experts (SMEs)
- Microtasking
In addition to boosting credibility, building a framework of accountability measures can help planners and crowdsourcing communities clearly define their work, engage the community, sustain community engagement, acquire help with tasks, obtain diverse opinions, and become more inclusive….(More)”
Esther Landhuis at NPR: “Though it’s the world’s top infectious killer, tuberculosis is surprisingly tricky to diagnose. Scientists think that video gamers can help them create a better diagnostic test.
An online puzzle released Monday will see whether the researchers are right. Players of a Web-based game called EteRNA will try to design a sensor molecule that could potentially make diagnosing TB as easy as taking a home pregnancy test. The TB puzzle marks the launch of “EteRNA Medicine.”
The idea of rallying gamers to fight TB arose as two young Stanford University professors chatted over dinner at a conference last May. Rhiju Das, a biochemist who helped create EteRNA, told bioinformatician Purvesh Khatri about the game, which challenges nonexperts to design RNA molecules that fold into target shapes.
RNA molecules play key roles in biology and disease. Some brain disorders can be traced to problems with RNA folding. Viruses such as H1N1 flu and HIV depend on RNA elements to replicate and infect cells.
Das wants to “fight fire with fire” — that is, to disrupt the RNA involved in a disease or virus by crafting new tools that are themselves made of RNA molecules. EteRNA players learn RNA design principles with each puzzle they solve.
Khatri was intrigued by the notion of engaging the public to solve problems. His lab develops novel diagnostics using publicly available data sets. The team had just published a paper on a set of genes that could help diagnose sepsis and had other papers under review on influenza and TB.
In an “Aha!” moment during their dinner chat, Khatri says, he and Das realized “how awesome it would be to sequentially merge our two approaches — to use public data to find a diagnostic marker for a disease, and then use the public’s help to develop the test.”
TB seemed an opportune target, as it has a simple diagnostic signature — a set of three human genes whose expression turns up or down predictably after TB infection. When checked across gene data on thousands of blood samples from 14 groups of people around the globe, the behavior of the three-gene set readily identified people with active TB, distinguishing them from individuals who had latent TB or other diseases.
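Mechanically, a signature like this reduces to simple arithmetic on normalized expression values: average the genes that go up, subtract the gene that goes down, and compare to a threshold. The sketch below illustrates that mechanism; the gene names (GBP5, DUSP3, KLF2) and scoring form follow our reading of the Khatri lab's published three-gene work and should be treated as an assumption to verify against the study, not as a diagnostic.

```python
def tb_score(expression):
    """Three-gene TB score from normalized expression values.

    Assumption: uses the commonly cited form of the published signature
    (GBP5 and DUSP3 up, KLF2 down in active TB):
        score = (GBP5 + DUSP3) / 2 - KLF2
    An illustration of the mechanism only -- verify the genes and threshold
    against the published study before any real use.
    """
    return (expression["GBP5"] + expression["DUSP3"]) / 2 - expression["KLF2"]

# Illustrative, made-up normalized expression values for one blood sample.
sample = {"GBP5": 8.1, "DUSP3": 7.4, "KLF2": 5.9}
print(f"TB score: {tb_score(sample):.2f}")  # higher scores suggest active TB
```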
Those findings, published in February, have gotten serious attention — not only from curious patients and doctors but also from humanitarian groups eager to help bring a better TB test to market. It can currently take several tests to tell whether a person has active TB, including a chest X-ray and sputum test. The Bill & Melinda Gates Foundation has started sending data to help the Stanford team validate a test based on the newly identified TB gene signature, says study leader Khatri, who works at the university’s Center for Biomedical Informatics Research….(More)”
Economic and Social Research Council (UK): “InFuse, an online search facility for census data, is enabling tailored search and investigation of UK census statistics – opening new opportunities for aggregating and comparing population counts.
Impacts
- InFuse data were used for the ‘Smarter Travel’ research project studying how ‘smart choices’ for sustainable travel could be implemented and supported in transport planning. The research directly influenced UK climate-change agendas and policy, including:
- the UK Committee on Climate Change recommendations on cost-effective-emission reductions
- the Scottish Government’s targets and household advice for smarter travel
- the UK Government’s Local Sustainable Transport Fund supporting 96 projects across England
- evaluations for numerous Local Authority Transport Plans across the UK.
- The Integration Hub, a web resource that was launched by Demos in 2015 to provide data about ethnic integration in England and Wales, uses data from InFuse to populate its interactive maps of the UK.
- Census data downloaded from InFuse informed Welsh Government policies to engage Gypsy and Traveller families in education, showing that over 60 per cent of those aged over 16 from these communities had no qualifications.
- Executive recruitment firm Sapphire Partners used census data from InFuse in a report on female representation on boards, revealing that 77 per cent of FTSE board members are men, and 70 per cent of new board appointments go to men.
- A study by the Marie Curie charity into the differing needs of Black, Asian and minority ethnic groups in Scotland for end-of-life care used InFuse to determine that the minority ethnic population in Scotland has doubled since 2001 from 100,000 to 200,000 – highlighting the need for greater and more appropriate provision.
- A Knowledge Transfer Partnership between homelessness charity Llamau and Cardiff University used InFuse data to show that young homeless people in Wales participating in the study were over twice as likely to have left school with no qualifications, compared with UK-wide figures for their age group and gender….(More)”