LAPD moving away from data-driven crime programs over potential racial bias


Mark Puente in The Los Angeles Times: “The Los Angeles Police Department pioneered the controversial use of data to pinpoint crime hot spots and track violent offenders.

Complex algorithms and vast databases were supposed to revolutionize crime fighting, making policing more efficient as number-crunching computers helped to position scarce resources.

But critics long complained about inherent bias in the data — gathered by officers — that underpinned the tools.

They claimed a partial victory when LAPD Chief Michel Moore announced he would end one highly touted program intended to identify and monitor violent criminals. On Tuesday, the department’s civilian oversight panel raised questions about whether another program, aimed at reducing property crime, also disproportionately targets black and Latino communities.

Members of the Police Commission demanded more information about how the agency plans to overhaul a data program that helps predict where and when crimes will likely occur. One questioned why the program couldn’t be suspended.

“There is very limited information” on the program’s impact, Commissioner Shane Murphy Goldsmith said.

The action came as so-called predictive policing — using search tools, point scores and other methods — is under increasing scrutiny by privacy and civil liberties groups that say the tactics result in heavier policing of black and Latino communities. The argument was underscored at Tuesday’s commission meeting when several UCLA academics cast doubt on the research behind crime modeling and predictive policing….(More)”.

Introducing the Contractual Wheel of Data Collaboration


Blog by Andrew Young and Stefaan Verhulst: “Earlier this year we launched the Contracts for Data Collaboration (C4DC) initiative — an open collaborative with charter members from The GovLab, UN SDSN Thematic Research Network on Data and Statistics (TReNDS), University of Washington and the World Economic Forum. C4DC seeks to address the inefficiencies of developing contractual agreements for public-private data collaboration by developing and making available a shared repository of relevant contractual clauses taken from existing legal agreements, to inform and guide those seeking to establish a data collaborative. Today TReNDS published “Partnerships Founded on Trust,” a brief capturing some initial findings from the C4DC initiative.

The Contractual Wheel of Data Collaboration [beta] — Stefaan G. Verhulst and Andrew Young, The GovLab

As part of the C4DC effort, and to support Data Stewards in the private sector and decision-makers in the public and civil sectors seeking to establish Data Collaboratives, The GovLab developed the Contractual Wheel of Data Collaboration [beta]. The Wheel seeks to capture key elements involved in data collaboration while demystifying contracts and moving beyond the type of legalese that can create confusion and barriers to experimentation.

The Wheel was developed based on an assessment of existing legal agreements, engagement with The GovLab-facilitated Data Stewards Network, and analysis of the key elements of our Data Collaboratives Methodology. It features 22 legal considerations organized across 6 operational categories that can act as a checklist for the development of a legal agreement between parties participating in a Data Collaborative:…(More)”.

San Francisco teams up with Uber, location tracker on 911 call responses


Gwendolyn Wu at San Francisco Chronicle: “In an effort to shorten emergency response times in San Francisco, the city announced on Monday that it is now using location data from RapidSOS, a New York-based public safety tech company, and ride-hailing company Uber to improve location coordinates generated from 911 calls.

An increasing number of emergency calls are made from cell phones, said Michelle Cahn, RapidSOS’s director of community engagement. The new technology should allow emergency responders to narrow down the location of such callers and replace existing 911 technology that was built for landlines and tied to home addresses.

Cell phone location data currently given to dispatchers when they receive a 911 call can be vague, especially if the person can’t articulate their exact location, according to the Department of Emergency Management.

But if a dispatcher can narrow down where the emergency is happening, that increases the chance of a timely response and better result, Cahn said.

“It doesn’t matter what’s going on with the emergency if we don’t know where it is,” she said.

RapidSOS shares its location data — collected by Apple and Google for their in-house map apps — with public safety agencies free of charge. San Francisco’s 911 call center adopted the data service in September 2018.

The Federal Communications Commission estimates agencies could save as many as 10,000 lives a year if they shave a minute off response times. Federal officials issued new rules to improve wireless 911 calls in 2015, asking mobile carriers to provide more accurate locations to call centers. Carriers are required to find a way to triangulate the caller’s location within 50 meters — a much smaller radius than the eight blocks city officials were initially presented in October when the caller dialed 911…(More)”.
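The gap between a tower-based fix and a device-reported fix can be made concrete with a simple distance check. Below is a minimal sketch using the haversine great-circle formula with illustrative coordinates (not real call data) to test a fix against the 50-meter target:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative coordinates only: a coarse tower-based fix vs. a
# device-reported fix for the same caller in San Francisco.
tower_fix = (37.7749, -122.4194)
device_fix = (37.7752, -122.4191)

error = haversine_m(*tower_fix, *device_fix)
print(f"location error: {error:.0f} m -> "
      f"{'meets' if error <= 50 else 'misses'} the 50 m target")
```

An eight-block error, by contrast, is on the order of several hundred meters, which is why dispatchers value the device-level fix.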

Characterizing the Biomedical Data-Sharing Landscape


Paper by Angela G. Villanueva et al: “Advances in technologies and biomedical informatics have expanded capacity to generate and share biomedical data. With a lens on genomic data, we present a typology characterizing the data-sharing landscape in biomedical research to advance understanding of the key stakeholders and existing data-sharing practices. The typology highlights the diversity of data-sharing efforts and facilitators and reveals how novel data-sharing efforts are challenging existing norms regarding the role of individuals whom the data describe.

Technologies such as next-generation sequencing have dramatically expanded capacity to generate genomic data at a reasonable cost, while advances in biomedical informatics have created new tools for linking and analyzing diverse data types from multiple sources. Further, many research-funding agencies now mandate that grantees share data. The National Institutes of Health’s (NIH) Genomic Data Sharing (GDS) Policy, for example, requires NIH-funded research projects generating large-scale human genomic data to share those data via an NIH-designated data repository such as the Database of Genotypes and Phenotypes (dbGaP). Another example is the Parent Project Muscular Dystrophy, a non-profit organization that requires applicants to propose a data-sharing plan and takes an applicant’s history of data sharing into account.

The flow of data to and from different projects, institutions, and sectors is creating a medical information commons (MIC), a data-sharing ecosystem consisting of networked resources sharing diverse health-related data from multiple sources for research and clinical uses. This concept aligns with the 2018 NIH Strategic Plan for Data Science, which uses the term “data ecosystem” to describe “a distributed, adaptive, open system with properties of self-organization, scalability and sustainability” and proposes to “modernize the biomedical research data ecosystem” by funding projects such as the NIH Data Commons. Consistent with Elinor Ostrom’s discussion of nested institutional arrangements, an MIC is both singular and plural and may describe the ecosystem as a whole or individual components contributing to the ecosystem. Thus, resources like the NIH Data Commons with its associated institutional arrangements are MICs, and also form part of the larger MIC that encompasses all such resources and arrangements.

Although many research funders incentivize data sharing, in practice, progress in making biomedical data broadly available to maximize its utility is often hampered by a broad range of technical, legal, cultural, normative, and policy challenges that include achieving interoperability, changing the standards for academic promotion, and addressing data privacy and security concerns. Addressing these challenges requires multi-stakeholder involvement. To identify relevant stakeholders and advance understanding of the contributors to an MIC, we conducted a landscape analysis of existing data-sharing efforts and facilitators. Our work builds on typologies describing various aspects of data sharing that focused on biobanks, research consortia, or where data reside (e.g., degree of data centralization). While these works are informative, we aimed to capture the biomedical data-sharing ecosystem with a wider scope. Understanding the components of an MIC ecosystem and how they interact, and identifying emerging trends that test existing norms (such as norms regarding the role of the individuals whom the data describe), is essential to fostering effective practices, policies and governance structures, guiding resource allocation, and promoting the overall sustainability of the MIC….(More)”

How Recommendation Algorithms Run the World


Article by Zeynep Tufekci: “What should you watch? What should you read? What’s news? What’s trending? Wherever you go online, companies have come up with very particular, imperfect ways of answering these questions. Everywhere you look, recommendation engines offer striking examples of how values and judgments become embedded in algorithms and how algorithms can be gamed by strategic actors.

Consider a common, seemingly straightforward method of making suggestions: a recommendation based on what people “like you” have read, watched, or shopped for. What exactly is a person like me? Which dimension of me? Is it someone of the same age, gender, race, or location? Do they share my interests? My eye color? My height? Or is their resemblance to me determined by a whole mess of “big data” (aka surveillance) crunched by a machine-learning algorithm?

Deep down, behind every “people like you” recommendation is a computational method for distilling stereotypes through data. Even when these methods work, they can help entrench the stereotypes they’re mobilizing. They might easily recommend books about coding to boys and books about fashion to girls, simply by tracking the next most likely click. Of course, that creates a feedback cycle: If you keep being shown coding books, you’re probably more likely to eventually check one out.

Another common method for generating recommendations is to extrapolate from patterns in how people consume things. People who watched this then watched that; shoppers who purchased this item also added that one to their shopping cart. Amazon uses this method a lot, and I admit, it’s often quite useful. Buy an electric toothbrush? How nice that the correct replacement head appears in your recommendations. Congratulations on your new vacuum cleaner: Here are some bags that fit your machine.
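The co-occurrence pattern described here can be sketched in a few lines. The following is a toy illustration (hypothetical purchase baskets and item names, not any retailer's actual algorithm): count how often items land in the same basket, then recommend the most frequent companions of a target item.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_recommendations(baskets, target_item, top_n=3):
    """Recommend the items most often appearing in the same basket
    as target_item, ranked by co-occurrence count."""
    counts = defaultdict(int)
    for basket in baskets:
        for a, b in combinations(set(basket), 2):
            counts[(a, b)] += 1  # count the pair in both directions
            counts[(b, a)] += 1
    paired = {b: n for (a, b), n in counts.items() if a == target_item}
    return sorted(paired, key=paired.get, reverse=True)[:top_n]

# Hypothetical shopping baskets
baskets = [
    ["toothbrush", "brush_heads", "floss"],
    ["toothbrush", "brush_heads"],
    ["vacuum", "vacuum_bags"],
    ["toothbrush", "floss"],
]
print(cooccurrence_recommendations(baskets, "toothbrush"))
```

Real systems weight these counts (e.g., discounting universally popular items), but the basic "bought this, also bought that" signal is just co-occurrence counting at scale.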

But these recommendations can also be revealing in ways that are creepy. …

One final method for generating recommendations is to identify what’s “trending” and push that to a broader user base. But this, too, involves making a lot of judgments….(More)”.

Leveraging Big Data for Social Responsibility


Paper by Cynthia Ann Peterson: “Big data has the potential to revolutionize the way social risks are managed by providing enhanced insight to enable more informed actions to be taken. The objective of this paper is to share the approach taken by PETRONAS to leverage big data to enhance its social performance practice, specifically in social risk assessments and grievance mechanism.

The paper will deliberate on the benefits, challenges and opportunities to improve the management of social risk through analytics, and how PETRONAS has taken those factors into consideration in the enhancement of its social risk assessment and grievance mechanism tools. Key considerations such as disaggregation of data, the appropriate leading and lagging indicators and having a human rights lens to data will also be discussed.

Leveraging big data is still in its early stages in the social risk space, as it is in other areas of the oil and gas industry, according to research by Wood Mackenzie. Even so, there are several concerns, including: the aggregation of data may mean that risks to minority or vulnerable groups are not surfaced; privacy breaches that violate human rights; and potential discrimination due to prescriptive analysis, such as of a community’s propensity to pose certain social risks to projects or operations. Certainly, there are many challenges ahead which need to be considered, including how best to take a human rights approach to using big data.

Nevertheless, harnessing the power of big data will help social risk practitioners turn a high volume of disparate pieces of raw data from grievance mechanisms and social risk assessments into information that can be used to avoid or mitigate risks now and in the future through predictive technology. Consumer and other industries are benefiting from this leverage now, and social performance practitioners in the oil and gas industry can emulate these proven models….(More)”.

The Importance of Data Access Regimes for Artificial Intelligence and Machine Learning


JRC Digital Economy Working Paper by Bertin Martens: “Digitization triggered a steep drop in the cost of information. The resulting data glut created a bottleneck because human cognitive capacity is unable to cope with large amounts of information. Artificial intelligence and machine learning (AI/ML) triggered a similar drop in the cost of machine-based decision-making and helps in overcoming this bottleneck. Substantial change in the relative price of resources puts pressure on ownership and access rights to these resources. This explains pressure on access rights to data. ML thrives on access to big and varied datasets. We discuss the implications of access regimes for the development of AI in its current form of ML. The economic characteristics of data (non-rivalry, economies of scale and scope) favour data aggregation in big datasets. Non-rivalry implies the need for exclusive rights in order to incentivise data production when it is costly. The balance between access and exclusion is at the centre of the debate on data regimes. We explore the economic implications of several modalities for access to data, ranging from exclusive monopolistic control to monopolistic competition and free access. Regulatory intervention may push the market beyond voluntary exchanges, either towards more openness or reduced access. This may generate private costs for firms and individuals. Society can choose to do so if the social benefits of this intervention outweigh the private costs.

We briefly discuss the main EU legal instruments that are relevant for data access and ownership, including the General Data Protection Regulation (GDPR) that defines the rights of data subjects with respect to their personal data and the Database Directive (DBD) that grants ownership rights to database producers. These two instruments leave a wide legal no-man’s land where data access is ruled by bilateral contracts and Technical Protection Measures that give exclusive control to de facto data holders, and by market forces that drive access, trade and pricing of data. The absence of exclusive rights might facilitate data sharing and access or it may result in a segmented data landscape where data aggregation for ML purposes is hard to achieve. It is unclear if incompletely specified ownership and access rights maximize the welfare of society and facilitate the development of AI/ML…(More)”

Data Trusts: More Data than Trust? The Perspective of the Data Subject in the Face of a Growing Problem


Paper by Christine Rinik: “In the recent report, Growing the Artificial Intelligence Industry in the UK, Hall and Pesenti suggest the use of a ‘data trust’ to facilitate data sharing. Whilst government and corporations are focusing on their need to facilitate data sharing, the perspective of many individuals is that too much data is being shared. The issue is not only about data, but about power. The individual does not often have a voice when issues relating to data sharing are tackled. Regulators can cite the ‘public interest’ when data governance is discussed, but the individual’s interests may diverge from that of the public.

This paper considers the data subject’s position with respect to data collection leading to considerations about surveillance and datafication. Proposals for data trusts will be considered applying principles of English trust law to possibly mitigate the imbalance of power between large data users and individual data subjects. Finally, the possibility of a workable remedy in the form of a class action lawsuit which could give the data subjects some collective power in the event of a data breach will be explored. Despite regulatory efforts to protect personal data, there is a lack of public trust in the current data sharing system….(More)”.

Illuminating Big Data will leave governments in the dark


Robin Wigglesworth in the Financial Times: “Imagine a world where interminable waits for backward-looking, frequently-revised economic data seem as archaically quaint as floppy disks, beepers and a civil internet. This fantasy realm may be closer than you think.

The Bureau of Economic Analysis will soon publish its preliminary estimate for US economic growth in the first three months of the year, finally catching up on its regular schedule after a government shutdown paralysed the agency. But other data are still delayed, and the final official result for US gross domestic product won’t be available until July. Along the way there are likely to be many tweaks.

Collecting timely and accurate data is a Herculean task, especially for an economy as vast and varied as the US’s. But last week’s World Bank and International Monetary Fund annual spring meetings offered some clues on a brighter, more digital future for economic data.

The IMF hosted a series of seminars and discussions exploring how the hot new world of Big Data could be harnessed to produce more timely economic figures — and improve economic forecasts. Jiaxiong Yao, an IMF official in its African department, explained how it could use satellites to measure the intensity of night-time lights, and derive a real-time gauge of economic health.

“If a country gets brighter over time, it is growing. If it is getting darker then it probably needs an IMF programme,” he noted. Further sessions explored how the IMF could use machine learning — a popular field of artificial intelligence — to improve its influential but often faulty economic forecasts; and real-time shipping data to map global trade flows.
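The night-lights gauge boils down to a trend in observed brightness over time. A toy sketch with invented radiance values (not real satellite measurements) fits a least-squares line to yearly means and reads the sign of the slope:

```python
# Hypothetical yearly mean night-time radiance for one country;
# the values are purely illustrative.
yearly_radiance = {2015: 10.2, 2016: 11.0, 2017: 11.9, 2018: 13.1}

def brightness_trend(series):
    """Fit a least-squares line (slope) of radiance against year.
    A positive slope suggests the country is getting brighter."""
    years = sorted(series)
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(series[y] for y in years) / n
    cov = sum((y - mean_x) * (series[y] - mean_y) for y in years)
    var = sum((y - mean_x) ** 2 for y in years)
    return cov / var

slope = brightness_trend(yearly_radiance)
print("growing" if slope > 0 else "dimming", round(slope, 2))
```

Production pipelines work from gridded satellite imagery and control for factors like cloud cover and gas flaring, but the core signal is this simple: brightness trending up tracks economic growth.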

Sophisticated hedge funds have been mining some of these new “alternative” data sets for some time, but statistical agencies, central banks and multinational organisations such as the IMF and the World Bank are also starting to embrace the potential.

The amount of digital data around the world is already unimaginably vast. As more of our social and economic activity migrates online, the quantity and quality is going to increase exponentially. The potential is mind-boggling. Setting aside the obvious and thorny privacy issues, it is likely to lead to a revolution in the world of economic statistics. …

Yet the biggest issues are not the weaknesses of these new data sets — all statistics have inherent flaws — but their nature and location.

Firstly, it depends on today’s lax regulatory and personal attitudes towards personal data continuing, and there are signs of a (healthy) backlash brewing.

Secondly, almost all of this alternative data is being generated and stored in the private sector, not by government bodies such as the Bureau of Economic Analysis, Eurostat or the UK’s Office for National Statistics.

Public bodies are generally too poorly funded to buy or clean all this data themselves, meaning hedge funds will benefit from better economic data than the broader public. We might, in fact, need legislation mandating that statistical agencies receive free access to any aggregated private sector data sets that might be useful to their work.

That would ensure that our economic officials and policymakers don’t fly blind in an increasingly illuminated world….(More)”.

Data Collaboratives as an enabling infrastructure for AI for Good


Blog Post by Stefaan G. Verhulst: “…The value of data collaboratives stems from the fact that the supply of and demand for data are generally widely dispersed — spread across government, the private sector, and civil society — and often poorly matched. This failure (a form of “market failure”) results in tremendous inefficiencies and lost potential. Much data that is released is never used. And much data that is actually needed is never made accessible to those who could productively put it to use.

Data collaboratives, when designed responsibly, are the key to addressing this shortcoming. They draw together otherwise siloed data and a dispersed range of expertise, helping match supply and demand, and ensuring that the correct institutions and individuals are using and analyzing data in ways that maximize the possibility of new, innovative social solutions.

Roadmap for Data Collaboratives

Despite their clear potential, the evidence base for data collaboratives is thin. There’s an absence of a systemic, structured framework that can be replicated across projects and geographies, and there’s a lack of clear understanding about what works, what doesn’t, and how best to maximize the potential of data collaboratives.

At the GovLab, we’ve been working to address these shortcomings. For emerging economies considering the use of data collaboratives, whether in pursuit of Artificial Intelligence or other solutions, we present six steps that can be considered in order to create data collaboratives that are more systematic, sustainable, and responsible.

The need for making Data Collaboratives Systematic, Sustainable and Responsible
  • Increase Evidence and Awareness
  • Increase Readiness and Capacity
  • Address Data Supply and Demand Inefficiencies and Uncertainties
  • Establish a New “Data Stewards” Function
  • Develop and strengthen policies and governance practices for data collaboration