Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.

Using data to address equity challenges in local government


Report by the Mastercard Center for Inclusive Growth (CFIG): “…This report describes the Data for Equity cohort learning journey, case studies of how participating cities engaged with and learned from the program, and key takeaways about the potential for data to inform effective and innovative equitable development efforts. Alongside data tools, participants explored the value of qualitative data, the critical link between racial equity and economic inclusion, and how federal funds can advance ongoing equity initiatives. 

Cohort members gained and shared insights throughout their learning journey, including:

  • Resources that provided guidance on how to target funding were helpful in ensuring the viability of cities’ equity and economic development initiatives.
  • Tools and resources that helped practitioners move from diagnosing challenges to identifying solutions were especially valuable.
  • Peer-to-peer learning is an essential resource for leaders and staff working in equity roles, which are often structured differently than other city offices.
  • More data tools that explicitly measure racial equity indicators are needed…(More)”.

Opening industry data: The private sector’s role in addressing societal challenges


Paper by Jennifer Hansen and Yiu-Shing Pang: “This commentary explores the potential of private companies to advance scientific progress and solve social challenges through opening and sharing their data. Open data can accelerate scientific discoveries, foster collaboration, and promote long-term business success. However, concerns regarding data privacy and security can hinder data sharing. Companies have options to mitigate the challenges through developing data governance mechanisms, collaborating with stakeholders, communicating the benefits, and creating incentives for data sharing, among others. Ultimately, open data has immense potential to drive positive social impact and business value, and companies can explore solutions suited to their circumstances and tailor them to their needs…(More)”.

How data helped Mexico City reduce high-impact crime by more than 50%


Article by Alfredo Molina Ledesma: “When Claudia Sheinbaum Pardo became Mayor of Mexico City in 2018, she wanted a new approach to tackling the city’s most pressing problems. Crime was at the very top of the agenda – only 7% of the city’s inhabitants considered it a safe place. New policies were needed to turn this around.

Data became a central part of the city’s new strategy. The Digital Agency for Public Innovation was created in 2019 – tasked with using data to help transform the city. To put this into action, the city administration immediately implemented an open data policy and launched their official data platform, Portal de Datos Abiertos. The policy and platform aimed to make data that Mexico City collects accessible to anyone: municipal agencies, businesses, academics, and ordinary people.

“The main objective of the open data strategy of Mexico City is to enable more people to make use of the data generated by the government in a simple and interactive manner,” said Jose Merino, Head of the Digital Agency for Public Innovation. “In other words, what we aim for is to democratize the access and use of information.” To achieve this goal a new tool for interactive data visualization called Sistema Ajolote was developed in open source and integrated into the Open Data Portal…

Information that had never been made public before, such as street-level crime from the Attorney General’s Office, is now accessible to everyone. Academics, businesses and civil society organizations can access the data to create solutions and innovations that complement the city’s new policies. One example is the successful “Hoyo de Crimen” app, which proposes safe travel routes based on the latest street-level crime data, enabling people to avoid crime hotspots as they walk or cycle through the city.
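Routing around crime hotspots, as the “Hoyo de Crimen” app does, can be sketched as a shortest-path search in which each street segment’s cost is inflated by its recent incident count. The app’s actual algorithm is not described in the article; the following is a minimal, hypothetical illustration of the idea using Dijkstra’s algorithm over a toy street graph, with an assumed `crime_weight` penalty per incident.

```python
import heapq

def safest_route(graph, crimes, start, goal, crime_weight=5.0):
    """Dijkstra over street segments where each edge cost is its length
    plus a penalty proportional to recent incident counts on that segment.
    `graph` maps node -> list of (neighbor, length); `crimes` maps
    (node, neighbor) -> incident count. All names here are illustrative."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, length in graph[node]:
            cost = length + crime_weight * crimes.get((node, neighbor), 0)
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    path, node = [goal], goal  # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy street grid: two routes from A to D; the shorter one passes a hotspot.
graph = {
    "A": [("B", 1.0), ("C", 1.5)],
    "B": [("D", 1.0)],
    "C": [("D", 1.5)],
    "D": [],
}
crimes = {("A", "B"): 4}  # 4 recent incidents on segment A->B
print(safest_route(graph, crimes, "A", "D"))  # ['A', 'C', 'D'] -- avoids the hotspot
```

With `crime_weight=0` the search degenerates to ordinary shortest-path routing, so the penalty is the single tunable trade-off between distance and safety.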

Since the introduction of the open data policy – which has contributed to a comprehensive crime reduction and social support strategy – high-impact crime in the city has decreased by 53%, and 43% of Mexico City residents now consider the city to be a safe place…(More)”.

Vulnerability and Data Protection Law


Book by Gianclaudio Malgieri: “Vulnerability has traditionally been viewed through the lens of specific groups of people, such as ethnic minorities, children, the elderly, or people with disabilities. With the rise of digital media, our perceptions of vulnerable groups and individuals have been reshaped as new vulnerabilities and different vulnerable sub-groups of users, consumers, citizens, and data subjects emerge.

Vulnerability and Data Protection Law not only depicts these problems but offers the reader a detailed investigation of the concept of data subjects and a reconceptualization of the notion of vulnerability within the General Data Protection Regulation. The regulation offers a forward-facing set of tools that, though largely underexplored, are essential in rebalancing power asymmetries and mitigating induced vulnerabilities in the age of artificial intelligence.

Considering the new risks and potentialities of the digital market, the new awareness about cognitive weaknesses, and the new philosophical sensitivity about the condition of human vulnerability, the author looks for a more general and layered definition of the data subject’s vulnerability that goes beyond traditional labels. In doing so, he seeks to promote a ‘vulnerability-aware’ interpretation of the GDPR.

A heuristic analysis that re-interprets the whole GDPR, this work is essential both for scholars of data protection law and for policymakers looking to strengthen regulations and protect the data of vulnerable individuals…(More)”.

Digital Sovereignty and Governance in the Data Economy: Data Trusteeship Instead of Property Rights on Data


Chapter by Ingrid Schneider: “This chapter challenges the current business models of the dominant platforms in the digital economy. In the search for alternatives, and towards the aim of achieving digital sovereignty, it proceeds in four steps: First, it discusses scholarly proposals to constitute a new intellectual property right on data. Second, it examines four models of data governance distilled from the literature that seek to see data administered (1) as a private good regulated by the market, (2) as a public good regulated by the state, (3) as a common good managed by a commons’ community, and (4) as a data trust supervised by means of stewardship by a trustee. Third, the strengths and weaknesses of each of these models, which are ideal types and serve as heuristics, are critically appraised. Fourth, data trusteeship which at present seems to be emerging as a promising implementation model for better data governance, is discussed in more detail, both in an empirical-descriptive way, by referring to initiatives in several countries, and analytically, by highlighting the challenges and pitfalls of data trusteeship…(More)”.

Can AI help governments clean out bureaucratic “Sludge”?


Blog by Abhi Nemani: “Government services often entail a plethora of paperwork and processes that can be exasperating and time-consuming for citizens. Whether it’s applying for a passport, filing taxes, or registering a business, chances are one has encountered some form of sludge.

Sludge is a term coined by Cass Sunstein, a legal scholar and former administrator of the White House Office of Information and Regulatory Affairs, in his straightforward book, Sludge, to describe unnecessarily effortful processes, bureaucratic procedures, and other barriers to desirable outcomes in government services…

So how can sludge be reduced or eliminated in government services? Sunstein suggests that one way to achieve this is to conduct Sludge Audits, which are systematic evaluations of the costs and benefits of existing or proposed sludge. He also recommends that governments adopt ethical principles and guidelines for the design and use of public services. He argues that by reducing sludge, governments can enhance the quality of life and well-being of their citizens.

One example of sludge reduction in government is the simplification and automation of tax filing in some countries. According to a study by the World Bank, countries that have implemented electronic tax filing systems have reduced the time and cost of tax compliance for businesses and individuals. The study also found that electronic tax filing systems have improved tax administration efficiency, transparency, and revenue collection. Some countries, such as Estonia and Chile, have gone further by pre-filling tax returns with information from various sources, such as employers, banks, and other government agencies. This reduces the burden on taxpayers to provide or verify data, and increases the accuracy and completeness of tax returns.

Future Opportunities for AI in Cutting Sludge

AI technology is rapidly evolving, and its potential applications are manifold. Here are a few opportunities for further AI deployment:

  • AI-assisted policy design: AI can analyze vast amounts of data to inform policy design, identifying areas of administrative burden and suggesting improvements.
  • Smart contracts and blockchain: These technologies could automate complex procedures, such as contract execution or asset transfer, reducing the need for paperwork.
  • Enhanced citizen engagement: AI could personalize government services, making them more accessible and less burdensome.

Key Takeaways:

  • AI could play a significant role in policy design, contract execution, and citizen engagement.
  • These technologies hold the potential to significantly reduce sludge…(More)”.

Use of AI in social sciences could mean humans will no longer be needed in data collection


Article by Michael Lee: “A team of researchers from four Canadian and American universities say artificial intelligence could replace humans when it comes to collecting data for social science research.

Researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania published an article in the journal Science on June 15 about how AI, specifically large language models (LLMs), could affect their work.

“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.

Philip Tetlock, a psychology professor at UPenn and article co-author, goes so far as to say that LLMs will “revolutionize human-based forecasting” in just three years.

In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”

The authors say the social sciences have traditionally relied on methods such as questionnaires and observational studies.

But with the ability of LLMs to pore over vast amounts of text data and generate human-like responses, the authors say this presents a “novel” opportunity for researchers to test theories about human behaviour at a faster rate and on a much larger scale.

Scientists could use LLMs to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
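The simulated-participant setup described above can be sketched in a few lines: turn each persona into a prompt and record the model’s reply as a simulated survey response. The article names no specific model or SDK, so the model call below is a deliberately hypothetical interface (`ask_model`), stubbed here so the sketch runs without any API.

```python
def simulate_respondents(question, personas, ask_model):
    """Sketch of LLM-simulated survey participants: each persona becomes a
    prompt and the model's reply is logged as a simulated response.
    `ask_model` is a placeholder for a real LLM call -- a hypothetical
    interface, not a specific SDK."""
    responses = []
    for persona in personas:
        prompt = (f"You are {persona}. Answer the survey question "
                  f"in one sentence.\nQ: {question}")
        responses.append({"persona": persona, "answer": ask_model(prompt)})
    return responses

# A stub model so the sketch is self-contained; swap in a real LLM client.
stub = lambda prompt: "It depends on enforcement, not just the policy text."
out = simulate_respondents(
    "Should the city expand congestion pricing?",
    ["a retired bus driver", "a downtown shop owner"],
    stub,
)
print(len(out))  # 2 simulated responses, one per persona
```

In a real study the persona set would be drawn to mirror the target population, which is exactly where the bias concerns raised below come into play.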

“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 per cent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”

One issue the authors identified, however, is that LLMs often learn to exclude sociocultural biases, raising the question of whether models are correctly reflecting the populations they study…(More)”

Artificial Intelligence for Emergency Response


Paper by Ayan Mukhopadhyay: “Emergency response management (ERM) is a challenge faced by communities across the globe. First responders must respond to various incidents, such as fires, traffic accidents, and medical emergencies. They must respond quickly to incidents to minimize the risk to human life. Consequently, considerable attention has been devoted to studying emergency incidents and response in the last several decades. In particular, data-driven models help reduce human and financial loss and improve design codes, traffic regulations, and safety measures. This tutorial paper explores four sub-problems within emergency response: incident prediction, incident detection, resource allocation, and resource dispatch. We aim to present mathematical formulations and broad frameworks for each of these problems. We also share open-source (synthetic) data from a large metropolitan area in the USA for future work on data-driven emergency response…(More)”.
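Of the four sub-problems, resource dispatch is the easiest to illustrate with a baseline: assign each incoming incident to the nearest available responder. This greedy sketch is not the paper’s formulation (which optimizes over predicted future demand); it is a minimal toy with made-up unit and incident names.

```python
import math

def greedy_dispatch(incidents, responders):
    """Assign each incident (in arrival order) to the nearest free responder.
    A simple baseline for the dispatch sub-problem; real systems optimize
    expected response time over predicted future demand."""
    free = dict(responders)  # responder id -> (x, y) position
    assignment = {}
    for inc_id, loc in incidents:
        if not free:
            assignment[inc_id] = None  # no unit available
            continue
        nearest = min(free, key=lambda r: math.dist(free[r], loc))
        assignment[inc_id] = nearest
        del free[nearest]  # unit is busy until it clears the call
    return assignment

incidents = [("fire-1", (0, 0)), ("crash-2", (9, 9))]
responders = {"unit-A": (1, 1), "unit-B": (8, 8)}
print(greedy_dispatch(incidents, responders))
# {'fire-1': 'unit-A', 'crash-2': 'unit-B'}
```

The gap between this myopic rule and demand-aware allocation is precisely what the incident-prediction sub-problem feeds: a unit held back for a forecast hotspot can beat the nearest-unit heuristic on average response time.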

Fighting poverty with synthetic data


Article by Jack Gisby, Anna Kiknadze, Thomas Mitterling, and Isabell Roitner-Fransecky: “If you have ever used a smartwatch or other wearable tech to track your steps, heart rate, or sleep, you are part of the “quantified self” movement. You are voluntarily submitting millions of intimate data points for collection and analysis. The Economist highlighted the benefits of good quality personal health and wellness data—increased physical activity, more efficient healthcare, and constant monitoring of chronic conditions. However, not everyone is enthusiastic about this trend. Many fear corporations will use the data to discriminate against the poor and vulnerable. For example, insurance firms could exclude patients based on preconditions obtained from personal data sharing.

Can we strike a balance between protecting the privacy of individuals and gathering valuable information? This blog explores applying a synthetic populations approach in New York City, a city with an established reputation for using big data approaches to support urban management, including for welfare provisions and targeted policy interventions.

To better understand poverty rates at the census tract level, World Data Lab, with the support of the Sloan Foundation, generated a synthetic population based on the borough of Brooklyn. Synthetic populations rely on a combination of microdata and summary statistics:

  • Microdata consists of personal information at the individual level. In the U.S., such data is available at the Public Use Microdata Area (PUMA) level. PUMA are geographic areas partitioning the state, containing no fewer than 100,000 people each. However, due to privacy concerns, microdata is unavailable at the more granular census tract level. Microdata consists of both household and individual-level information, including last year’s household income, the household size, the number of rooms, and the age, sex, and educational attainment of each individual living in the household.
  • Summary statistics are based on populations rather than individuals and are available at the census tract level, given that there are fewer privacy concerns. Census tracts are small statistical subdivisions of a county, averaging about 4,000 inhabitants. In New York City, a census tract roughly equals a building block. Similar to microdata, summary statistics are available for individuals and households. On the census tract level, we know the total population, the corresponding demographic breakdown, the number of households within different income brackets, the number of households by number of rooms, and other similar variables…(More)”.
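The combination described above can be sketched concretely: resample individual-level PUMA microdata records so that the synthetic tract matches the tract-level summary counts. This is a deliberately simplified, one-variable illustration with invented records; World Data Lab’s actual pipeline is not described here, and production methods typically fit several marginals at once (e.g. via iterative proportional fitting).

```python
import random

def synthesize_tract(microdata, tract_counts, seed=0):
    """Build a synthetic tract population by resampling PUMA microdata
    records so the synthetic marginals match tract-level summary counts.
    One-variable toy version: only the income-bracket marginal is fitted."""
    rng = random.Random(seed)
    by_bracket = {}
    for record in microdata:
        by_bracket.setdefault(record["income_bracket"], []).append(record)
    synthetic = []
    for bracket, count in tract_counts.items():
        donors = by_bracket[bracket]  # microdata records in this bracket
        synthetic.extend(rng.choice(donors) for _ in range(count))
    return synthetic

# Toy PUMA microdata (household records) and tract summary statistics.
microdata = [
    {"income_bracket": "low", "age": 34, "rooms": 2},
    {"income_bracket": "low", "age": 61, "rooms": 3},
    {"income_bracket": "high", "age": 45, "rooms": 5},
]
tract_counts = {"low": 3, "high": 1}  # households per bracket in this tract
pop = synthesize_tract(microdata, tract_counts)
print(len(pop))  # 4 synthetic households matching the tract marginals
```

The privacy benefit is that each synthetic record is a plausible household, not a real one: the tract-level marginals are public, and the microdata donors remain identifiable only at the much coarser PUMA level.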