Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.
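Facewatch's internals are proprietary, but the matching step described above (comparing a captured face against a shared watchlist and alerting on a hit) can be sketched in a few lines. This is a minimal illustrative sketch, assuming faces have already been reduced to embedding vectors by some recognition model; the function names and the 0.8 similarity threshold are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face-embedding vectors, in [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    # watchlist maps an entry id to a stored face embedding;
    # return (entry_id, score) of the best match above threshold, else None
    best_id, best_score = None, -1.0
    for entry_id, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = entry_id, score
    return (best_id, best_score) if best_score >= threshold else None
```

In a real deployment the embeddings would come from a trained face-recognition model, and the threshold would be tuned to trade false alerts against misses, the error that matters most when a wrong match can get someone barred from a shop.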

Gamifying medical data labeling to advance AI


Article by Zach Winn: “…Duhaime began exploring ways to leverage collective intelligence to improve medical diagnoses. In one experiment, he trained groups of lay people and medical school students that he describes as “semiexperts” to classify skin conditions, finding that by combining the opinions of the highest performers he could outperform professional dermatologists. He also found that by combining algorithms trained to detect skin cancer with the opinions of experts, he could outperform either method on its own….The DiagnosUs app, which Duhaime developed with Centaur co-founders Zach Rausnitz and Tom Gellatly, is designed to help users test and improve their skills. Duhaime says about half of users are medical school students and the other half are mostly doctors, nurses, and other medical professionals…
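The aggregation idea Duhaime describes, combining the opinions of the highest performers, can be illustrated with a simple accuracy-weighted vote: each rater's label counts in proportion to their historical accuracy. This is a hypothetical sketch, not Centaur's actual algorithm; the weighting scheme and names are assumptions.

```python
from collections import defaultdict

def weighted_vote(opinions, accuracies):
    # opinions: {rater_id: label}; accuracies: {rater_id: historical accuracy in [0, 1]}
    # Unknown raters fall back to a neutral 0.5 weight.
    scores = defaultdict(float)
    for rater, label in opinions.items():
        scores[label] += accuracies.get(rater, 0.5)
    # Return the label with the highest total weight
    return max(scores, key=scores.get)
```

Under this scheme, one rater with a strong track record can outvote several weaker ones, which is one way "combining the opinions of the highest performers" can beat a plain majority.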

The approach stands in sharp contrast to traditional data labeling and AI content moderation, which are typically outsourced to low-resource countries.

Centaur’s approach produces accurate results, too. In a paper with researchers from Brigham and Women’s Hospital, Massachusetts General Hospital (MGH), and Eindhoven University of Technology, Centaur showed its crowdsourced opinions labeled lung ultrasounds as reliably as experts did…

Centaur has found that the best performers come from surprising places. In 2021, to collect expert opinions on EEG patterns, researchers held a contest through the DiagnosUs app at a conference featuring about 50 epileptologists, each with more than 10 years of experience. The organizers made a custom shirt to give to the contest’s winner, who they assumed would be in attendance at the conference.

But when the results came in, a pair of medical students in Ghana, Jeffery Danquah and Andrews Gyabaah, had beaten everyone in attendance. The highest-ranked conference attendee had come in ninth…(More)”

Why picking citizens at random could be the best way to govern the A.I. revolution


Article by Hélène Landemore, Andrew Sorota, and Audrey Tang: “Testifying before Congress last month about the risks of artificial intelligence, Sam Altman, the OpenAI CEO behind the massively popular large language model (LLM) ChatGPT, and Gary Marcus, a psychology professor at NYU famous for his positions against A.I. utopianism, both agreed on one point: They called for the creation of a government agency comparable to the FDA to regulate A.I. Marcus also suggested scientific experts should be given early access to new A.I. prototypes to be able to test them before they are released to the public.

Strikingly, however, neither of them mentioned the public, namely the billions of ordinary citizens around the world that the A.I. revolution, in all its uncertainty, is sure to affect. Don’t they also deserve to be included in decisions about the future of this technology?

We believe a global, democratic approach–not an exclusively technocratic one–is the only adequate answer to what is a global political and ethical challenge. Sam Altman himself stated in an earlier interview that in his “dream scenario,” a global deliberation involving all humans would be used to figure out how to govern A.I.

There are already proofs of concept for the various elements that a global, large-scale deliberative process would require in practice. By drawing on these diverse and complementary examples, we can turn this dream into a reality.

Deliberations based on random selection have grown in popularity on the local and national levels, with close to 600 cases documented by the OECD in the last 20 years. Their appeal lies in capturing a unique array of voices and lived experiences, thereby generating policy recommendations that better track the preferences of the larger population and are more likely to be accepted. Famous examples include the 2012 and 2016 Irish citizens’ assemblies on marriage equality and abortion, which led to successful referendums and constitutional change, as well as the 2019 and 2022 French citizens’ conventions on climate justice and end-of-life issues.
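Citizens' assemblies of the kind described above typically combine random selection with demographic quotas, so the drawn panel mirrors the larger population. A minimal sketch of such a lottery; the field name, quota structure, and fixed seed are illustrative assumptions, not any assembly's actual procedure.

```python
import random

def stratified_sample(population, strata_key, quotas, seed=42):
    # population: list of dicts describing eligible citizens
    # strata_key: the demographic field to balance on (e.g. region, age band)
    # quotas: {stratum_value: number of seats}
    rng = random.Random(seed)
    panel = []
    for stratum, seats in quotas.items():
        pool = [p for p in population if p[strata_key] == stratum]
        panel.extend(rng.sample(pool, min(seats, len(pool))))
    return panel
```

Real sortition processes balance several attributes at once (age, gender, region, education), but the principle is the same: random draws within quota cells.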

Taiwan has successfully experimented with mass consultations through digital platforms like Pol.is, which employs machine learning to identify consensus among vast numbers of participants. Digitally engaged participation has helped aggregate public opinion on hundreds of polarizing issues in Taiwan–such as regulating Uber–involving half of its 23.5 million people. Digital participation can also augment other smaller-scale forms of citizen deliberations, such as those taking place in person or based on random selection…(More)”.
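Pol.is groups participants by their voting patterns and surfaces statements that attract agreement across groups. The sketch below illustrates only that final consensus step, assuming the participant clusters are already computed (Pol.is itself derives them with dimensionality reduction and clustering); the 0.6 agreement threshold is a hypothetical choice.

```python
def consensus_statements(votes, clusters, threshold=0.6):
    # votes[participant][statement] is 1 (agree), -1 (disagree), or 0 (pass)
    # clusters: list of participant-id groups found by some clustering step
    # A statement counts as consensus if its agree-rate meets the
    # threshold in EVERY cluster, not just overall.
    statements = {s for p in votes for s in votes[p]}
    result = []
    for s in sorted(statements):
        agree_everywhere = all(
            sum(1 for p in group if votes.get(p, {}).get(s) == 1) / len(group) >= threshold
            for group in clusters
        )
        if agree_everywhere:
            result.append(s)
    return result
```

Requiring agreement within every cluster, rather than a simple overall majority, is what lets a platform like Pol.is highlight common ground between otherwise opposed camps.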

Artificial Intelligence for Emergency Response


Paper by Ayan Mukhopadhyay: “Emergency response management (ERM) is a challenge faced by communities across the globe. First responders must respond to various incidents, such as fires, traffic accidents, and medical emergencies. They must respond quickly to incidents to minimize the risk to human life. Consequently, considerable attention has been devoted to studying emergency incidents and response in the last several decades. In particular, data-driven models help reduce human and financial loss and improve design codes, traffic regulations, and safety measures. This tutorial paper explores four sub-problems within emergency response: incident prediction, incident detection, resource allocation, and resource dispatch. We aim to present mathematical formulations and broad solution frameworks for each of these problems. We also share open-source (synthetic) data from a large metropolitan area in the USA for future work on data-driven emergency response…(More)”.
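The dispatch sub-problem mentioned above can be illustrated with the simplest baseline: greedily assigning each incident, in arrival order, to the nearest free responder. This is a toy sketch under assumed data structures, not the paper's formulation, which treats dispatch as a formal optimization problem.

```python
import math

def nearest_free_responder(incident_pos, responders):
    # responders: {responder_id: {"pos": (x, y), "free": bool}}
    best_id, best_dist = None, float("inf")
    for rid, r in responders.items():
        if not r["free"]:
            continue
        d = math.dist(incident_pos, r["pos"])  # Euclidean distance
        if d < best_dist:
            best_id, best_dist = rid, d
    return best_id

def dispatch(incidents, responders):
    # Greedy baseline: serve incidents in arrival order, each taking
    # the closest responder that is still free.
    assignments = {}
    for inc_id, pos in incidents:
        rid = nearest_free_responder(pos, responders)
        if rid is not None:
            responders[rid]["free"] = False
            assignments[inc_id] = rid
    return assignments
```

Greedy dispatch is myopic (it can strand a later, more urgent incident), which is precisely why the paper frames allocation and dispatch as optimization problems rather than simple rules.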

Engaging citizens in innovation policy: Why, when and how?


OECD Report: “Innovation policies need to be socially embedded for them to effectively contribute to addressing major societal challenges. Engaging citizens in innovation policymaking can help define long-term policy priorities, enhance the quality and legitimacy of policy decisions, and increase the visibility of innovation in society. However, engaging all groups in society and effectively integrating citizens’ inputs in policy processes is challenging. This paper discusses why, when and how to engage citizens in innovation policymaking. It also addresses practical considerations for organising these processes, such as reaching out to diverse publics and selecting the optimal mix of methods and tools…(More)”.

Local Data Spaces: Leveraging trusted research environments for secure location-based policy research


Paper by Jacob L. Macdonald, Mark A. Green, Maurizio Gibin, Simon Leech, Alex Singleton and Paul Longley: “This work explores the use of Trusted Research Environments for the secure analysis of sensitive, record-level data on local coronavirus disease-2019 (COVID-19) inequalities and economic vulnerabilities. The Local Data Spaces (LDS) project was a targeted rapid response and cross-disciplinary collaborative initiative using the Office for National Statistics’ Secure Research Service for localized comparison and analysis of health and economic outcomes over the course of the COVID-19 pandemic. Embedded researchers worked on co-producing a range of locally focused insights and reports built on secure secondary data and made appropriately open and available to the public and all local stakeholders for wider use. With secure infrastructure and overall data governance practices in place, accredited researchers were able to access a wealth of detailed data and resources to facilitate more targeted local policy analysis. Working with data within such infrastructure as part of a larger research project required advance planning and coordination to be efficient. As new and novel granular data resources become securely available (e.g., record-level administrative digital health records or consumer data), a range of local policy insights can be gained across issues of public health or local economic vitality. Many of these new forms of data, however, come with a large degree of sensitivity around issues of personal identifiability and how the data is used for public-facing research, and they require secure and responsible use. Learning to work appropriately with secure data and research environments can open up many avenues for collaboration and analysis…(More)”

Systems Thinking, Big Data and Public Policy


Article by Mauricio Covarrubias: “Systems thinking and big data analysis are two fundamental tools in the formulation of public policies due to their potential to provide a more comprehensive and evidence-based understanding of the problems and challenges that a society faces.

Systems thinking is important in the formulation of public policies because it allows for a holistic and integrated approach to addressing the complex challenges and issues that a society faces. According to Ilona Kickbusch and David Gleicher, “Addressing wicked problems requires a high level of systems thinking. If there is a single lesson to be drawn from the first decade of the 21st century, it is that surprise, instability and extraordinary change will continue to be regular features of our lives.”

Public policies often involve multiple stakeholders, interrelated factors and unintended consequences, which require a deep understanding of how the system as a whole operates. Systems thinking enables policymakers to identify the key factors that influence a problem and how they relate to each other, allowing them to develop solutions that more effectively address the issues. Instead of trying to address a problem in isolation, systems thinking considers the problem as part of a whole and seeks solutions that address the root causes.

Additionally, systems thinking helps policymakers anticipate the unintended consequences of their decisions and actions. By understanding how different components of the system interact, they can predict the possible side effects of a policy in other areas. This can help avoid decisions that have unintended consequences…(More)”.

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

Politicians love to appeal to common sense – but does it trump expertise?


Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.

We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.

Experts have gained knowledge and experience in a given speciality, in which case politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).

Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often used to justify that what we believe is right or wrong, without coming up with evidence.

But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technology discoveries…

The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.

Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured nothing gained”…(More)”

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing a mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets of Telegram posts: (1) a large corpus with over 2.3 million posts and (2) a dataset annotated at the sentence level to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)” See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
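The F2 score reported above is the F-beta measure with beta = 2, which weights recall twice as heavily as precision, a sensible choice when missing a real violation is costlier than a false alarm. A small sketch of the computation from raw counts (the counts in the test are made up for illustration, not the paper's):

```python
def f_beta(tp, fp, fn, beta=2.0):
    # F-beta from raw counts: true positives, false positives, false negatives.
    # beta > 1 favors recall; beta = 1 gives the familiar F1 score.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 2, a classifier that catches most HRV mentions at the cost of some extra false positives scores better than an equally accurate but more conservative one, matching the monitoring use case the authors describe.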