Privacy guarantees for personal mobility data in humanitarian response


Paper by Nitin Kohli, Emily Aiken & Joshua E. Blumenstock: “Personal mobility data from mobile phones and other sensors are increasingly used to inform policymaking during pandemics, natural disasters, and other humanitarian crises. However, even aggregated mobility traces can reveal private information about individual movements to potentially malicious actors. This paper develops and tests an approach for releasing private mobility data, which provides formal guarantees over the privacy of the underlying subjects. Specifically, we (1) introduce an algorithm for constructing differentially private mobility matrices and derive privacy and accuracy bounds on this algorithm; (2) use real-world data from mobile phone operators in Afghanistan and Rwanda to show how this algorithm can enable the use of private mobility data in two high-stakes policy decisions: pandemic response and the distribution of humanitarian aid; and (3) discuss practical decisions that need to be made when implementing this approach, such as how to optimally balance privacy and accuracy. Taken together, these results can help enable the responsible use of private mobility data in humanitarian response…(More)”.
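The paper derives its own algorithm and bounds, which are not reproduced in the excerpt above. As a rough illustration of the general idea, the standard Laplace mechanism can be applied to an aggregated origin-destination matrix: count trips between regions, then add noise calibrated to the sensitivity of the counts. This is a minimal sketch, not the authors' method; the region names, the `epsilon` value, and the one-trip-per-person sensitivity assumption are all illustrative.

```python
# Hypothetical sketch: a differentially private origin-destination
# mobility matrix via the Laplace mechanism. This is NOT the paper's
# algorithm, only the textbook mechanism it builds on.
import math
import random


def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_mobility_matrix(trips, regions, epsilon, seed=0):
    """Aggregate (origin, destination) trips into counts, then add
    Laplace noise with scale = sensitivity / epsilon.

    Assumes each individual contributes at most one trip, so the
    sensitivity of every cell count is 1.
    """
    rng = random.Random(seed)
    counts = {(o, d): 0 for o in regions for d in regions}
    for origin, dest in trips:
        counts[(origin, dest)] += 1
    scale = 1.0 / epsilon  # sensitivity 1 under the one-trip assumption
    return {od: c + laplace_noise(scale, rng) for od, c in counts.items()}


# Illustrative data: three trips between two (hypothetical) regions.
trips = [("Kabul", "Herat"), ("Kabul", "Herat"), ("Herat", "Kabul")]
noisy = private_mobility_matrix(trips, ["Kabul", "Herat"], epsilon=1.0)
```

Smaller `epsilon` means stronger privacy but noisier counts; the paper's contribution includes formal accuracy bounds that make this privacy-accuracy trade-off explicit for mobility matrices.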

Review of relevance of the OECD Recommendation on ICTs and the Environment


OECD Policy Report: “The OECD Recommendation on Information and Communication Technologies (ICTs) and the Environment was adopted in 2010 and recognised the link between digital technologies and environmental sustainability. Today, advances in digital technologies underscore their growing role in achieving climate resilience. At the same time, digital technologies and their underlying infrastructure have an environmental footprint that must be managed. This report takes stock of technology and policy developments since the adoption of the Recommendation and provides a gap analysis and assessment of its relevance, concluding that the Recommendation remains relevant and identifying areas for revision…(More)”.

Artificial Intelligence and the Future of Work


Report by the National Academies: “AI technology is at an inflection point: a surge of technological progress has driven the rapid development and adoption of generative AI systems, such as ChatGPT, which are capable of generating text, images, or other content based on user requests.

This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.

This National Academies’ report evaluates recent advances in AI technology and their implications for economic productivity, job stability, and income inequality, identifying research opportunities and data needs to equip workers and policymakers to flexibly respond to AI developments…(More)”

Using generative AI for crisis foresight


Article by Antonin Kenens and Josip Ivanovic: “What if the next time you discuss a complex future and its potential crises, it could be transformed from a typical meeting into an immersive experience? That’s exactly what we did at a recent strategy meeting of UNDP’s Crisis Bureau and Bureau for Policy and Programme Support.  

In an environment where workshops and meetings can often feel monotonous, we aimed to break the mold. By using AI-generated videos, we brought our discussion to life, reflecting the realities of developing nations and immersing participants in the critical issues affecting our region.

In today’s rapidly changing world, the ability to anticipate and prepare for potential crises is more crucial than ever. Crisis foresight involves identifying and analyzing possible future crises to develop strategies that can mitigate their impact. This proactive approach, highlighted multiple times in the Pact for the Future, is essential for effective governance and sustainable development in Europe and Central Asia and the rest of the world.

Visualization of the consequences of pollution in Joraland.

Our idea behind creating AI-generated videos was to provide a vivid, immersive experience that would engage viewers and prompt them to share their reflections on the challenges and opportunities in developing countries. We presented fictional yet relatable scenarios to gather the participants of the meeting around a common view and create a sense of urgency and importance around UNDP’s strategic priorities and initiatives.

This approach not only captured attention but also sparked deeper engagement and thought-provoking conversations…(More)”.

What AI Can’t Do for Democracy


Essay by Daniel Berliner: “In short, there is increasing optimism among both theorists and practitioners over the potential for technology-enabled civic engagement to rejuvenate or deepen democracy. Is this optimism justified?

The answer depends on how we think about what civic engagement can do. Political representatives are often unresponsive to the preferences of ordinary people. Their misperceptions of public needs and preferences are partly to blame, but the sources of democratic dysfunction are much deeper and more structural than information alone. Working to ensure many more “citizens’ voices are truly heard” will thus do little to improve government responsiveness in contexts where the distribution of power means that policymakers have no incentive to do what citizens say. And as some critics have argued, it can even distract from recognizing and remedying other problems, creating a veneer of legitimacy—what health policy expert Sherry Arnstein once famously derided as mere “window dressing.”

Still, there are plenty of cases where contributions from citizens can highlight new problems that need addressing, new perspectives by which issues are understood, and new ideas for solving public problems—from administrative agencies seeking public input to city governments seeking to resolve resident complaints and citizens’ assemblies deliberating on climate policy. But even in these and other contexts, there is reason to doubt AI’s usefulness across the board. The possibilities of AI for civic engagement depend crucially on what exactly it is that policymakers want to learn from the public. For some types of learning, applications of AI can make major contributions to enhance the efficiency and efficacy of information processing. For others, there is no getting around the fundamental needs for human attention and context-specific knowledge in order to adequately make sense of public voices. We need to better understand these differences to avoid wasting resources on tools that might not deliver useful information…(More)”.

The Emergent Landscape of Data Commons: A Brief Survey and Comparison of Existing Initiatives


Article by Stefaan G. Verhulst and Hannah Chafetz: “With the increased attention on the need for data to advance AI, data commons initiatives around the world are redefining how data can be accessed and re-used for societal benefit. These initiatives focus on generating access to data from various sources for a public purpose and are governed by communities themselves. While diverse in focus, from health and mobility to language and environmental data, data commons are united by a common goal: democratizing access to data to fuel innovation and tackle global challenges.

This includes innovation in the context of artificial intelligence (AI). Data commons are providing the framework to make pools of diverse data available in machine-understandable formats for responsible AI development and deployment. By providing access to high-quality data sources with open licensing, data commons can help increase the quantity of training data in a less exploitative fashion, minimize AI providers’ reliance on data extracted across the internet without an open license, and increase the quality of the AI output (while reducing misinformation).

Over the last few months, the Open Data Policy Lab (a collaboration between The GovLab and Microsoft) has conducted various research initiatives to explore these topics further and understand:

(1) how the concept of a data commons is changing in the context of artificial intelligence, and

(2) current efforts to advance the next generation of data commons.

In what follows we provide a summary of our findings thus far. We hope it inspires more data commons use cases for responsible AI innovation in the public’s interest…(More)”.

Two Open Science Foundations: Data Commons and Stewardship as Pillars for Advancing the FAIR Principles and Tackling Planetary Challenges


Article by Stefaan Verhulst and Jean Claude Burgelman: “Today the world is facing three major planetary challenges: war and peace, steering Artificial Intelligence and making the planet a healthy Anthropocene. As they are closely interrelated, they represent an era of “polycrisis”, to use the term Adam Tooze has coined. There are no simple solutions or quick fixes to these (and other) challenges; their interdependencies demand a multi-stakeholder, interdisciplinary approach.

As world leaders and experts convene in Baku for The 29th session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP29), the urgency of addressing these global crises has never been clearer. A crucial part of addressing these challenges lies in advancing science — particularly open science, underpinned by data made available leveraging the FAIR principles (Findable, Accessible, Interoperable, and Reusable). In this era of computation, the transformative potential of research depends on the seamless flow and reuse of high-quality data to unlock breakthrough insights and solutions. Ensuring data is available in reusable, interoperable formats not only accelerates the pace of scientific discovery but also expedites the search for solutions to global crises.

Image of the retreat of the Columbia glacier by Jesse Allen, using Landsat data from the U.S. Geological Survey. Free to re-use from NASA Visible Earth.

While FAIR principles provide a vital foundation for making data accessible, interoperable and reusable, translating these principles into practice requires robust institutional approaches. Toward that end, we argue below that two foundational pillars must be strengthened:

  • Establishing Data Commons: The need for shared data ecosystems where resources can be pooled, accessed, and re-used collectively, breaking down silos and fostering cross-disciplinary collaboration.
  • Enabling Data Stewardship: Systematic and responsible data reuse requires more than access; it demands stewardship — equipping institutions and scientists with the capabilities to maximize the value of data while safeguarding its responsible use is essential…(More)”.

A Second Academic Exodus From X?


Article by Josh Moody: “Two years ago, after Elon Musk bought Twitter for $44 billion, promptly renaming it X, numerous academics decamped from the platform. Now, in the wake of a presidential election fraught with online disinformation, a second exodus from the social media site appears underway.

Academics, including some with hundreds of thousands of followers, announced departures from the platform in the immediate aftermath of the election, decrying the toxicity of the website and voicing objections to Musk and how he wielded the platform to back President-elect Donald Trump. The business mogul threw millions of dollars behind Trump and personally campaigned for him this fall. Musk also personally advanced various debunked conspiracy theories during the election cycle.

Amid another wave of exits, some users see this as the end of Academic Twitter, which was already arguably in its death throes…

LeBlanc, Kamola and Rosen all mentioned that they were moving to the platform Bluesky, which has grown to 14.5 million users, welcoming more than 700,000 new accounts in recent days. In September, Bluesky had nine million users…

A study published in PS: Political Science & Politics last month concluded that academics began to engage less after Musk bought the platform. But the peak of disengagement wasn’t when the billionaire took over the site in October 2022 but rather the next month, when he reinstated Donald Trump’s account, which the platform’s previous owners had deactivated following the Jan. 6, 2021, insurrection that Trump encouraged.

The researchers reviewed 15,700 accounts from academics in economics, political science, sociology and psychology for their study.

James Bisbee, a political science professor at Vanderbilt University and article co-author, wrote via email that changes to the platform, particularly to the application programming interface, or API, undermined their ability to collect data for their research.

“Twitter used to be an amazing source of data for political scientists (and social scientists more broadly) thanks in part to its open data ethos,” Bisbee wrote. “Since Musk’s takeover, this is no longer the case, severely limiting the types of conclusions we could draw, and theories we could test, on this platform.”

To Bisbee, that loss is an understated issue: “Along with many other troubling developments on X since the change in ownership, the amputation of data access should not be ignored.”…(More)”.

The Death of Search


Article by Matteo Wong: “For nearly two years, the world’s biggest tech companies have said that AI will transform the web, your life, and the world. But first, they are remaking the humble search engine.

Chatbots and search, in theory, are a perfect match. A standard Google search interprets a query and pulls up relevant results; tech companies have spent tens or hundreds of millions of dollars engineering chatbots that interpret human inputs, synthesize information, and provide fluent, useful responses. No more keyword refining or scouring Wikipedia—ChatGPT will do it all. Search is an appealing target, too: Shaping how people navigate the internet is tantamount to shaping the internet itself.

Months of prophesying about generative AI have now culminated, almost all at once, in what may be the clearest glimpse yet into the internet’s future. After a series of limited releases and product demos, marred by various setbacks and embarrassing errors, tech companies are debuting AI-powered search engines as fully realized, all-inclusive products. Last Monday, Google announced that it would launch its AI Overviews in more than 100 new countries; that feature will now reach more than 1 billion users a month. Days later, OpenAI announced a new search function in ChatGPT, available to paid users for now and soon opening to the public. The same afternoon, the AI-search start-up Perplexity shared instructions for making its “answer engine” the default search tool in your web browser.

For the past week, I have been using these products in a variety of ways: to research articles, follow the election, and run everyday search queries. In turn I have scried, as best I can, into the future of how billions of people will access, relate to, and synthesize information. What I’ve learned is that these products are at once unexpectedly convenient, frustrating, and weird. These tools’ current iterations surprised and, at times, impressed me, yet even when they work perfectly, I’m not convinced that AI search is a wise endeavor…(More)”.

Congress should designate an entity to oversee data security, GAO says


Article by Matt Bracken: “Federal agencies may need to rethink how they handle individuals’ personal data to protect their civil rights and civil liberties, a congressional watchdog said in a new report Tuesday.

Without federal guidance governing the protection of the public’s civil rights and liberties, agencies have pursued a patchwork system of policies tied to the collection, sharing and use of data, the Government Accountability Office said.

To address that problem head-on, the GAO is recommending that Congress select “an appropriate federal entity” to produce guidance or regulations regarding data protection that would apply to all agencies, giving that entity “the explicit authority to make needed technical and policy choices or explicitly stating Congress’s own choices.”

That recommendation was formed after the GAO sent a questionnaire to all 24 Chief Financial Officers Act agencies asking for information about their use of emerging technologies and data capabilities and how they’re guaranteeing that personally identifiable information is safeguarded.

The GAO found that 16 of those CFO Act agencies have policies or procedures in place to protect civil rights and civil liberties with regard to data use, while the other eight have not taken steps to do the same.

The most commonly cited issues for agencies in their efforts to protect the civil rights and civil liberties of the public were “complexities in handling protections associated with new and emerging technologies” and “a lack of qualified staff possessing needed skills in civil rights, civil liberties, and emerging technologies.”

“Further, eight of the 24 agencies believed that additional government-wide law or guidance would strengthen consistency in addressing civil rights and civil liberties protections,” the GAO wrote. “One agency noted that such guidance could eliminate the hodge-podge approach to the governance of data and technology.”

All 24 CFO Act agencies have internal offices to “handle the protection of the public’s civil rights as identified in federal laws,” with much of that work centered on the handling of civil rights violations and related complaints. Four agencies — the departments of Defense, Homeland Security, Justice and Education — have offices to specifically manage civil liberty protections across their entire agencies. The other 20 agencies have mostly adopted a “decentralized approach to protecting civil liberties, including when collecting, sharing, and using data,” the GAO noted…(More)”.