Paper by Nitin Kohli, Emily Aiken & Joshua E. Blumenstock: “Personal mobility data from mobile phones and other sensors are increasingly used to inform policymaking during pandemics, natural disasters, and other humanitarian crises. However, even aggregated mobility traces can reveal private information about individual movements to potentially malicious actors. This paper develops and tests an approach for releasing private mobility data, which provides formal guarantees over the privacy of the underlying subjects. Specifically, we (1) introduce an algorithm for constructing differentially private mobility matrices and derive privacy and accuracy bounds on this algorithm; (2) use real-world data from mobile phone operators in Afghanistan and Rwanda to show how this algorithm can enable the use of private mobility data in two high-stakes policy decisions: pandemic response and the distribution of humanitarian aid; and (3) discuss practical decisions that need to be made when implementing this approach, such as how to optimally balance privacy and accuracy. Taken together, these results can help enable the responsible use of private mobility data in humanitarian response…(More)”.
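The excerpt does not reproduce the paper's algorithm, but the standard building block behind differentially private mobility releases, the Laplace mechanism applied to an aggregated origin-destination matrix, can be sketched as follows. This is a minimal illustration under the assumption that each subscriber contributes a bounded number of trips; it is not the authors' specific method, and all function and parameter names are our own.

```python
import numpy as np

def private_mobility_matrix(counts: np.ndarray, epsilon: float,
                            max_trips_per_user: int = 1) -> np.ndarray:
    """Release an origin-destination count matrix under epsilon-differential privacy.

    If each user contributes at most `max_trips_per_user` trips to the matrix,
    its L1 sensitivity is max_trips_per_user, so adding Laplace noise with
    scale sensitivity/epsilon to every cell satisfies epsilon-DP.
    """
    scale = max_trips_per_user / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=counts.shape)
    # Clip at zero: trip counts cannot be negative (post-processing preserves DP).
    return np.clip(counts + noise, 0, None)

# Example: trips between 3 regions (hypothetical counts)
true_counts = np.array([[120., 30., 5.],
                        [25., 200., 10.],
                        [8., 12., 90.]])
released = private_mobility_matrix(true_counts, epsilon=1.0)
```

A larger epsilon means less noise and weaker privacy; the paper's contribution includes deriving formal bounds on exactly this privacy-accuracy trade-off.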
Using generative AI for crisis foresight
Article by Antonin Kenens and Josip Ivanovic: “What if the next time you discuss a complex future and its potential crises, it could be transformed from a typical meeting into an immersive experience? That’s exactly what we did at a recent strategy meeting of UNDP’s Crisis Bureau and Bureau for Policy and Programme Support.
In an environment where workshops and meetings can often feel monotonous, we aimed to break the mold. By using AI-generated videos, we brought our discussion to life, reflecting the realities of developing nations and immersing participants in the critical issues affecting our region. In today’s rapidly changing world, the ability to anticipate and prepare for potential crises is more crucial than ever. Crisis foresight involves identifying and analyzing possible future crises to develop strategies that can mitigate their impact. This proactive approach, highlighted multiple times in the Pact for the Future, is essential for effective governance and sustainable development in Europe and Central Asia and the rest of the world.
Our idea behind creating AI-generated videos was to provide a vivid, immersive experience that would engage viewers and stimulate active participation, inviting them to share their reflections on the challenges and opportunities in developing countries. We presented fictional yet relatable scenarios to rally the meeting’s participants around a common view and to create a sense of urgency and importance around UNDP’s strategic priorities and initiatives.
This approach not only captured attention but also sparked deeper engagement and thought-provoking conversations…(More)”.
What AI Can’t Do for Democracy
Essay by Daniel Berliner: “In short, there is increasing optimism among both theorists and practitioners over the potential for technology-enabled civic engagement to rejuvenate or deepen democracy. Is this optimism justified?
The answer depends on how we think about what civic engagement can do. Political representatives are often unresponsive to the preferences of ordinary people. Their misperceptions of public needs and preferences are partly to blame, but the sources of democratic dysfunction are much deeper and more structural than information alone. Working to ensure many more “citizens’ voices are truly heard” will thus do little to improve government responsiveness in contexts where the distribution of power means that policymakers have no incentive to do what citizens say. And as some critics have argued, it can even distract from recognizing and remedying other problems, creating a veneer of legitimacy—what health policy expert Sherry Arnstein once famously derided as mere “window dressing.”
Still, there are plenty of cases where contributions from citizens can highlight new problems that need addressing, new perspectives by which issues are understood, and new ideas for solving public problems—from administrative agencies seeking public input to city governments seeking to resolve resident complaints and citizens’ assemblies deliberating on climate policy. But even in these and other contexts, there is reason to doubt AI’s usefulness across the board. The possibilities of AI for civic engagement depend crucially on what exactly it is that policymakers want to learn from the public. For some types of learning, applications of AI can make major contributions to enhance the efficiency and efficacy of information processing. For others, there is no getting around the fundamental needs for human attention and context-specific knowledge in order to adequately make sense of public voices. We need to better understand these differences to avoid wasting resources on tools that might not deliver useful information…(More)”.
The Emergent Landscape of Data Commons: A Brief Survey and Comparison of Existing Initiatives
Article by Stefaan G. Verhulst and Hannah Chafetz: “With the increased attention on the need for data to advance AI, data commons initiatives around the world are redefining how data can be accessed and re-used for societal benefit. These initiatives focus on generating access to data from various sources for a public purpose and are governed by the communities themselves. While diverse in focus—from health and mobility to language and environmental data—data commons are united by a common goal: democratizing access to data to fuel innovation and tackle global challenges.
This includes innovation in the context of artificial intelligence (AI). Data commons are providing the framework to make pools of diverse data available in machine-understandable formats for responsible AI development and deployment. By providing access to high-quality data sources with open licensing, data commons can help increase the quantity of training data in a less exploitative fashion, minimize AI providers’ reliance on data extracted across the internet without an open license, and increase the quality of the AI output (while reducing misinformation).
Over the last few months, the Open Data Policy Lab (a collaboration between The GovLab and Microsoft) has conducted various research initiatives to explore these topics further and understand:
(1) how the concept of a data commons is changing in the context of artificial intelligence, and
(2) current efforts to advance the next generation of data commons.
In what follows we provide a summary of our findings thus far. We hope it inspires more data commons use cases for responsible AI innovation in the public’s interest…(More)”.
Two Open Science Foundations: Data Commons and Stewardship as Pillars for Advancing the FAIR Principles and Tackling Planetary Challenges
Article by Stefaan Verhulst and Jean Claude Burgelman: “Today the world is facing three major planetary challenges: war and peace, steering Artificial Intelligence and making the planet a healthy Anthropocene. As they are closely interrelated, they represent an era of “polycrisis”, to use the term Adam Tooze has coined. There are no simple solutions or quick fixes to these (and other) challenges; their interdependencies demand a multi-stakeholder, interdisciplinary approach.
As world leaders and experts convene in Baku for the 29th session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP29), the urgency of addressing these global crises has never been clearer. A crucial part of addressing these challenges lies in advancing science — particularly open science, underpinned by data made available leveraging the FAIR principles (Findable, Accessible, Interoperable, and Reusable). In this era of computation, the transformative potential of research depends on the seamless flow and reuse of high-quality data to unlock breakthrough insights and solutions. Ensuring data is available in reusable, interoperable formats not only accelerates the pace of scientific discovery but also expedites the search for solutions to global crises.
While FAIR principles provide a vital foundation for making data accessible, interoperable, and reusable, translating these principles into practice requires robust institutional approaches. Toward that end, we argue below that two foundational pillars must be strengthened:
- Establishing Data Commons: The need for shared data ecosystems where resources can be pooled, accessed, and re-used collectively, breaking down silos and fostering cross-disciplinary collaboration.
- Enabling Data Stewardship: Systematic and responsible data reuse requires more than access; it demands stewardship — equipping institutions and scientists with the capabilities to maximize the value of data while safeguarding its responsible use…(More)”.
A Second Academic Exodus From X?
Article by Josh Moody: “Two years ago, after Elon Musk bought Twitter for $44 billion, promptly renaming it X, numerous academics decamped from the platform. Now, in the wake of a presidential election fraught with online disinformation, a second exodus from the social media site appears underway.
Academics, including some with hundreds of thousands of followers, announced departures from the platform in the immediate aftermath of the election, decrying the toxicity of the website and objecting to Musk and the way he wielded the platform to back President-elect Donald Trump. The business mogul threw millions of dollars behind Trump and personally campaigned for him this fall. Musk also advanced various debunked conspiracy theories during the election cycle.
Amid another wave of exits, some users see this as the end of Academic Twitter, which was already arguably in its death throes…
LeBlanc, Kamola and Rosen all mentioned that they were moving to the platform Bluesky, which has grown to 14.5 million users, welcoming more than 700,000 new accounts in recent days. In September, Bluesky had nine million users…
A study published in PS: Political Science & Politics last month concluded that academics began to engage less after Musk bought the platform. But the peak of disengagement wasn’t when the billionaire took over the site in October 2022 but rather the next month, when he reinstated Donald Trump’s account, which the platform’s previous owners had deactivated following the Jan. 6, 2021, insurrection that Trump encouraged.
The researchers reviewed 15,700 accounts from academics in economics, political science, sociology and psychology for their study.
James Bisbee, a political science professor at Vanderbilt University and article co-author, wrote via email that changes to the platform, particularly to the application programming interface, or API, undermined their ability to collect data for their research.
“Twitter used to be an amazing source of data for political scientists (and social scientists more broadly) thanks in part to its open data ethos,” Bisbee wrote. “Since Musk’s takeover, this is no longer the case, severely limiting the types of conclusions we could draw, and theories we could test, on this platform.”
To Bisbee, that loss is an understated issue: “Along with many other troubling developments on X since the change in ownership, the amputation of data access should not be ignored.”..(More)”
The Death of Search
Article by Matteo Wong: “For nearly two years, the world’s biggest tech companies have said that AI will transform the web, your life, and the world. But first, they are remaking the humble search engine.
Chatbots and search, in theory, are a perfect match. A standard Google search interprets a query and pulls up relevant results; tech companies have spent tens or hundreds of millions of dollars engineering chatbots that interpret human inputs, synthesize information, and provide fluent, useful responses. No more keyword refining or scouring Wikipedia—ChatGPT will do it all. Search is an appealing target, too: Shaping how people navigate the internet is tantamount to shaping the internet itself.
Months of prophesying about generative AI have now culminated, almost all at once, in what may be the clearest glimpse yet into the internet’s future. After a series of limited releases and product demos, marred by various setbacks and embarrassing errors, tech companies are debuting AI-powered search engines as fully realized, all-inclusive products. Last Monday, Google announced that it would launch its AI Overviews in more than 100 new countries; that feature will now reach more than 1 billion users a month. Days later, OpenAI announced a new search function in ChatGPT, available to paid users for now and soon opening to the public. The same afternoon, the AI-search start-up Perplexity shared instructions for making its “answer engine” the default search tool in your web browser.
For the past week, I have been using these products in a variety of ways: to research articles, follow the election, and run everyday search queries. In turn I have scried, as best I can, into the future of how billions of people will access, relate to, and synthesize information. What I’ve learned is that these products are at once unexpectedly convenient, frustrating, and weird. These tools’ current iterations surprised and, at times, impressed me, yet even when they work perfectly, I’m not convinced that AI search is a wise endeavor…(More)”.
Who Is Responsible for AI Copyright Infringement?
Article by Michael P. Goodyear: “Twenty-one-year-old college student Shane hopes to write a song for his boyfriend. In the past, Shane would have had to wait for inspiration to strike, but now he can use generative artificial intelligence to get a head start. Shane decides to use Anthropic’s AI chat system, Claude, to write the lyrics. Claude dutifully complies and creates the words to a love song. Shane, happy with the result, adds notes, rhythm, tempo, and dynamics. He sings the song and his boyfriend loves it. Shane even decides to post a recording to YouTube, where it garners 100,000 views.
But Shane did not realize that this song’s lyrics are similar to those of “Love Story,” Taylor Swift’s hit 2008 song. Shane must now contend with copyright law, which protects original creative expression such as music. Copyright grants the rights owner the exclusive rights to reproduce, perform, and create derivatives of the copyrighted work, among other things. If others take such actions without permission, they can be liable for damages up to $150,000. So Shane could be on the hook for tens of thousands of dollars for copying Swift’s song.
Copyright law has surged into the news in the past few years as one of the most important legal challenges for generative AI tools like Claude—not for the output of these tools but for how they are trained. Over two dozen pending court cases grapple with the question of whether training generative AI systems on copyrighted works without compensating or getting permission from the creators is lawful or not. Answers to this question will shape a burgeoning AI industry that is predicted to be worth $1.3 trillion by 2032.
Yet there is another important question that few have asked: Who should be liable when a generative AI system creates a copyright-infringing output? Should the user be on the hook?…(More)”
From Digital Sovereignty to Digital Agency
Article by Akash Kapur: “In recent years, governments have increasingly pursued variants of digital sovereignty to regulate and control the global digital ecosystem. The pursuit of AI sovereignty represents the latest iteration in this quest.
Digital sovereignty may offer certain benefits, but it also poses undeniable risks, including the possibility of undermining the very goals of autonomy and self-reliance that nations are seeking. These risks are particularly pronounced for smaller nations with less capacity, which might do better in a revamped, more inclusive, multistakeholder system of digital governance.
Organizing digital governance around agency rather than sovereignty offers the possibility of such a system. Rather than reinforce the primacy of nations, digital agency asserts the rights, priorities, and needs not only of sovereign governments but also of the constituent parts—the communities and individuals—they purport to represent.
Three cross-cutting principles underlie the concept of digital agency: recognizing stakeholder multiplicity, enhancing the latent possibilities of technology, and promoting collaboration. These principles lead to three action areas that offer a guide for digital policymakers: reinventing institutions, enabling edge technologies, and building human capacity to ensure technical capacity…(More)”.
OECD Digital Economy Outlook 2024
OECD Report: “The most recent phase of digital transformation is marked by rapid technological changes, creating both opportunities and risks for the economy and society. Volume 2 of the OECD Digital Economy Outlook 2024 explores emerging priorities, policies and governance practices across countries. It also examines trends in the foundations that enable digital transformation, drive digital innovation and foster trust in the digital age. The volume concludes with a statistical annex…
In 2023, digital government, connectivity and skills topped the list of digital policy priorities. Increasingly developed at a high level of government, national digital strategies play a critical role in co-ordinating these efforts. Nearly half of the 38 countries surveyed develop these strategies through dedicated digital ministries, up from just under a quarter in 2016. Among 1 200 policy initiatives tracked across the OECD, one-third aim to boost digital technology adoption, social prosperity, and innovation. AI and 5G are the most frequently cited technologies…(More)”