Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI, and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, I explain the tension between open science principles and data control rights. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.
Critical Data Studies: An A to Z Guide to Concepts and Methods
Book by Rob Kitchin: “Critical Data Studies has come of age as a vibrant, interdisciplinary field of study. Taking data as its primary analytical focus, the field theorises the nature of data; examines how data are produced, managed, governed and shared; investigates how they are used to make sense of the world and to perform practical action; and explores whose agenda data-driven systems serve.
This book is the first comprehensive A-Z guide to the concepts and methods of Critical Data Studies, providing succinct definitions and descriptions of over 400 key terms, along with suggested further reading. The book enables readers to quickly navigate and improve their comprehension of the field, while also acting as a guide for discovering ideas and methods that will be of value in their own studies…(More)”.
Introduction to the Foundations and Regulation of Generative AI
Chapter by Philipp Hacker, Andreas Engel, Sarah Hammer, and Brent Mittelstadt: “… introduces The Oxford Handbook of the Foundations and Regulation of Generative AI, outlining the key themes and questions surrounding the technical development, regulatory governance, and societal implications of generative AI. It highlights the historical context of generative AI, distinguishes it from traditional AI, and explores its diverse applications across multiple domains, including text, images, music, and scientific discovery. The discussion critically assesses whether generative AI represents a paradigm shift or a temporary hype. Furthermore, the chapter extensively surveys both emerging and established regulatory frameworks, including the EU AI Act, the GDPR, privacy and personality rights, and copyright, as well as global legal responses. We conclude that, for now, the “Old Guard” of legal frameworks regulates generative AI more tightly and effectively than the “Newcomers,” but that may change as the new laws fully kick in. The chapter concludes by mapping the structure of the Handbook…(More)”.
Gather, Share, Build
Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.
So how can the social sector meet the current moment?
AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.
While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many others—most social sector companies are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.
Advanced Flood Hub features for aid organizations and governments
Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.
Today, we’re rolling out new advanced features in Flood Hub designed to let experts understand flood risk in a given region via inundation history maps, and to see how a given flood forecast on Flood Hub might propagate through a river basin. With the inundation history maps, Flood Hub expert users can view flood-risk areas in high resolution on the map, regardless of whether a flood is currently occurring. This is useful where our flood forecasting does not include real-time inundation maps, or for the pre-planning of humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.
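For intuition only, here is a toy sketch of the basin-propagation idea: a forecast peak at one gauge is pushed downstream through a river network represented as a directed graph. The basin layout, gauge names, travel times, and attenuation factor are invented for illustration and bear no relation to Flood Hub’s actual hydrological models.

```python
# Toy sketch: propagating a flood signal downstream through a river basin
# modeled as a directed graph. All gauges, travel times, and the attenuation
# factor below are invented assumptions for illustration only.
from collections import deque

# Hypothetical basin: gauge -> list of (downstream gauge, travel time in hours).
basin = {
    "headwater": [("midstream", 6)],
    "midstream": [("confluence", 4)],
    "tributary": [("confluence", 3)],
    "confluence": [("delta", 8)],
    "delta": [],
}
ATTENUATION = 0.8  # assumed fraction of the peak level retained per reach

def propagate(source: str, peak_level: float) -> dict:
    """Breadth-first push of a forecast peak from one gauge downstream."""
    arrivals = {source: (0, peak_level)}  # gauge -> (hours from now, level)
    queue = deque([source])
    while queue:
        gauge = queue.popleft()
        eta, level = arrivals[gauge]
        for downstream, hours in basin[gauge]:
            arrivals[downstream] = (eta + hours, level * ATTENUATION)
            queue.append(downstream)
    return arrivals

for gauge, (eta, level) in propagate("headwater", 5.0).items():
    print(f"{gauge}: ~{level:.2f} m peak, ~{eta} h from now")
```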
What 40 Million Devices Can Teach Us About Digital Literacy in America
Blog by Juan M. Lavista Ferres: “…For the first time, Microsoft is releasing a privacy-protected dataset that provides new insights into digital engagement across the United States. This dataset, built from anonymized usage data from 40 million Windows devices, offers the most comprehensive view ever assembled of how digital tools are being used across the country. It goes beyond surveys and self-reported data to provide a real-world look at software application usage across 28,000 ZIP codes, creating a more detailed and nuanced understanding of digital engagement than any existing commercial or government study.
In collaboration with leading researchers at Harvard University and the University of Pennsylvania, we analyzed this dataset and developed two key indices to measure digital literacy:
- Media & Information Composite Index (MCI): This index captures general computing activity, including media consumption, information gathering, and usage of productivity applications like word processing, spreadsheets, and presentations.
- Content Creation & Computation Index (CCI): This index measures engagement with more specialized digital applications, such as content creation tools like Photoshop and software development environments.
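To make the two composites concrete, here is a minimal sketch of how indices like the MCI and CCI might be computed from per-ZIP application-usage rates. The column names, category groupings, and equal-weight z-score averaging are illustrative assumptions, not the published methodology behind the Microsoft dataset.

```python
# Illustrative sketch: computing MCI/CCI-style composite indices per ZIP code.
# Column names, categories, and equal-weight z-score averaging are assumptions
# for illustration; they are not Microsoft's published methodology.
import pandas as pd

# Hypothetical per-ZIP usage rates (sessions per device) by application category.
usage = pd.DataFrame({
    "zip": ["02138", "19104", "59801"],
    "media": [12.0, 9.5, 6.1],          # media consumption
    "information": [8.3, 7.9, 4.2],     # browsing / information gathering
    "productivity": [5.1, 4.4, 2.0],    # word processing, spreadsheets, slides
    "creation": [2.2, 1.1, 0.3],        # content-creation tools
    "development": [0.9, 0.6, 0.1],     # software development environments
}).set_index("zip")

def composite_index(df: pd.DataFrame, cols: list[str]) -> pd.Series:
    """Average of z-scored usage rates across the given categories."""
    z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)
    return z.mean(axis=1)

usage["MCI"] = composite_index(usage, ["media", "information", "productivity"])
usage["CCI"] = composite_index(usage, ["creation", "development"])
print(usage[["MCI", "CCI"]])
```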
By combining these indices with demographic data, several important insights emerge:
Urban-Rural Disparities Exist—But the Gaps Are Uneven
While rural areas often lag in digital engagement, disparities within urban areas are just as pronounced. Some city neighborhoods have digital activity levels on par with major tech hubs, while others fall significantly behind, revealing a more complex digital divide than previously understood.
Income and Education Are Key Drivers of Digital Engagement
Higher-income and higher-education areas show significantly greater engagement in content creation and computational tasks. This suggests that digital skills—not just access—are critical in shaping economic mobility and opportunity. Even in places where broadband availability is the same, digital usage patterns vary widely, demonstrating that access alone is not enough.
Infrastructure Alone Won’t Close the Digital Divide
Providing broadband connectivity is essential, but it is not a sufficient solution to the challenges of digital literacy. Our findings show that even in well-connected regions, significant skill gaps persist. This means that policies and interventions must go beyond infrastructure investments to include comprehensive digital education, skills training, and workforce development initiatives…(More)”.
Patients’ Trust in Health Systems to Use Artificial Intelligence
Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.
We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care…. Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.
Using human mobility data to quantify experienced urban inequalities
Paper by Fengli Xu et al.: “The lived experience of urban life is shaped by personal mobility through dynamic relationships and resources, marked not only by access and opportunity, but also by inequality and segregation. The recent availability of fine-grained mobility data and context attributes, ranging from venue type to demographic mixture, offers researchers a deeper understanding of experienced inequalities at scale, and poses many new questions. Here we review emerging uses of urban mobility behaviour data, and propose an analytic framework to represent mobility patterns as a temporal bipartite network between people and places. As this network reconfigures over time, analysts can track experienced inequality along three critical dimensions: social mixing with others from specific demographic backgrounds, access to different types of facilities, and spontaneous adaptation to unexpected events, such as epidemics, conflicts or disasters. This framework traces the dynamic, lived experiences of urban inequality and complements prior work on static inequalities experienced at home and work…(More)”.
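As a minimal sketch of the proposed representation, the following builds a temporal bipartite people-place network from visit records and scores each place’s social mixing with a Shannon-entropy measure. The record layout and the entropy score are illustrative assumptions, not the authors’ exact formulation.

```python
# Illustrative sketch: a temporal bipartite network of people and places,
# with an entropy-based social-mixing score per place. The record layout and
# the mixing measure are assumptions for illustration, not the paper's method.
from collections import defaultdict
from math import log

# (person, place, time window, person's demographic group) visit records.
visits = [
    ("p1", "cafe", "t1", "groupA"),
    ("p2", "cafe", "t1", "groupB"),
    ("p3", "cafe", "t2", "groupB"),
    ("p1", "gym",  "t2", "groupA"),
    ("p4", "gym",  "t2", "groupA"),
]

# One bipartite person-place edge set per time window: the network
# "reconfigures" as the window advances.
network = defaultdict(set)                            # window -> {(person, place)}
group_counts = defaultdict(lambda: defaultdict(int))  # place -> group -> visits
for person, place, window, group in visits:
    network[window].add((person, place))
    group_counts[place][group] += 1

def mixing_entropy(counts: dict[str, int]) -> float:
    """Shannon entropy of the demographic mix of a place's visitors.
    0.0 = fully segregated; higher = more evenly mixed."""
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values() if c > 0)

for place, counts in group_counts.items():
    print(place, round(mixing_entropy(dict(counts)), 3))
```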
Conflicts over access to Americans’ personal data emerging across federal government
Article by Caitlin Andrews: “The Trump administration’s fast-moving efforts to limit the size of the U.S. federal bureaucracy, primarily through the recently minted Department of Government Efficiency, are raising privacy and data security concerns among current and former officials across the government, particularly as the administration scales back positions charged with privacy oversight. Efforts to limit the independence of a host of federal agencies through a new executive order — including the independence of the Federal Trade Commission and Securities and Exchange Commission — are also ringing alarm bells among civil society and some legal experts.
According to CNN, several staff within the Office of Personnel Management’s privacy and records-keeping department were fired last week. Staff who handle communications and respond to Freedom of Information Act requests were also let go. Though the entire privacy team was not fired, according to the OPM, details about what kind of oversight will remain within the department were limited. The report also states the staff’s termination date is 15 April.
It is one of several moves the Trump administration has made in recent days reshaping how entities access government agencies’ information and how that access is overseen.
The New York Times reports on a wide range of incidents within the government where DOGE’s efforts to limit fraudulent government spending by accessing sensitive agency databases have run up against staffers who are concerned about the privacy of Americans’ personal information. In one incident, Social Security Administration acting Commissioner Michelle King was fired after resisting a request from DOGE to access the agency’s database. “The episode at the Social Security Administration … has played out repeatedly across the federal government,” the Times reported…(More)”.
Regulatory Markets: The Future of AI Governance
Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. Regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator, are proposed. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.