Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses….(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—through learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.
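
For a sense of what such a bias check can involve in practice, here is a minimal sketch of a selection-rate impact ratio computed on toy data; the group labels, column names, and the four-fifths review threshold are illustrative assumptions, not the audit methodology the city's rules prescribe:

```python
import pandas as pd

# Hypothetical outcomes from an automated hiring tool.
# Columns and categories are illustrative assumptions, not the NYC audit schema.
candidates = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "selected": [1,    0,   1,   1,   0,   1,   1,   0],
})

# Selection rate per group: share of candidates the tool advanced.
rates = candidates.groupby("gender")["selected"].mean()

# Impact ratio: each group's rate divided by the highest group's rate.
impact_ratios = rates / rates.max()

# A common (assumed) rule of thumb flags ratios below 0.8 for closer review.
flagged = impact_ratios[impact_ratios < 0.8]

print(impact_ratios)
print("Groups flagged for further review:", list(flagged.index))
```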

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. New York City, Los Angeles Unified, Seattle, and Baltimore School Districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also turned on use of Google Bard as part of the City of Boston’s enterprise-wide use of Google Workspace so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.

Can Mobility of Care Be Identified From Transit Fare Card Data? A Case Study In Washington D.C.


Paper by Daniela Shuman et al.: “Studies in the literature have found significant differences in travel behavior by gender on public transit that are largely attributable to household and care responsibilities falling disproportionately on women. While the majority of studies have relied on survey and qualitative data to assess “mobility of care”, we propose a novel data-driven workflow utilizing transit fare card transactions, name-based gender inference, and geospatial analysis to identify mobility of care trip making. We find that the share of women travelers trip-chaining in the direct vicinity of mobility of care places of interest is 10%–15% higher than that of men…(More)”.
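
To make the proposed workflow concrete, here is a minimal sketch of its three steps on toy data; the column names, the tiny name-to-gender lookup, the 30-minute transfer window, and the 400 m proximity radius are all illustrative assumptions rather than the authors' actual parameters:

```python
import math
import pandas as pd

# Toy fare-card transactions; the schema is assumed for illustration.
taps = pd.DataFrame({
    "card_id":    ["A1", "A1", "B2", "B2"],
    "first_name": ["Maria", "Maria", "John", "John"],
    "timestamp":  pd.to_datetime(["2023-03-01 08:05", "2023-03-01 08:25",
                                  "2023-03-01 08:10", "2023-03-01 17:40"]),
    "lat":        [38.90, 38.91, 38.89, 38.95],
    "lon":        [-77.03, -77.02, -77.04, -77.00],
})

# Step 1: name-based gender inference (stand-in for a real name-frequency model).
name_to_gender = {"Maria": "F", "John": "M"}
taps["gender"] = taps["first_name"].map(name_to_gender)

# Step 2: flag trip-chaining -- consecutive taps on the same card within 30 minutes.
taps = taps.sort_values(["card_id", "timestamp"])
gap_minutes = taps.groupby("card_id")["timestamp"].diff().dt.total_seconds() / 60
taps["chained"] = gap_minutes.notna() & (gap_minutes <= 30)

# Step 3: proximity to "mobility of care" points of interest
# (schools, clinics, grocery stores); the coordinates are made up.
care_pois = [(38.912, -77.021), (38.894, -77.041)]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

taps["near_care_poi"] = taps.apply(
    lambda row: any(haversine_km(row.lat, row.lon, plat, plon) <= 0.4
                    for plat, plon in care_pois),
    axis=1,
)

# Share of taps that are both chained and near a care POI, by inferred gender.
care_chains = taps[taps["chained"] & taps["near_care_poi"]]
print(care_chains.groupby("gender").size() / taps.groupby("gender").size())
```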

How a small news site built an innovative data project to visualise the impact of climate change on Uruguay’s capital


Interview by Marina Adami: “La ciudad sumergida (The submerged city), an investigation produced by Uruguayan science and technology news site Amenaza Roboto, is one of the winners of this year’s Sigma Awards for data journalism. The project uses maps of the country’s capital, Montevideo, to create impressive visualisations of the impact sea level rises are predicted to have on the city and its infrastructure. The project is a first of its kind for Uruguay, a small South American country in which data journalism is still a novelty. It is also a good example of a way news outlets can investigate and communicate the disastrous effects of climate change in local communities. 
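
As a purely illustrative sketch of the underlying technique (not Amenaza Roboto's actual code or data), flagging which cells of a city's elevation grid would fall below a projected sea level takes only a few lines of raster arithmetic; the synthetic grid and the 0.6 m scenario below are assumptions:

```python
import numpy as np

# Synthetic digital elevation model (metres above current sea level) for a
# coastal grid; a real analysis would load a DEM raster for Montevideo instead.
rng = np.random.default_rng(0)
elevation_m = rng.uniform(-1.0, 20.0, size=(200, 200))

# Assumed sea-level-rise scenario, for illustration only.
projected_rise_m = 0.6

# Cells at or below the projected sea level are flagged as submerged.
submerged = elevation_m <= projected_rise_m

print(f"Share of grid cells submerged under a {projected_rise_m} m rise: "
      f"{submerged.mean():.1%}")
```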

I spoke to Miguel Dobrich, a journalist, educator and digital entrepreneur who worked on the project together with colleagues Gabriel Farías, Natalie Aubet and Nahuel Lamas, to find out what lessons other outlets can take from this project and from Amenaza Roboto’s experiments with analysing public data, collaborating with scientists, and keeping the focus on their communities….(More)”

Big data proves mobility is not gender-neutral


Blog by Ellin Ivarsson, Aiga Stokenberg and Juan Ignacio Fulponi: “All over the world, there is growing evidence showing that women and men travel differently. While there are many reasons behind this, one key factor is the persistence of traditional gender norms and roles that translate into different household responsibilities, different work schedules, and, ultimately, different mobility needs. Greater overall risk aversion and sensitivity to safety issues also play an important role in how women get around. Yet gender often remains an afterthought in the transport sector, meaning most policies or infrastructure investment plans are not designed to take into account the specific mobility needs of women.

The good news is that big data can help change that. In a recent study, the World Bank Transport team combined several data sources to analyze how women travel around the Buenos Aires Metropolitan Area (AMBA), including mobile phone signal data, congestion data from Waze, public transport smart card data, and data from a survey implemented by the team in early 2022 with over 20,300 car and motorcycle users.

Our research revealed that, on average, women in AMBA travel less often than men, travel shorter distances, and tend to engage in more complex trips with multiple stops and purposes. On average, 65 percent of the trips made by women are shorter than 5 kilometers, compared to 60 percent among men. Also, women’s hourly travel patterns are different, with 10 percent more trips than men during the mid-day off-peak hour, mostly originating in central AMBA. This reflects the larger burden of household responsibilities faced by women – such as picking children up from school – and the fact that women tend to work more irregular hours…(More)” See also Gender gaps in urban mobility.
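
As a rough illustration of how such gender-disaggregated indicators can be derived once trips are assembled from these sources, here is a minimal sketch; the trip table, column names, and thresholds are assumptions for illustration, not the World Bank team's actual pipeline:

```python
import pandas as pd

# Toy trip table; in the study this would be built from phone-signal, Waze,
# smart card, and survey data. The columns here are illustrative assumptions.
trips = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M"],
    "distance_km": [2.1, 4.5, 7.9, 3.0, 6.2, 12.4],
    "hour":        [8, 12, 13, 8, 9, 18],
})

# Share of trips shorter than 5 km, by gender.
trips["short_trip"] = trips["distance_km"] < 5
print(trips.groupby("gender")["short_trip"].mean())

# Hourly trip profile by gender, e.g. to compare mid-day off-peak travel.
hourly_profile = trips.groupby(["gender", "hour"]).size().unstack(fill_value=0)
print(hourly_profile)
```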

Judging Nudging: Understanding the Welfare Effects of Nudges Versus Taxes


Paper by John A. List, Matthias Rodemeier, Sutanuka Roy & Gregory K. Sun: “While behavioral non-price interventions (“nudges”) have grown from academic curiosity to a bona fide policy tool, their relative economic efficiency remains under-researched. We develop a unified framework to estimate welfare effects of both nudges and taxes. We showcase our approach by creating a database of more than 300 carefully hand-coded point estimates of non-price and price interventions in the markets for cigarettes, influenza vaccinations, and household energy. While nudges are effective in changing behavior in all three markets, they are not necessarily the most efficient policy. We find that nudges are more efficient in the market for cigarettes, while taxes are more efficient in the energy market. For influenza vaccinations, optimal subsidies likely outperform nudges. Importantly, two key factors govern the difference in results across markets: i) an elasticity-weighted standard deviation of the behavioral bias, and ii) the magnitude of the average externality. Nudges dominate taxes whenever i) exceeds ii). Combining nudges and taxes does not always provide quantitatively significant improvements to implementing one policy tool alone…(More)”.
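
Stated schematically (the excerpt gives the comparison only in words, so the notation below is our own illustrative shorthand, not the paper's):

```latex
% \sigma_b : elasticity-weighted standard deviation of the behavioral bias
% \bar{e}  : average externality in the market
\[
  \text{nudge preferred over tax} \iff \sigma_b > \lvert \bar{e} \rvert ,
  \qquad
  \text{tax preferred over nudge} \iff \sigma_b < \lvert \bar{e} \rvert .
\]
```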

As the Quantity of Data Explodes, Quality Matters


Article by Katherine Barrett and Richard Greene: “With advances in technology, governments across the world are increasingly using data to help inform their decision making. This has been one of the most important byproducts of the use of open data, which is “a philosophy – and increasingly a set of policies – that promotes transparency, accountability and value creation by making government data available to all,” according to the Organisation for Economic Co-operation and Development (OECD).

But as data has become ever more important to governments, the quality of that data has become an increasingly serious issue. A number of nations, including the United States, are taking steps to deal with it. For example, according to a study from Deloitte, “The Dutch government is raising the bar to enable better data quality and governance across the public sector.” In the same report, a case study about Finland states that “data needs to be shared at the right time and in the right way. It is also important to improve the quality and usability of government data to achieve the right goals.” And the United Kingdom has developed its Government Data Quality Hub to help public sector organizations “better identify their data challenges and opportunities and effectively plan targeted improvements.”

Our personal experience is with U.S. states and local governments, and in that arena the road toward higher quality data is a long and difficult one, particularly as the sheer quantity of data has grown exponentially. As things stand, based on our ongoing research into performance audits, it is clear that issues with data are impediments to the smooth functioning of state and local governments…(More)”.

Digital Equity 2.0: How to Close the Data Divide


Report by Gillian Diebold: “For the last decade, closing the digital divide, or the gap between those subscribing to broadband and those not subscribing, has been a top priority for policymakers. But high-speed Internet and computing device access are no longer the only barriers to fully participating and benefiting from the digital economy. Data is also increasingly essential, including in health care, financial services, and education. Like the digital divide, a gap has emerged between the data haves and the data have-nots, and this gap has introduced a new set of inequities: the data divide.

Policymakers have put a great deal of effort into closing the digital divide, and there is now near-universal acceptance of the notion that obtaining widespread Internet access generates social and economic benefits. But closing the data divide has received little attention. Moreover, efforts to improve data collection are typically overshadowed by privacy advocates’ warnings against collecting any data. In fact, unlike the digital divide, many ignore the data divide or argue that the way to close it is to collect vastly less data.1 But without substantial efforts to increase data representation and access, certain individuals and communities will be left behind in an increasingly data-driven world.

This report describes the multipronged efforts needed to address digital inequity. For the digital divide, policymakers have expanded digital connectivity, increased digital literacy, and improved access to digital devices. For the data divide, policymakers should similarly take a holistic approach, including by balancing privacy and data innovation, increasing data collection efforts across a wide array of fronts, enhancing access to data, improving data quality, and improving data analytics efforts. Applying lessons from the digital divide to this new challenge will help policymakers design effective and efficient policy and create a more equitable and effective data economy for all Americans…(More)”.