Riders in the smog


Article by Zuha Siddiqui, Samriddhi Sakuna and Faisal Mahmud: “…To better understand air quality exposure among gig workers in South Asia, Rest of World gave three gig workers — one each in Lahore, New Delhi, and Dhaka — air quality monitors to wear throughout a regular shift in January. The Atmotube Pro monitors continually tracked their exposure to carcinogenic pollutants — specifically PM1, PM2.5, and PM10 (different sizes of particulate matter), and volatile organic compounds such as benzene and formaldehyde.

The data revealed that all three workers were routinely exposed to hazardous levels of pollutants. For PM2.5, referring to particulates that are 2.5 micrometers in diameter or less — which have been linked to health risks including heart attacks and strokes — all riders were consistently logging exposure levels more than 10 times the World Health Organization’s recommended daily average of 15 micrograms per cubic meter. Manu Sharma, in New Delhi, recorded the highest PM2.5 level of the three riders, hitting 468.3 micrograms per cubic meter around 6 p.m. Lahore was a close second, with Iqbal recording 464.2 micrograms per cubic meter around the same time.

Alongside tracking specific pollutants, the Atmotube Pro gives an overall real-time air quality score (AQS) from 0–100, with zero being the most severely polluted, and 100 being the cleanest. According to Atmo, the company that makes the Atmotube monitors, a reading of 0–20 should be considered a health alert, under which conditions “everyone should avoid all outdoor exertion.” But the three gig workers found their monitors consistently displayed the lowest possible score…(More)”.
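The two comparisons the excerpt makes, a PM2.5 reading as a multiple of the WHO recommended daily average, and whether an AQS reading falls in the 0–20 health-alert band, can be sketched in a few lines. This is an illustrative sketch only, not Atmo's actual scoring formula; the only thresholds taken from the excerpt are the WHO guideline of 15 micrograms per cubic meter and the 0–20 alert band.

```python
WHO_PM25_DAILY = 15.0  # WHO recommended daily average, micrograms per cubic meter

def who_exceedance(pm25: float) -> float:
    """How many times a PM2.5 reading exceeds the WHO daily guideline."""
    return pm25 / WHO_PM25_DAILY

def is_health_alert(aqs: int) -> bool:
    """Per the excerpt, an AQS of 0-20 is a health alert:
    'everyone should avoid all outdoor exertion'."""
    return 0 <= aqs <= 20

# The peak reading recorded in New Delhi (468.3 micrograms per cubic meter):
print(round(who_exceedance(468.3), 1))  # -> 31.2, i.e. more than 30x the guideline
print(is_health_alert(0))               # -> True: the lowest possible score
```

The arithmetic makes the article's "more than 10 times" framing concrete: the recorded peaks were in fact over 30 times the WHO daily guideline.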

The New Fire: War, Peace, and Democracy in the Age of AI


Book by Ben Buchanan and Andrew Imbrie: “Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny…(More)”.

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals, but it may well prove ineffective and could even be counterproductive. There are serious questions about both the substance and the messaging of the order.

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets bulk sales, to countries of concern, of sensitive personal information by data brokers, such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.

New Horizons


An Introduction to the 2nd Edition of the State of Open Data by Renata Avila and Tim Davies: “The struggle to deliver on the vision that data, this critical resource of modern societies, should be widely available, well structured, and shared for all to use, has been a long one. It has been a struggle involving thousands upon thousands of individuals, organisations, and communities. Without their efforts, public procurement would be opaque, smart cities even more corporate-controlled, transport systems less integrated, and pandemic responses less rapid. Across numerous initiatives, open data has become more embedded as a way to support accountability, enable collaboration, and to better unlock the value of data. 

However, much like the climber reaching the top of the foothills, and for the first time seeing the hard climb of the whole mountain coming into view, open data advocates, architects, and community builders have not reached the end of their journey. As we move into the middle of the 2020s, action on open data faces new and significant challenges if we are to see a future in which open and enabling data infrastructures and ecosystems are the norm rather than a sparse patchwork of exceptions. Building open infrastructures to power social change for the next century is no small task, and to meet the challenges ahead, we will need all the lessons we can gather from more than 15 years of open data action to date…Across the collection, we can find two main pathways to broader participation explored. On the one hand are discussions of widening public engagement and data literacy, creating a more diverse constituency of people interested and able to engage with data projects in a voluntary capacity. On the other are calls for more formalisation of data governance, embedding citizen voices within increasingly structured data collaborations and ensuring that affected stakeholders are consulted on, or given a role in, key data decisions. Mariel García-Montes (Data Literacy) underscores the case for an equity-first approach to the first pathway, highlighting how generalist data literacy can be used for or against the public good, and calling for approaches to data literacy building that centre on an understanding of inequality and power. In writing on urban development, Stefaan G. Verhulst and Sampriti Saxena (Urban Development) point to a number of examples of the latter approach in which cities are experimenting with various forms of deliberative conversations and processes…(More)”.

AI-Powered Urban Innovations Bring Promise, Risk to Future Cities


Article by Anthony Townsend and Hubert Beroche: “Red lights are obsolete. That seems to be the thinking behind Google’s latest fix for cities, which rolled out late last year in a dozen cities around the world, from Seattle to Jakarta. Most cities still collect the data to determine the timing of traffic signals by hand. But Project Green Light replaced clickers and clipboards with mountains of location data culled from smartphones. Artificial intelligence crunched the numbers, adjusting the signal pattern to smooth the flow of traffic. Motorists saw 30% fewer delays. There’s just one catch. Even as pedestrian deaths in the US reached a 40-year high in 2022, Google engineers omitted pedestrians and cyclists from their calculations.

Google’s oversight threatens to undo a decade of progress on safe streets and is a timely reminder of the risks in store when AI invades the city. Mayors across global cities have embraced Vision Zero pledges to eliminate pedestrian deaths. They are trying to slow traffic down, not speed it up. But Project Green Light’s website doesn’t even mention road safety. Still, the search giant’s experiment demonstrates AI’s potential to help cities. Tailpipe greenhouse gas emissions at intersections fell by 10%. Imagine what AI could do if we used it to empower people in cities rather than ignore them.

Take the technocratic task of urban planning and the many barriers to participation it creates. The same technology that powers chatbots and deepfakes is rapidly bringing down those barriers. Real estate developers have mastered the art of using glossy renderings to shape public opinion. But UrbanistAI, a tool developed by Helsinki-based startup SPIN Unit and the Milanese software company Toretei, puts that power in the hands of residents: It uses generative AI to transform text prompts into photorealistic images of alternative designs for controversial projects. Another startup, the Barcelona-based Aino, wraps a chatbot around a mapping tool. Using such computer aids, neighborhood activists no longer need to hire a data scientist to produce maps from census data to make their case…(More)”.

Artificial Intelligence: A Threat to Climate Change, Energy Usage and Disinformation


Press Release: “Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly. Estimates suggest a doubling in 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year where climate policy will be central to the debate. 
  • The current AI policy landscape reveals a concerning lack of regulation at the federal level, with only minor progress made at the state level, relying instead on voluntary, opaque, and unenforceable industry pledges to pause development or build safety into products…(More)”.

The Judicial Data Collaborative


About: “We enable collaborations between researchers, technical experts, practitioners, and organisations to create a shared vocabulary, standards, and protocols for open judicial data sets, as well as shared infrastructure and resources to host and explain available judicial data.

The objective is to drive and sustain advocacy on the quality and limitations of Indian judicial data and engage the judicial data community to enable cross-learning among various projects…

Accessibility and understanding of judicial data are essential to making courts and tribunals more transparent, accountable and easy to navigate for litigants. In recent years, eCourts services and various Court and tribunals’ websites have made a large volume of data about cases available. This has expanded the window into judicial functioning and enabled more empirical research on the role of courts in the protection of citizens’ rights. Such research can also help busy courts understand patterns of litigation and practice, and can help engage across disciplines with stakeholders to improve the functioning of courts.

Some pioneering initiatives in the judicial data landscape include research such as DAKSH’s database; annual India Justice Reports; and studies of court functioning during the pandemic and quality of eCourts data; open datasets including Development Data Lab’s Judicial Data Portal containing District & Taluka court cases (2010-2018) and platforms that collect them such as Justice Hub; and interactive databases such as the Vidhi JALDI Constitution Bench Pendency Project…(More)”.

A World Divided Over Artificial Intelligence


Article by Aziz Huq: “…Through multinational communiqués and bilateral talks, an international framework for regulating AI does seem to be coalescing. Take a close look at U.S. President Joe Biden’s October 2023 executive order on AI; the EU’s AI Act, which passed the European Parliament in December 2023 and will likely be finalized later this year; or China’s slate of recent regulations on the topic, and a surprising degree of convergence appears. They have much in common. These regimes broadly share the common goal of preventing AI’s misuse without restraining innovation in the process. Optimists have floated proposals for closer international management of AI, such as the ideas presented in Foreign Affairs by the geopolitical analyst Ian Bremmer and the entrepreneur Mustafa Suleyman and the plan offered by Suleyman and Eric Schmidt, the former CEO of Google, in the Financial Times in which they called for the creation of an international panel akin to the UN’s Intergovernmental Panel on Climate Change to “inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming.”

But these ambitious plans to forge a new global governance regime for AI may collide with an unfortunate obstacle: cold reality. The great powers, namely, China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions point toward a future of fragmentation and competition. Divergent legal regimes are emerging that will frustrate any cooperation when it comes to access to semiconductors, the setting of technical standards, and the regulation of data and algorithms. This path doesn’t lead to a coherent, contiguous global space for uniform AI-related rules but to a divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can be harnessed for the common good is dashed on the rocks of geopolitical tensions…(More)”.

The Limits of Data


Essay by C. Thi Nguyen: “…Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.
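The per-trip fields the article describes (distance, duration, and counts of rapid accelerations and hard braking) are enough to derive simple risk-relevant metrics like average speed. The sketch below is hypothetical: the field names and the derived metric are illustrative, not the actual schema of the LexisNexis report or its risk-score formula.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """One trip record, with illustrative fields matching those the article lists."""
    miles: float
    minutes: float
    rapid_accelerations: int
    hard_brakes: int

    def avg_mph(self) -> float:
        """Average speed over the trip, derived from distance and duration."""
        return self.miles / (self.minutes / 60.0)

# The June trip quoted above: 7.33 miles in 18 minutes,
# with two rapid accelerations and two hard-braking incidents.
trip = Trip(miles=7.33, minutes=18, rapid_accelerations=2, hard_brakes=2)
print(round(trip.avg_mph(), 1))  # -> 24.4
```

Even this toy reconstruction shows how little raw telematics data is needed to profile a driver, which is what made the 640-trip report so startling.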

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.