Green Light


Google Research: “Road transportation is responsible for a significant amount of global and urban greenhouse gas emissions. It is especially problematic at city intersections where pollution can be 29 times higher than on open roads. At intersections, half of these emissions come from traffic accelerating after stopping. While some amount of stop-and-go traffic is unavoidable, part of it is preventable through the optimization of traffic light timing configurations. To improve traffic light timing, cities need to either install costly hardware or run manual vehicle counts; both of these solutions are expensive and don’t provide all the necessary information.

Green Light uses AI and Google Maps driving trends, built on one of the strongest understandings of global road networks, to model traffic patterns and provide intelligent recommendations that help city traffic engineers optimize traffic flow. Early numbers indicate a potential for up to a 30% reduction in stops and a 10% reduction in greenhouse gas emissions (1). By optimizing each intersection and coordinating adjacent intersections, we can create waves of green lights and help cities further reduce stop-and-go traffic. Green Light is now live at 70 intersections in 12 cities on 4 continents, from Haifa, Israel, to Bangalore, India, to Hamburg, Germany – and at these intersections we are able to save fuel and lower emissions for up to 30M car rides monthly. Green Light reflects Google Research’s commitment to using AI to address climate change and improve millions of lives in cities around the world…(More)”
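
The “green wave” idea can be illustrated with a toy calculation: if adjacent signals share a cycle length and the travel time between them is known, each downstream signal’s green phase can be offset by the cumulative travel time, modulo the cycle. The sketch below is a minimal illustration under those assumptions – fixed cycle, free-flow speed – and is not Green Light’s actual method; all names and numbers are hypothetical.

```python
# Toy green-wave coordination: offset each signal's green start by the
# cumulative travel time from the first intersection (mod cycle length).
# Illustrative only -- assumes a shared fixed cycle and free-flow speed;
# this is not how Green Light itself computes its recommendations.

def green_wave_offsets(distances_m, speed_mps=13.9, cycle_s=90):
    """Return green-start offsets (seconds) for a corridor of signals.

    distances_m: distance from each intersection to the next one downstream.
    speed_mps:   assumed cruise speed (13.9 m/s is roughly 50 km/h).
    cycle_s:     shared signal cycle length in seconds.
    """
    offsets = [0.0]            # the first signal anchors the wave
    elapsed = 0.0
    for d in distances_m:
        elapsed += d / speed_mps           # travel time to the next signal
        offsets.append(elapsed % cycle_s)  # green starts as the platoon arrives
    return offsets

if __name__ == "__main__":
    # Three 400 m blocks downstream of the first intersection.
    print(green_wave_offsets([400, 400, 400]))
    # -> roughly [0.0, 28.8, 57.6, 86.3]: each signal turns green
    #    just as traffic released by the previous one arrives.
```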

Brazil hires OpenAI to cut costs of court battles


Article by Marcela Ayres and Bernardo Caram: “Brazil’s government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence (AI), trying to avoid costly court losses that have weighed on the federal budget.

The AI service will flag to the government the need to act on lawsuits before final decisions, mapping trends and potential action areas for the solicitor general’s office (AGU).

AGU told Reuters that Microsoft would provide the artificial intelligence services from ChatGPT creator OpenAI through its Azure cloud-computing platform. It did not say how much Brazil will pay for the services.

Court-ordered debt payments have consumed a growing share of Brazil’s federal budget. The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal. The figure does not include small-value claims, which historically amount to around 30 billion reais annually.

The combined amount of over 100 billion reais represents a sharp increase from 37.3 billion reais in 2015. It is equivalent to about 1% of gross domestic product, or 15% more than the government expects to spend on unemployment insurance and wage bonuses to low-income workers next year.

AGU did not provide a reason for Brazil’s rising court costs…(More)”.

Using ChatGPT to Facilitate Truly Informed Medical Consent


Paper by Fatima N. Mirza: “Informed consent is integral to the practice of medicine. Most informed consent documents are written at a reading level that surpasses the reading comprehension level of the average American. Large language models, a type of artificial intelligence (AI) with the ability to summarize and revise content, present a novel opportunity to make the language used in consent forms more accessible to the average American and thus improve the quality of informed consent. In this study, we present the experience of the largest health care system in the state of Rhode Island in implementing AI to improve the readability of informed consent documents, highlighting one tangible application for emerging AI in the clinical setting…(More)”.

Artificial Intelligence Applications for Social Science Research


Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing, 2) data collection, analysis, or visualizations, or 3) research dissemination. In the database, we provide a name, description, and links to each of the AI tools that were current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may have use for literature reviews or writing; 146 tools that may have use for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”

The Deliberative Turn in Democratic Theory


Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. The acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address, at both the theoretical and practical levels, the legitimacy crisis currently experienced by liberal democracies. The book advances a critical analysis of these developments that casts doubt on those certainties: current theoretical debates are reproposing old methodological divisions and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers; democratic experimentation at the micro level seems to have no impact at the macro level and remains a set of isolated experiences. The book indicates that these defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules and adopting instead a vision of democracy as a self-correcting metagame…(More)”.

The tensions of data sharing for human rights: A modern slavery case study


Paper by Jamie Hancock et al: “There are calls for greater data sharing to address human rights issues. Advocates claim this will provide an evidence-base to increase transparency, improve accountability, enhance decision-making, identify abuses, and offer remedies for rights violations. However, these well-intentioned efforts have been found to sometimes enable harms against the people they seek to protect. This paper shows issues relating to fairness, accountability, or transparency (FAccT) in and around data sharing can produce such ‘ironic’ consequences. It does so using an empirical case study: efforts to tackle modern slavery and human trafficking in the UK. We draw on a qualitative analysis of expert interviews, workshops, ecosystem mapping exercises, and a desk-based review. The findings show how, in the UK, a large ecosystem of data providers, hubs, and users emerged to process and exchange data from across the country. We identify how issues including legal uncertainties, non-transparent sharing procedures, and limited accountability regarding downstream uses of data may undermine efforts to tackle modern slavery and place victims of abuses at risk of further harms. Our findings help explain why data sharing activities can have negative consequences for human rights, even within human rights initiatives. Moreover, our analysis offers a window into how FAccT principles for technology relate to the human rights implications of data sharing. Finally, we discuss why these tensions may be echoed in other areas where data sharing is pursued for human rights concerns, identifying common features which may lead to similar results, especially where sensitive data is shared to achieve social goods or policy objectives…(More)”.

The revolution shall not be automated: On the political possibilities of activism through data & AI


Article by Isadora Cruxên: “Every other day now, there are headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting or warning against AI’s transformative potential. Some headlines promise that AI will fundamentally change how we work and learn or help us tackle critical challenges such as biodiversity conservation and climate change. Others question its intelligence, point to its embedded biases, and draw attention to its extractive labour record and high environmental costs.

Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us — or perhaps blowing past us at speed — while we are enticed to take the backseat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.

If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”

Sure, this might seem easier said than done. Most AI research and development is being driven by big tech corporations and start-ups. As Lauren Klein and Catherine D’Ignazio discuss in “Data Feminism for AI” (see “Further reading” at the end for all works cited), the results are models, tools, and platforms that are opaque to users, and that cater to the tech ambitions and profit motives of private actors, with broader societal needs and concerns becoming afterthoughts. There is excellent critical work that explores the extractive practices and unequal power relations that underpin AI production, including its relationship to processes of datafication, colonial data epistemologies, and surveillance capitalism (to link but a few). Interrogating, illuminating, and challenging these dynamics is paramount if we are to take the driver’s seat and find alternative paths…(More)”.

Blueprints for Learning


Report by the Data Foundation: “The Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) required the creation of learning agendas for the largest federal agencies. These agendas outline how agencies will identify and answer priority questions through data and evidence-building activities. The Data Foundation undertook an analysis of the agendas to understand how they were developed and how agencies plan to implement them, as part of the 5-year milestone of the Evidence Act.

The analysis reveals both progress and areas for improvement in the development and use of learning agendas. All but one large agency produced a publicly available learning agenda, demonstrating a significant initial effort. However, several challenges were identified:

  • Limited detail on execution and use: Many learning agendas lacked specifics on how the identified priority questions would be addressed or how the evidence generated would be used.
  • Variation in quality: Agencies diverged in the comprehensiveness and clarity of their agendas, with some providing more detailed plans than others.
  • Resource constraints: The analysis suggests that a lack of dedicated resources may be hindering some agencies’ capacity to fully implement their learning agendas…(More)”.

Societal interaction plans—A tool for enhancing societal engagement of strategic research in Finland


Paper by Kirsi Pulkkinen, Timo Aarrevaara, Mikko Rask, and Markku Mattila: “…we investigate the practices and capacities that define successful societal interaction of research groups with stakeholders in mutually beneficial processes. We studied the Finnish Strategic Research Council’s (SRC) first funded projects through a dynamic governance lens. The aim of the paper is to explore how the societal interaction was designed and commenced at the onset of the projects in order to understand the logic through which the consortia expected broad impacts to occur. The Finnish SRC introduced a societal interaction plan (SIP) approach, which requires research consortia to consider societal interaction alongside research activities in a way that exceeds conventional research plans. Hence, the first SRC projects’ SIPs and the implemented activities and working logics discussed in the interviews provide a window into exploring how active societal interaction reflects the call for dynamic, sustainable practices and new capabilities to better link research to societal development. We found that the capacities of dynamic governance were implemented by integrating societal interaction into research, in particular through a ‘drizzling’ approach. In these emerging practices SIP designs function as platforms for the formation of communities of experts, rather than traditional project management models or mere communication tools. The research groups utilized the benefits of pooling academic knowledge and skills with other types of expertise for mutual gain. They embraced the limits of expertise and reached out to societal partners to truly broker knowledge, and exchange and develop capacities and perspectives to solve grand societal challenges…(More)”.

Why the future of democracy could depend on your group chats


Article by Nathan Schneider: “I became newly worried about the state of democracy when, a few years ago, my mother was elected president of her neighborhood garden club.

Her election wasn’t my worry – far from it. At the time, I was trying to resolve a conflict on a large email group I had created. Someone, inevitably, was being a jerk on the internet. I had the power to remove them, but did I have the right? I realized that the garden club had in its bylaws something I had never seen in nearly all the online communities I had been part of: basic procedures to hold people with power accountable to everyone else.

The internet has yet to catch up to my mother’s garden club.

When Alexis de Tocqueville toured the United States in the early 1830s, he made an observation that social scientists have seen over and over since: Democracy at the state and national levels depends on everyday organizations like that garden club. He called them “schools” for practicing the “general theory of association.” As members of small democracies, people were learning to be citizens of a democratic country.

How many people experience those kinds of schools today?

People interact online more than offline nowadays. Rather than practicing democracy, people most likely find themselves getting suspended from a Facebook group they rely on with no reason given or option to appeal. Or a group of friends join a chat together, but only one of them has the ability to change its settings. Or people see posts from Elon Musk inserted into their mentions on X, which he owns. All of these situations are examples of what I call “implicit feudalism.”…(More)”.