Changing Facebook’s algorithm won’t fix polarization, new study finds


Article by Naomi Nix, Carolyn Y. Johnson, and Cat Zakrzewski: “For years, regulators and activists have worried that social media companies’ algorithms were dividing the United States with politically toxic posts and conspiracies. The concern was so widespread that in 2020, Meta flung open troves of internal data for university academics to study how Facebook and Instagram would affect the upcoming presidential election.

The first results of that research show that the company’s platforms play a critical role in funneling users to partisan information with which they are likely to agree. But the results cast doubt on assumptions that the strategies Meta could use to discourage virality and engagement on its social networks would substantially affect people’s political beliefs.

“Algorithms are extremely influential in terms of what people see on the platform, and in terms of shaping their on-platform experience,” Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and one of the leaders on the research project, said in an interview.

“Despite the fact that we find this big impact in people’s on-platform experience, we find very little impact in changes to people’s attitudes about politics and even people’s self-reported participation around politics.”

The first four studies, which were released on Thursday in the journals Science and Nature, are the result of a unique partnership between university researchers and Meta’s own analysts to study how social media affects political polarization and people’s understanding and opinions about news, government and democracy. The researchers, who relied on Meta for data and the ability to run experiments, analyzed those issues during the run-up to the 2020 election. The studies were peer-reviewed before publication, a standard procedure in science in which papers are sent out to other experts in the field who assess the work’s merit.

As part of the project, researchers altered the feeds of thousands of people using Facebook and Instagram in fall of 2020 to see if that could change political beliefs, knowledge or polarization by exposing them to different information than they might normally have received. The researchers generally concluded that such changes had little impact.

The collaboration, which is expected to yield over a dozen studies, also will examine data collected after the Jan. 6, 2021, attack on the U.S. Capitol, Tucker said…(More)”.

Corporate Responsibility in the Age of AI


Essay by Maria Eitel: “In the past year, a cacophony of conversations about artificial intelligence has erupted. Depending on whom you listen to, AI is either carrying us into a shiny new world of endless possibilities or propelling us toward a grim dystopia. Call them the Barbie and Oppenheimer scenarios – as attention-grabbing and different as the Hollywood blockbusters of the summer. But one conversation is getting far too little attention: the one about corporate responsibility.

I joined Nike as its first Vice President of Corporate Responsibility in 1998, landing right in the middle of the hyper-globalization era’s biggest corporate crisis: the iconic sports and fitness company had become the face of labor exploitation in developing countries. In dealing with that crisis and setting up corporate responsibility for Nike, we learned hard-earned lessons, which can now help guide our efforts to navigate the AI revolution.

There is a key difference today. Taking place in the late 1990s, the Nike drama played out relatively slowly. When it comes to AI, however, we don’t have the luxury of time. This time last year, most people had not heard about generative AI. The technology entered our collective awareness like a lightning strike in late 2022, and we have been trying to make sense of it ever since…

Our collective future now hinges on whether companies – in the privacy of their board rooms, executive meetings, and closed-door strategy sessions – decide to do what is right. Companies need a clear North Star to which they can always refer as they pursue innovation. Google had it right in its early days, when its corporate credo was, “Don’t Be Evil.” No corporation should knowingly harm people in the pursuit of profit.

It will not be enough for companies simply to say that they have hired former regulators and propose possible solutions. Companies must devise credible and effective AI action plans that answer five key questions:

  • What are the potential unanticipated consequences of AI?
  • How are you mitigating each identified risk?
  • What measures can regulators use to monitor companies’ efforts to mitigate potential dangers and hold them accountable?
  • What resources do regulators need to carry out this task?
  • How will we know that the guardrails are working?

The AI challenge needs to be treated like any other corporate sprint. Requiring companies to commit to an action plan in 90 days is reasonable and realistic. No excuses. Missed deadlines should result in painful fines. The plan doesn’t have to be perfect – and it will likely need to be adapted as we continue to learn – but committing to it is essential…(More)”.

The GPTJudge: Justice in a Generative AI World


Paper by Grossman, Maura and Grimm, Paul and Brown, Dan and Xu, Molly: “Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they are capable of producing computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, the ability of juries to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases.

This article discusses these issues, and offers a comprehensive, yet understandable, explanation of what GenAI is and how it functions. It explores evidentiary issues that must be addressed by the bench and bar to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials. Importantly, it offers practical, step-by-step recommendations for courts and attorneys to follow in meeting the evidentiary challenges posed by GenAI. Finally, it highlights additional impacts that GenAI evidence may have on the development of substantive IP law, and its potential impact on what the future may hold for litigating cases in a GenAI world…(More)”.

Innovation Can Reboot American Democracy


Blog by Suzette Brooks Masters: “A thriving multiracial pluralist democracy is an aspiration that many people share for America. Far from being inevitable, the path to such a future is uncertain.

To stretch how we think about American democracy’s future iterations and begin to imagine the contours of the new, we need to learn from what’s emergent. So I’m going to take you on a whirlwind tour of some experiments taking place here and abroad that are the bright spots illuminating possible futures ahead.

My comments are informed by a research report I wrote last year called Imagining Better Futures for American Democracy. I interviewed dozens of visionaries in a range of fields and with diverse perspectives about the future of our democracy and the role positive visioning and futures thinking could play in reinvigorating it.

As I discuss these bright spots, I want to emphasize that what is most certain now is the accelerating and destabilizing change we are experiencing. It’s critical therefore to develop systems, institutions, norms and mindsets to navigate that change boldly and responsibly, not pretend that tomorrow will continue to look like today.

Yet when paradigms shift, as they inevitably do, and as I would argue they are doing right now, that’s a messy and confusing time that can cause lots of anxiety and disorientation. During these critical periods of transition, we must set aside or “hospice” some assumptions, mindsets, practices, and institutions, while midwifing, or welcoming in, new ones.

This is difficult to do in the best of times but can be especially so when, collectively, we suffer from a lack of imagination and vision about what American democracy could and should become.

It’s not all our fault — inertia, fear, distrust, cynicism, diagnosis paralysis, polarization, exceptionalism, parochialism, and a pervasive, dystopian media environment are dragging us down. They create very strong headwinds weakening both our appetite and our ability to dream bigger and imagine better futures ahead.

However, focusing on and amplifying promising innovations can change that dysfunctional dynamic by inspiring us and providing blueprints to act upon when the time is right.

Below I discuss two main types of innovations in the political sphere: election-related structural reforms and governance reforms, including new forms of civic engagement and government decision-making…(More)”.

What types of health evidence persuade actors in a complex policy system?


Article by Geoff Bates, Sarah Ayres, Andrew Barnfield, and Charles Larkin: “Good quality urban environments can help to prevent non-communicable diseases such as cardiovascular diseases, mental health conditions and diabetes that account for three quarters of deaths globally (World Health Organisation, 2022). More commonly however, poor quality living conditions contribute to poor health and widening inequalities (Adlakha & John, 2022). Consequently, many public health advocates hope to convince and bring together the stakeholders who shape urban development to help create healthier places.

Evidence is one tool that can be used to convince these stakeholders from outside the health sector to think more about health outcomes. Most of the literature on the use of evidence in policy environments has focused on the public sector, such as politicians and civil servants (e.g., Crow & Jones, 2018). However, urban development decision-making processes involve many stakeholders across sectors with different needs and agendas (Black et al., 2021). While government sets policy and regulatory frameworks, private sector organisations such as property developers and investors drive urban development and strongly influence policy agendas.

In our article recently published in Policy & Politics, “What types of evidence persuade actors in a complex policy system?”, we explore the use of evidence to influence different groups across the urban development system to think more about health outcomes in their decisions…

The key findings of the research were that:

  1. Evidence-based narratives have wide appeal. Narratives based on real-world and lived experiences help stakeholders to form an emotional connection with evidence and are effective for drawing attention to health problems. Powerful outcomes such as child health and mortality data are particularly persuasive. This builds on literature promoting the use of storytelling approaches for public sector actors by demonstrating its applicability within the private and third sectors….(More)”

Design in the Civic Space: Generating Impact in City Government


Paper by Stephanie Wade and Jon Freach: “When design in the private sector is used as a catalyst for innovation, it can produce insight into human experience, awareness of equitable and inequitable conditions, and clarity about needs and wants. But when we think of applying design in a government context, the complicated nature of the civic arena means that public servants need to learn and apply design in ways that are specific to the intricate and expansive ecosystem of long-standing social challenges they face, and learn new mindsets, methods, and ways of working that challenge established practices in a bureaucratic environment. Design offers tools to help navigate the ambiguous boundaries of these complex problems and improve the city’s organizational culture so that it delivers better services to residents and the communities in which they live.

For the new practitioner in government, design can seem exciting, inspiring, hopeful, and fun because over the past decade it has quickly become a popular and novel way to approach city policy and service design. In the early part of the learning process, people often report that using design helps visualize their thoughts, spark meaningful dialogue, and find connections between problems, data, and ideas. But for some, when the going gets tough—when the ambiguity of overlapping and long-standing complex civic problems, a large number of stakeholders, causes, and effects begin to surface—design practices can seem slow and confusing.

In this article we explore the growth and impact of using design in city government and best practices when introducing it into city hall to tackle complex civic sector challenges along with the highs and lows of using design in local government to help cities innovate. The authors, who have worked together to conceive, create, and deliver design training to over 100 global cities, the US federal government, and higher education, share examples from their fieldwork supported by the experiences of city staff members who have applied design methods in their jobs….(More)”.

De Gruyter Handbook of Citizens’ Assemblies


Book edited by Min Reuchamps, Julien Vrydagh and Yanina Welp: “Citizens’ Assemblies (CAs) are flourishing around the world. Quite often composed of randomly selected citizens, CAs arguably offer a possible answer to contemporary democratic challenges. Democracies worldwide are confronted with a series of disruptive phenomena, such as a widespread perception of distrust, growing polarization, and poor performance. Many actors seek to reinvigorate democracy with citizen participation and deliberation. CAs are expected to have the potential to meet this twofold objective. But, despite the deliberative and inclusive qualities of CAs, many questions remain open. The increasing popularity of CAs calls for a holistic reflection on, and evaluation of, their origins, current uses and future directions.

The De Gruyter Handbook of Citizens’ Assemblies showcases the state of the art around the study of CAs and opens novel perspectives informed by multidisciplinary research and renewed thinking about deliberative participatory processes. It discusses the latest theoretical, empirical, and methodological scientific developments on CAs and offers a unique resource for scholars, decision-makers, practitioners, and curious citizens to better understand the qualities, purposes, promises but also pitfalls of CAs…(More)”.

Attacks on Tax Privacy: How the Tax Prep Industry Enabled Meta to Harvest Millions of Taxpayers’ Sensitive Data


Congressional Report: “The investigation revealed that:

  • Tax preparation companies shared millions of taxpayers’ data with Meta, Google, and other Big Tech firms: The tax prep companies used computer code – known as pixels – to send data to Meta and Google. While most websites use pixels, it is particularly reckless for online tax preparation websites to use them on webpages where tax return information is entered unless further steps are taken to ensure that the pixels do not access sensitive information. TaxAct, TaxSlayer, and H&R Block confirmed that they had used the Meta Pixel, and had been using it “for at least a couple of years,” and all three companies had been using Google Analytics (GA) for even longer.
  • Tax prep companies shared extraordinarily sensitive personal and financial information with Meta, which used the data for diverse advertising purposes: TaxAct, H&R Block, and TaxSlayer each revealed, in response to this Congressional inquiry, that they shared taxpayer data via their use of the Meta Pixel and Google’s tools. Although the tax prep companies and Big Tech firms claimed that all shared data was anonymous, the FTC and experts have indicated that the data could easily be used to identify individuals, or to create a dossier on them that could be used for targeted advertising or other purposes. 
  • Tax prep companies and Big Tech firms were reckless about their data sharing practices and their treatment of sensitive taxpayer data: The tax prep companies indicated that they installed the Meta and Google tools on their websites without fully understanding the extent to which they would send taxpayer data to these tech firms, without consulting with independent compliance or privacy experts, and without full knowledge of Meta’s use of and disposition of the data. 
  • Tax prep companies may have violated taxpayer privacy laws by sharing taxpayer data with Big Tech firms: Under the law, “a tax return preparer may not disclose or use a taxpayer’s tax return information prior to obtaining a written consent from the taxpayer” – and they failed to do so when it came to the information that was turned over to Meta and Google. Tax prep companies can also turn over data to “auxiliary service providers in connection with the preparation of a tax return.” But Meta and Google likely do not meet the definition of “auxiliary service providers,” and the data sharing with Meta was for advertising purposes – not “in connection with the preparation of a tax return.”…(More)”.

Asymmetries: participatory democracy after AI


Article by Gianluca Sgueo in Grand Continent (FR): “When it comes to AI, the scientific community expresses divergent opinions. Some argue that it could enable democratic governments to develop more effective, and possibly more inclusive, policies. Policymakers who use AI to analyse and process large volumes of digital data would be well positioned to make decisions that are closer to the needs and expectations of communities of citizens. In the view of those who regard ‘government by algorithms’ favourably, AI creates the conditions for more effective and regular democratic interaction between public actors and civil society. Other authors, on the other hand, emphasise the many critical issues raised by the ‘implantation’ of such a complex technology in political and social systems that are already highly complex and problematic. Some believe that AI could even undermine democratic values, by perpetuating and amplifying social inequalities and distrust in democratic institutions – thus weakening the foundations of the social contract. But if everyone is right, is no one right? Not necessarily. These two opposing conceptions give us food for thought about the relationship between algorithms and democracies…(More)”.

Government at a Glance


OECD Report: “Published every two years, Government at a Glance provides reliable, internationally comparable indicators on government activities and their results in OECD countries. Where possible, it also reports data for selected non-member countries. It includes input, process, output and outcome indicators as well as contextual information for each country.

Each indicator in the publication is presented in a user-friendly format, consisting of graphs and/or charts illustrating variations across countries and over time, brief descriptive analyses highlighting the major findings conveyed by the data, and a methodological section on the definition of the indicator and any limitations in data comparability…(More)”.