Automating public services


Report by Anna Dent: “…Public bodies, under financial stress and looking for effective solutions, are at risk of jumping on the automation bandwagon without critically assessing whether it’s actually appropriate for their needs, and whether the potential benefits outweigh the risks. To realise the benefits of automation and minimise problems for communities and public bodies themselves, a clear-eyed approach which really gets to grips with the risks is needed. 

The temptation to introduce automation to tackle complex social challenges is strong; they are often deep-rooted and expensive to deal with, and can have life-long implications for individuals and communities. But precisely because of their complex nature they are not the best fit for rules-based automated processes, which may fail to deliver what they set out to achieve. 

Bias is increasingly recognised as a critical challenge with automation in the public sector. Bias can be introduced through training data, and can occur when automated tools are disproportionately used on a particular community. In either case, the effectiveness of the tool or process is undermined, and citizens are at risk of discrimination, unfair targeting and exclusion from services. 

Automated tools and processes rely on huge amounts of data; in public services this will often mean personal information and data about us and our lives that we may or may not feel comfortable having used. Balancing everyone’s right to privacy with the desire for efficiency and better outcomes is rarely straightforward, and if done badly can lead to a breakdown in trust…(More)”.

The double-edged sword of AI in education


Article by Rose Luckin: “Artificial intelligence (AI) could revolutionize education as profoundly as the internet has already revolutionized our lives. However, our experience with commercial internet platforms gives us pause. Consider how social media algorithms, designed to maximize engagement and ad revenue, have inadvertently promoted divisive content and misinformation, a development at odds with educational goals.

Like the commercialization of the internet, the AI consumerization trend, driven by massive investments across sectors, prioritizes profit over societal and educational benefits. This focus on monetization risks overshadowing crucial considerations about AI’s integration into educational contexts.

The consumerization of AI in education is a double-edged sword. While increasing accessibility, it could also undermine fundamental educational principles and reshape students’ attitudes toward learning. We must advocate for a thoughtful, education-centric approach to AI development that enhances, rather than replaces, human intelligence and recognizes the value of effort in learning.

As generative AI systems for education emerge, technical experts and policymakers have a unique opportunity to ensure their design supports the interests of learners and educators.

Risk 1: Overestimating AI’s intelligence

In essence, learning is not merely an individual cognitive process but a deeply social endeavor, intricately linked to cultural context, language development, and the dynamic relationship between practical experience and theoretical knowledge…(More)”.

The Tech Coup


Book by Marietje Schaake: “Over the past decades, under the cover of “innovation,” technology companies have successfully resisted regulation and have even begun to seize power from governments themselves. Facial recognition firms track citizens for police surveillance. Cryptocurrency has wiped out the personal savings of millions and threatens the stability of the global financial system. Spyware companies sell digital intelligence tools to anyone who can afford them. This new reality—where unregulated technology has become a forceful instrument for autocrats around the world—is terrible news for democracies and citizens.
In The Tech Coup, Marietje Schaake offers a behind-the-scenes account of how technology companies crept into nearly every corner of our lives and our governments. She takes us beyond the headlines to high-stakes meetings with human rights defenders, business leaders, computer scientists, and politicians to show how technologies—from social media to artificial intelligence—have gone from being heralded as utopian to undermining the pillars of our democracies. To reverse this existential power imbalance, Schaake outlines game-changing solutions to empower elected officials and citizens alike. Democratic leaders can—and must—resist the influence of corporate lobbying and reinvent themselves as dynamic, flexible guardians of our digital world.

Drawing on her experiences in the halls of the European Parliament and among Silicon Valley insiders, Schaake offers a frightening look at our modern tech-obsessed world—and a clear-eyed view of how democracies can build a better future before it is too late…(More)”.

AI mass surveillance at Paris Olympics


Article by Anne Toomey McKenna: “The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France. It’s not just the eyes of the world that will be watching. Artificial intelligence systems will be watching, too.

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose increased security risks so significant that in recent years authorities and critics have described the Olympics as the “world’s largest security operations outside of war.”

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data gathering tools. Its surveillance plans to meet those risks, including the controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister’s office has negotiated a provisional decree that is classified to permit the government to significantly ramp up traditional, surreptitious surveillance and information gathering tools for the duration of the Games. These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data…(More)”.

The impact of data portability on user empowerment, innovation, and competition


OECD Note: “Data portability enhances access to and sharing of data across digital services and platforms. It can empower users to play a more active role in the re-use of their data and can help stimulate competition and innovation by fostering interoperability while reducing switching costs and lock-in effects. However, the effectiveness of data portability in enhancing competition depends on the terms and conditions of data transfer and the extent to which competitors can make use of the data effectively. Additionally, there are potential downsides: data portability measures may unintentionally stifle competition in fast-evolving markets where interoperability requirements may disproportionately burden SMEs and start-ups. Data portability can also increase digital security and privacy risks by enabling data transfers to multiple destinations. This note presents the following five dimensions essential for designing and implementing data portability frameworks: sectoral scope; beneficiaries; type of data; legal obligations; and operational modality…(More)”.

Community consent: neither a ceiling nor a floor


Article by Jasmine McNealy: “The 23andMe breach and the Golden State Killer case are two of the more “flashy” cases, but questions of consent, especially the consent of all of those affected by biodata collection and analysis in more mundane or routine health and medical research projects, are just as important. The communities of people affected have expectations about their privacy and the possible impacts of inferences that could be made about them in data processing systems. Researchers must, then, acquire community consent when attempting to work with networked biodata. 

Several benefits of community consent exist, especially for marginalized and vulnerable populations. These benefits include:

  • Ensuring that information about the research project spreads throughout the community,
  • Removing potential barriers that might be created by resistance from community members,
  • Alleviating the possible concerns of individuals about the perspectives of community leaders, and 
  • Allowing the recruitment of participants using methods most salient to the community.

But community consent does not replace individual consent and limits exist for both community and individual consent. Therefore, within the context of a biorepository, understanding whether community consent might be a ceiling or a floor requires examining governance and autonomy…(More)”.

Digitally Invisible: How the Internet is Creating the New Underclass


Book by Nicol Turner Lee: “President Joe Biden has repeatedly said that the United States would close the digital divide under his leadership. However, the divide still affects people and communities across the country. The complex and persistent reality is that millions of residents live in digital deserts, and many more face disproportionate difficulties when it comes to getting and staying online, especially people of color, seniors, rural residents, and farmers in remote areas.

Economic and health disparities are worsening in rural communities without available internet access. Students living in urban digital deserts with little technology exposure are ill prepared to compete for emerging occupations. Even seniors struggle to navigate the aging process without access to online information and remote care.

In this book, Nicol Turner Lee, a leading expert on the American digital divide, uses personal stories from individuals around the country to show how the emerging digital underclass is navigating the spiraling online economy, while sharing their joys and hopes for an equitable and just future.

Turner Lee argues that achieving digital equity is crucial for the future of America’s global competitiveness and requires radical responses to offset the unintended consequences of increasing digitization. In the end, “Digitally Invisible” proposes a pathway to more equitable access to existing and emerging technologies, while encouraging readers to weigh in on this shared goal…(More)”.

The Data That Powers A.I. Is Disappearing Fast


Article by Kevin Roose: “For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.
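The restrictions the study describes are mechanical: a site publishes a robots.txt file, and well-behaved crawlers check it before fetching pages. As a minimal illustration of how such a file works (the crawler names and URL below are hypothetical examples, not drawn from the study), Python’s standard library can evaluate these rules directly:

```python
# A sketch of the Robots Exclusion Protocol: a robots.txt that blocks an
# AI-training crawler while still admitting other bots, checked with the
# standard-library parser. Names and URL are illustrative only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler identifying itself as "GPTBot" is refused everywhere;
# any other user agent falls through to the permissive "*" rule.
print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
print(parser.can_fetch("SearchBot", "https://example.com/article"))  # True
```

Note that robots.txt is a convention, not an enforcement mechanism: it only restricts crawlers that choose to honor it, which is part of why the study also examines terms-of-service restrictions.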

The study also found that as much as 45 percent of the data in one set, C4, had been restricted by websites’ terms of service.

“We’re seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities,” said Shayne Longpre, the study’s lead author, in an interview.

Data is the main ingredient in today’s generative A.I. systems, which are fed billions of examples of text, images and videos. Much of that data is scraped from public websites by researchers and compiled in large data sets, which can be downloaded and freely used, or supplemented with data from other sources…(More)”.

Governance of deliberative mini-publics: emerging consensus and divergent views


Paper by Lucy J. Parry, Nicole Curato, and John S. Dryzek: “Deliberative mini-publics are forums for citizen deliberation composed of randomly selected citizens convened to yield policy recommendations. These forums have proliferated in recent years but there are no generally accepted standards to govern their practice. Should there be? We answer this question by bringing the scholarly literature on citizen deliberation into dialogue with the lived experience of the people who study, design and implement mini-publics. We use Q methodology to locate five distinct perspectives on the integrity of mini-publics, and map the structure of agreement and dispute across them. We find that, across the five viewpoints, there is emerging consensus as well as divergence on integrity issues, with disagreement over what might be gained or lost by adopting common standards of practice, and possible sources of integrity risks. This article provides an empirical foundation for further discussion on integrity standards in the future…(More)”.

The Five Stages Of AI Grief


Essay by Benjamin Bratton: ““Alignment” and “human-centered AI” are just words representing our hopes and fears related to how AI feels like it is out of control — but also to the idea that complex technologies were never under human control to begin with. For reasons more political than perceptive, some insist that “AI” is not even “real,” that it is just math or just an ideological construction of capitalism turning itself into a naturalized fact. Some critics are clearly very angry at the all-too-real prospects of pervasive machine intelligence. Others recognize the reality of AI but are convinced it is something that can be controlled by legislative sessions, policy papers and community workshops. This does not ameliorate the depression felt by still others, who foresee existential catastrophe.

All these reactions may confuse those who see the evolution of machine intelligence, and the artificialization of intelligence itself, as an overdetermined consequence of deeper developments. What to make of these responses?

Sigmund Freud used the term “Copernican” to describe modern decenterings of the human from a place of intuitive privilege. After Nicolaus Copernicus and Charles Darwin, he nominated psychoanalysis as the third such revolution. He also characterized the response to such decenterings as “traumas.”

Trauma brings grief. This is normal. In her 1969 book, “On Death and Dying,” the Swiss psychiatrist Elisabeth Kübler-Ross identified the “five stages of grief”: denial, anger, bargaining, depression and acceptance. Perhaps Copernican Traumas are no different…(More)”.