Revolutionizing Governance: AI-Driven Citizen Engagement


Article by Komal Goyal: “Government-citizen engagement has come a long way over the past decade, with governments increasingly adopting AI-powered analytics, automated processes and chatbots to engage with citizens and gain insights into their concerns. A 2023 Stanford University report found that the federal government spent $3.3 billion on AI in fiscal year 2022, highlighting the remarkable upswing in AI adoption across various government sectors.

As the demands of a digitally empowered and information-savvy society constantly evolve, it is becoming imperative for government agencies to revolutionize how they interact with their constituents. I’ll discuss how AI can help achieve this and pave the way for a more responsive, inclusive and effective form of governance…(More)”.

Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data


Paper by Daniel J. Solove: “Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation, sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, and biometric and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use nonsensitive data as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most nonsensitive data give rise to inferences about sensitive data. In many privacy laws, data giving rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive, and expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake—they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does.

To be effective, privacy law must focus on harm and risk rather than on the nature of personal data…(More)”.

Will governments ever learn? A study of current provision and the key gaps


Paper by Geoff Mulgan: “The paper describes the history of training from ancient China onwards and the main forms it now takes. It suggests 10 areas where change may be needed and goes on to discuss how skills are learned, suggesting the need for more continuous learning and new approaches to capacity.

I hope anyone interested in this field will at least find it stimulating. I couldn’t find an overview of this kind available and so tried to fill the gap, if only with a personal view. This topic is particularly important for the UK which allowed its training system to collapse over the last decade. But the issues are relevant everywhere since the capacity of governments arguably has more impact on human wellbeing than anything else…(More)”.

The story of the R number: How an obscure epidemiological figure took over our lives


Article by Gavin Freeguard: “Covid-19 did not only dominate our lives in April 2020. It also dominated the list of new words entered into the Oxford English Dictionary.

Alongside Covid-19 itself (noun, “An acute respiratory illness in humans caused by a coronavirus”), the vocabulary of the virus included “self-quarantine”, “social distancing”, “infodemic”, “flatten the curve”, “personal protective equipment”, “elbow bump”, “WFH” and much else. But nestled among this pantheon of new pandemic words was a number, one that would shape our conversations, our politics, our lives for the next 18 months like no other: “Basic reproduction number (R0): The average number of cases of an infectious disease arising by transmission from a single infected individual, in a population that has not previously encountered the disease.”
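The dictionary definition above can be made concrete with a toy branching-process sketch (an illustrative example, not taken from the article): if each case infects R0 others on average in a fully susceptible population, expected cases grow geometrically with each generation of transmission, which is why values above or below 1 matter so much.

```python
# Toy illustration of the basic reproduction number R0:
# each infected person infects, on average, R0 others in a fully
# susceptible population, so expected cases grow (or shrink) geometrically.

def expected_cases(r0: float, generations: int) -> float:
    """Expected new cases in the nth generation, starting from one seed case."""
    return r0 ** generations

def cumulative_cases(r0: float, generations: int) -> float:
    """Expected total cases after n generations, including the seed case."""
    return sum(r0 ** g for g in range(generations + 1))

# R0 = 3: one case seeds 3, then 9, then 27 ...
print(expected_cases(3, 3))       # 27.0
# R0 below 1: each generation is smaller, so the outbreak dies out.
print(expected_cases(0.9, 10) < 1)  # True
```

This is the simplest possible reading of R0; real epidemiological models account for depleting susceptibility, overlapping generations and heterogeneous mixing, which is partly why the figure proved so easy to misunderstand.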


“There have been many important figures in this pandemic,” wrote The Times in January 2021, “but one has come to tower over the rest: the reproduction rate. The R number, as everyone calls it, has been used by the government to justify imposing and lifting lockdowns. Indeed while there are many important numbers — gross domestic product, parliamentary majorities, interest rates — few can compete right now with R” (tinyurl.com/v7j6cth9).

Descriptions of it at the start of the pandemic made R the star of the disaster movie reality we lived through. And it wasn’t just a breakout star of the UK’s coronavirus press conferences; in Germany, (then) Chancellor Angela Merkel made the most of her scientific background to explain the meaning of R and its consequences to the public (tinyurl.com/mva7urw5).

But for others, the “obsession” (Professor Linda Bauld, University of Edinburgh) with “the pandemic’s misunderstood metric” (Nature, tinyurl.com/y3sr6n6m) has been “a distraction”, an “unhelpful focus”; as the University of Edinburgh’s Professor Mark Woolhouse told one parliamentary select committee, “we’ve created a monster”.

How did this epidemiological number come to dominate our discourse? How useful is it? And where does it come from?…(More)”.

Defending the rights of refugees and migrants in the digital age


Primer by Amnesty International: “This is an introduction to the pervasive and rapid deployment of digital technologies in asylum and migration management systems across the globe, including the United States, United Kingdom and the European Union. Defending the rights of refugees and migrants in the digital age highlights some of the key digital technology developments in asylum and migration management systems, in particular systems that process large quantities of data, and the human rights issues arising from their use. This introductory briefing aims to build our collective understanding of these emerging technologies and hopes to add to wider advocacy efforts to stem their harmful effects…(More)”.

AI for Good: Applications in Sustainability, Humanitarian Action, and Health


Book by Juan M. Lavista Ferres, and William B. Weeks: “…delivers an insightful and fascinating discussion of how one of the world’s most recognizable software companies is tackling intractable social problems with the power of artificial intelligence (AI). In the book, you’ll see real in-the-field examples of researchers using AI with replicable methods and reusable AI code to inspire your own uses.

The authors also provide:

  • Easy-to-follow, non-technical explanations of what AI is and how it works
  • Examples of the use of AI for scientists working on mitigating climate change, showing how AI can better analyze data without human bias, remedy pattern recognition deficits, and make use of satellite and other data on a scale never seen before so policy makers can make informed decisions
  • Real applications of AI in humanitarian action, whether in speeding disaster relief with more accurate data for first responders or in helping address populations that have experienced adversity with examples of how analytics is being used to promote inclusivity
  • A deep focus on AI in healthcare where it is improving provider productivity and patient experience, reducing per-capita healthcare costs, and increasing care access, equity, and outcomes
  • Discussions of the future of AI in the realm of social benefit organizations and efforts…(More)”.

The Cult of AI


Article by Robert Evans: “…Cult members are often depicted in the media as weak-willed and foolish. But the Church of Scientology — long accused of being a cult, an allegation they have endlessly denied — recruits heavily among the rich and powerful. The Finders, a D.C.-area cult that started in the 1970s, included a wealthy oil-company owner and multiple members with Ivy League degrees. All of them agreed to pool their money and hand over control of where they worked and how they raised their children to their cult leader. Haruki Murakami wrote that Aum Shinrikyo members, many of whom were doctors or engineers, “actively sought to be controlled.”

Perhaps this feels like a reach. But the deeper you dive into the people — and subcultures that are pushing AI forward — the more cult dynamics you begin to notice.

I should offer a caveat here: There’s nothing wrong with the basic technology we call “AI.” That wide banner term includes tools as varied as text- or facial-recognition programs, chatbots, and of course sundry tools to clone voices and generate deepfakes or rights-free images with odd numbers of fingers. CES featured some real products that harnessed the promise of machine learning (I was particularly impressed by a telescope that used AI to clean up light pollution in images). But the good stuff lived alongside nonsense like “ChatGPT for dogs” (really just an app to read your dog’s body language) and an AI-assisted fleshlight for premature ejaculators. 

And, of course, bad ideas and irrational exuberance are par for the course at CES. Since 1967, the tech industry’s premier trade show has provided anyone paying attention with a preview of how Big Tech talks about itself, and our shared future. But what I saw this year and last year, from both excited futurist fanboys and titans of industry, is a kind of unhinged messianic fervor that compares better to Scientology than to the iPhone…(More)”.

Why Machines Learn: The Elegant Maths Behind Modern AI


Book by Anil Ananthaswamy: “Machine-learning systems are making life-altering decisions for us: approving mortgage loans, determining whether a tumour is cancerous, or deciding whether someone gets bail. They now influence discoveries in chemistry, biology and physics – the study of genomes, extra-solar planets, even the intricacies of quantum systems.

We are living through a revolution in artificial intelligence that is not slowing down. This major shift is based on simple mathematics, some of which goes back centuries: linear algebra and calculus, the stuff of eighteenth-century mathematics. Indeed, by the mid-1850s, much of the groundwork was already in place. It took the development of computer science and the arrival of 1990s computer chips designed for video games to ignite the explosion of AI that we see all around us today. In this enlightening book, Anil Ananthaswamy explains the fundamental maths behind AI, which suggests that the basics of natural and artificial intelligence might follow the same mathematical rules…(More)”.
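As a minimal sketch of the centuries-old calculus the book refers to (an illustration of the general idea, not code from the book): gradient descent, the workhorse of machine learning, is nothing more than repeatedly stepping against the derivative of an error function.

```python
# Minimal gradient-descent sketch: fit y = w * x by least squares,
# using only the calculus-era idea of stepping against the derivative.

def fit_slope(xs, ys, lr=0.01, steps=1000):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dL/dw for the mean squared error L = (1/n) * sum((w*x - y)^2)
        grad = (2 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # move downhill on the error surface
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with a true slope of 2
print(round(fit_slope(xs, ys), 3))  # ≈ 2.0
```

The same one-line update rule, scaled up to millions of parameters and run on the video-game chips the blurb mentions, is essentially what trains modern neural networks.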

Governable Spaces: Democratic Design for Online Life


Book by Nathan Schneider: “When was the last time you participated in an election for a Facebook group or sat on a jury for a dispute in a subreddit? Platforms nudge users to tolerate nearly all-powerful admins, moderators, and “benevolent dictators for life.” In Governable Spaces, Nathan Schneider argues that the internet has been plagued by a phenomenon he calls “implicit feudalism”: a bias, both cultural and technical, for building communities as fiefdoms. The consequences of this arrangement matter far beyond online spaces themselves, as feudal defaults train us to give up on our communities’ democratic potential, inclining us to be more tolerant of autocratic tech CEOs and authoritarian tendencies among politicians. But online spaces could be sites of a creative, radical, and democratic renaissance. Using media archaeology, political theory, and participant observation, Schneider shows how the internet can learn from governance legacies of the past to become a more democratic medium, responsive and inventive unlike anything that has come before…(More)”.

Governing Data and AI to Protect Inner Freedoms Includes a Role for IP


Article by Giuseppina (Pina) D’Agostino and Robert Fay: “Generative artificial intelligence (AI) has caught regulators everywhere by surprise. Its ungoverned and growing ubiquity is similar to that of the large digital platforms that play an important role in the work and personal lives of billions of individuals worldwide. These platforms rely on advertising revenue dependent on user data derived from numerous undisclosed sources, including through covert tracking of interactions on digital platforms, surveillance of conversations, monitoring of activity across platforms and acquisition of biometric data through immersive virtual reality games, just to name a few.

This complex milieu creates a suite of public policy challenges. One of the most important yet least explored is the intersection of intellectual property (IP), data governance, AI and the platforms’ underlying business model. The global scale, the quasi-monopolistic dominance enjoyed by the large platforms, and their control over data and data analytics have explicit implications for fundamental human rights, including freedom of thought…(More)”.