Rethinking the Measurement of Resilience for Food Systems
Paper by John M. Ulimwengu: “This paper presents a novel framework for assessing resilience in food systems, focusing on three dynamic metrics: return time, magnitude of deviation, and recovery rate. Traditional resilience measures have often relied on static and composite indicators, creating gaps in understanding the complex responses of food systems to shocks. This framework addresses these gaps, providing a more nuanced assessment of resilience in agrifood sectors. It highlights how integrating dynamic metrics enables policymakers to design tailored, sector-specific interventions that enhance resilience. Recognizing the data intensity required for these metrics, the paper indicates how emerging satellite imagery and advancements in artificial intelligence (AI) can make data collection both high-frequency and location-specific, at a fraction of the cost of traditional methods. These technologies facilitate a scalable approach to resilience measurement, enhancing the accuracy, timeliness, and accessibility of resilience data. The paper concludes with recommendations for refining resilience tools and adapting policy frameworks to better respond to the increasing challenges faced by food systems across the world…(More)”.
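To make the three dynamic metrics concrete, here is a minimal Python sketch that computes them for a hypothetical indicator series. The baseline definition and formulas are illustrative assumptions, not the paper's exact specifications.

```python
# Minimal sketch of the three dynamic resilience metrics described above,
# computed over a hypothetical time series of a food-system indicator
# (e.g., a monthly production index). Baseline and formulas are assumptions.

def resilience_metrics(series, shock_index, baseline=None):
    """Return (magnitude_of_deviation, return_time, recovery_rate)."""
    if baseline is None:
        # Assume the pre-shock average as the reference state.
        baseline = sum(series[:shock_index]) / shock_index
    post = series[shock_index:]
    # Magnitude of deviation: deepest drop below the baseline after the shock.
    magnitude = max(baseline - v for v in post)
    # Return time: periods until the indicator first regains the baseline.
    return_time = next((t for t, v in enumerate(post) if v >= baseline), None)
    # Recovery rate: average gain per period over the recovery window.
    rate = magnitude / return_time if return_time else None
    return magnitude, return_time, rate

# Hypothetical index: stable near 100, shock at t=4, gradual recovery.
index = [100, 101, 99, 100, 70, 78, 88, 95, 101, 102]
print(resilience_metrics(index, shock_index=4))  # (30.0, 4, 7.5)
```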
Setting the Standard: Statistical Agencies’ Unique Role in Building Trustworthy AI
Article by Corinna Turbes: “As our national statistical agencies grapple with new challenges posed by artificial intelligence (AI), many agencies face intense pressure to embrace generative AI as a way to reach new audiences and demonstrate technological relevance. However, the rush to implement generative AI applications risks undermining these agencies’ fundamental role as authoritative data sources. Statistical agencies’ foundational mission—producing and disseminating high-quality, authoritative statistical information—requires a more measured approach to AI adoption.
Statistical agencies occupy a unique and vital position in our data ecosystem, entrusted with creating the reliable statistics that form the backbone of policy decisions, economic planning, and social research. The work of these agencies demands exceptional precision, transparency, and methodological rigor. Implementation of generative AI interfaces, while technologically impressive, could inadvertently compromise the very trust and accuracy that make these agencies indispensable.
While public-facing interfaces play a valuable role in democratizing access to statistical information, statistical agencies need not—and often should not—rely on generative AI to be effective in that effort. For statistical agencies, an extractive AI approach – which retrieves and presents existing information from verified databases rather than generating new content – offers a more appropriate path forward. By pulling from verified, structured datasets and providing precise, accurate responses, extractive AI systems can maintain the high standards of accuracy required while making statistical information more accessible to users who may find traditional databases overwhelming. An extractive, rather than generative, approach allows agencies to modernize data delivery while preserving their core mission of providing reliable, verifiable statistical information…(More)”
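To illustrate the extractive pattern the article advocates, here is a minimal sketch: answers are retrieved verbatim from a verified table and returned with provenance, never generated. The dataset, values, and matching logic are hypothetical placeholders, not any agency's actual system.

```python
# Minimal sketch of an extractive (non-generative) statistical assistant:
# responses come verbatim from a verified store, with a citation, and the
# system declines rather than guesses. All entries below are placeholders.

VERIFIED_STATS = {
    "unemployment rate 2023": ("3.6%", "Table A-1, hypothetical labor survey"),
    "median household income 2022": ("$74,580", "hypothetical income report"),
}

def extractive_answer(query: str) -> str:
    key = query.lower().strip()
    if key in VERIFIED_STATS:
        value, source = VERIFIED_STATS[key]
        return f"{value} (source: {source})"
    # Crucially, an extractive system refuses instead of generating content.
    return "No verified statistic matches this query."

print(extractive_answer("Unemployment rate 2023"))
```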
Revealed: bias found in AI system used to detect UK benefits fraud
Article by Robert Booth: “An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.
An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.
The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.
The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.
This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.
But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.
Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system…(More)”.
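For readers unfamiliar with how a "fairness analysis" might surface a "statistically significant outcome disparity", here is a minimal sketch using a two-proportion z-test on referral rates. The group labels and counts are hypothetical; this is not the DWP's actual methodology.

```python
# Minimal sketch of an outcome-disparity check: compare the rate at which a
# model refers each group for investigation against a reference group and
# flag statistically significant gaps. Counts below are invented.
from math import sqrt

def disparity_z(referred_a, total_a, referred_b, total_b):
    """Two-proportion z-test for a difference in referral rates."""
    p_a, p_b = referred_a / total_a, referred_b / total_b
    pooled = (referred_a + referred_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: group A referred at 12%, reference group at 8%.
z = disparity_z(120, 1000, 80, 1000)
print(f"z = {z:.2f}; |z| > 1.96 suggests a significant outcome disparity")
```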
The Collaboration Playbook: A leader’s guide to cross-sector collaboration
Playbook by Ian Taylor and Nigel Ball: “The challenges facing our societies and economies today are so large and complex that, in many cases, cross-sector collaboration is not a choice, but an imperative. Yet collaboration remains elusive for many, often being put into the ‘too hard’ category. This playbook offers guidance on how we can seize collaboration opportunities successfully and rise to the challenges.
The recommendations in the playbook were informed by academic literature and practitioner experience. Rather than offer a procedural, step-by-step guide, this playbook poses provoking questions and offers frameworks that apply to different situations and objectives. While formal aspects such as contracts and procedures are well understood, the authors found that guidance was most needed on the intangible elements, sometimes referred to as ‘positive chemistry’. Aspects like leadership, trust, culture, learning and power can be game-changers for productive cross-sector collaborations, but they are hard to get right.
Structured around these five key themes, the playbook presents 18 discrete ‘plays’ for effective collaboration. The plays allow readers to delve into specific areas of interest and gain a deeper understanding of what each means for their collaborative work.
The intention of the playbook is to provide a resource that informs and guides cross-sector leaders. It will be especially relevant for those working in, and partnering with, central and local government in an effort to improve social outcomes…(More)”.
Predictability, AI, And Judicial Futurism: Why Robots Will Run The Law And Textualists Will Like It
Paper by Jack Kieffaber: “The question isn’t whether machines are going to replace judges and lawyers—they are. The question is whether that’s a good thing. If you’re a textualist, you have to answer yes. But you won’t—which means you’re not a textualist. Sorry.
Hypothetical: The year is 2030. AI has far eclipsed the median federal jurist as a textual interpreter. A new country is founded; it’s a democratic republic that uses human legislators to write laws and programs a state-sponsored Large Language Model called “Judge.AI” to apply those laws to facts. The model makes judicial decisions as to conduct on the back end, but can also provide advisory opinions on the front end; if a citizen types in his desired action and hits “enter,” Judge.AI will tell him, ex ante, exactly what it would decide ex post if the citizen were to perform the action and be prosecuted. The primary result is perfect predictability; secondary results include the abolition of case law, the death of common law, and the replacement of all judges—indeed, all lawyers—by a single machine. Don’t fight the hypothetical, assume it works. This article poses the question: Is that a utopia or a dystopia?
If you answer dystopia, you cannot be a textualist. Part I of this article establishes why: Because predictability is textualism’s only lodestar, and Judge.AI is substantially more predictable than any regime operating today. Part II-A dispatches rebuttals premised on positive nuances of the American system; such rebuttals forget that my hypothetical presumes a new nation and take for granted how much of our nation’s founding was premised on mitigating exactly the kinds of human error that Judge.AI would eliminate. And Part II-B dispatches normative rebuttals, which ultimately amount to moral arguments about objective good—which are none of the textualist’s business.
When the dust clears, you have only two choices: You’re a moralist, or you’re a formalist. If you’re the former, you’ll need a complete account of the objective good—which has evaded man for his entire existence. If you’re the latter, you should relish the fast-approaching day when all laws and all lawyers are usurped by a tin box. But you’re going to say you’re something in between. And you’re not…(More)”
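The hypothetical's "perfect predictability" follows from determinism: the same function answers the ex ante advisory query and renders the ex post decision, so a citizen's preview can never diverge from the eventual ruling. A toy sketch, with an invented rule table standing in for Judge.AI:

```python
# Toy illustration of the predictability property in the hypothetical: one
# deterministic function serves both advisory and adjudicative calls.
# The rule table is invented; it is not the article's Judge.AI.

RULES = {
    ("drive", "speed>60"): "violation: exceeds statutory limit",
    ("drive", "speed<=60"): "lawful",
}

def judge(action: str, facts: str) -> str:
    """Deterministic: identical inputs always yield identical outcomes."""
    # Default for unmatched conduct; a real system would need complete rules.
    return RULES.get((action, facts), "no applicable rule")

# The ex ante advisory opinion and the ex post decision are the same call.
assert judge("drive", "speed>60") == judge("drive", "speed>60")
print(judge("drive", "speed>60"))  # violation: exceeds statutory limit
```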
Rethinking Theories of Governance
Book by Christopher Ansell: “Are theories of governance useful for helping policymakers and citizens meet and tackle contemporary challenges? This insightful book reflects on how a theory becomes useful and evaluates a range of theories according to whether they are warranted, diagnostic, and dialogical.
By arguing that useful theory tells us what to ask, not what to do, Christopher Ansell investigates what it means for a theory to be useful. Analysing how governance theories address a variety of specific challenges, chapters examine intractable public problems, weak government accountability, violent conflict, global gridlock, poverty and the unsustainable exploitation of our natural resources. Finding significant tensions between state- and society-centric perspectives on governance, the book concludes with a suggestion that we refocus our theories of governance on possibilities for state-society synergy. Governance theories of the future, Ansell argues, should also strive for a more fruitful dialogue between instrumental, critical and explanatory perspectives.
Examining both the conceptual and empirical basis of theories of governance, this comprehensive book will be an invigorating read for scholars and students in the fields of public administration, public policy and planning, development studies, political science and urban, environmental and global governance. By linking theories of governance to concrete societal challenges, it will also be of use to policymakers and practitioners concerned with these fields…(More)”.
The Next Phase of the Data Economy: Economic & Technological Perspectives
Paper by Jad Esber et al: “The data economy is poised to evolve toward a model centered on individual agency and control, moving us toward a world where data is more liquid across platforms and applications. In this future, products will either utilize existing personal data stores or create them when they don’t yet exist, empowering individuals to fully leverage their own data for various use cases.
The analysis begins by establishing a foundation for understanding data as an economic good and the dynamics of data markets. The article then investigates the concept of personal data stores, analyzing the historical challenges that have limited their widespread adoption. Building on this foundation, the article then considers how recent shifts in regulation, technology, consumer behavior, and market forces are converging to create new opportunities for a user-centric data economy. The article concludes by discussing potential frameworks for value creation and capture within this evolving paradigm, summarizing key insights and potential future directions for research, development, and policy.
We hope this article can help shape the thinking of scholars, policymakers, investors, and entrepreneurs, as new data ownership and privacy technologies emerge, and regulatory bodies around the world mandate open flows of data and new terms of service intended to empower users as well as small-to-medium–sized businesses…(More)”.
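To make the personal data store concept concrete, here is a minimal sketch in which the individual holds the data and grants or revokes application access, which is what makes data "liquid" across platforms. The class and method names are illustrative assumptions, not a reference to any specific product or standard.

```python
# Minimal sketch of a personal data store: the user holds the data and
# controls which applications may read which attributes. Names are invented.

class PersonalDataStore:
    def __init__(self):
        self._data = {}    # attribute -> value, held by the individual
        self._grants = {}  # app -> set of attributes it may read

    def put(self, attribute, value):
        self._data[attribute] = value

    def grant(self, app, attributes):
        self._grants.setdefault(app, set()).update(attributes)

    def revoke(self, app):
        # The user can withdraw an application's access at any time.
        self._grants.pop(app, None)

    def read(self, app, attribute):
        if attribute not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {attribute}")
        return self._data[attribute]

pds = PersonalDataStore()
pds.put("purchase_history", ["2024-11-02: groceries"])
pds.grant("budget_app", {"purchase_history"})
print(pds.read("budget_app", "purchase_history"))
```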
The Emergence of National Data Initiatives: Comparing proposals and initiatives in the United Kingdom, Germany, and the United States
Article by Stefaan Verhulst and Roshni Singh: “Governments are increasingly recognizing data as a pivotal asset for driving economic growth, enhancing public service delivery, and fostering research and innovation. This recognition has intensified as policymakers acknowledge that data serves as the foundational element of artificial intelligence (AI) and that advancing AI sovereignty necessitates a robust data ecosystem. However, substantial portions of generated data remain inaccessible or underutilized. In response, several nations are initiating or exploring the launch of comprehensive national data strategies designed to consolidate, manage, and utilize data more effectively and at scale. As these initiatives evolve, discernible patterns in their objectives, governance structures, data-sharing mechanisms, and stakeholder engagement frameworks reveal both shared principles and country-specific approaches.
This blog begins to map the emergence of national data initiatives by examining three of them and exploring their strategic orientations and broader implications. They include:
- The United Kingdom’s proposed National Data Library (NDL),
- Germany’s National Data Institute, and
- the proposed National Secure Data Service (NSDS) in the United States…(More)”.
Bad data costs Americans trillions. Let’s fix it with a renewed data strategy
Article by Nick Hart & Suzette Kent: “Over the past five years, the federal government lost $200 billion to $500 billion per year to fraud and improper payments — that’s up to $3,000 taken from every working American’s pocket annually. Since 2003, these preventable losses have totaled an astounding $2.7 trillion. But here’s the good news: We already have the data and technology to largely eliminate this waste in the years ahead. The operational structure and legal authority to put these tools to work protecting taxpayer dollars need to be refreshed and prioritized.
The challenge is straightforward: Government agencies often can’t effectively share and verify basic information before sending payments. For example, federal agencies may not be able to easily check if someone is deceased, verify income or detect duplicate payments across programs…(More)”.
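To illustrate the kind of pre-payment verification the authors describe, here is a minimal sketch that checks a payee against a death registry and looks for duplicate payments across programs. The registries and record fields are hypothetical stand-ins for the cross-agency data sharing the authors call for.

```python
# Minimal sketch of pre-payment checks: verify the payee is not deceased and
# that the same payment has not already been issued under another program.
# All identifiers and datasets below are invented placeholders.

DECEASED_REGISTRY = {"111-22-3333"}
PAYMENTS_ISSUED = {("444-55-6666", "2024-12", "housing_assistance")}

def verify_before_payment(payee_id, period, program):
    issues = []
    if payee_id in DECEASED_REGISTRY:
        issues.append("payee appears in death registry")
    if (payee_id, period, program) in PAYMENTS_ISSUED:
        issues.append("duplicate payment for same period and program")
    return issues  # an empty list means the payment clears these checks

print(verify_before_payment("444-55-6666", "2024-12", "housing_assistance"))
```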
Collective Intelligence: The Rise of Swarm Systems and their Impact on Society
Book edited by Uwe Seebacher and Christoph Legat: “Unlock the future of technology with this captivating exploration of swarm intelligence. Dive into the future of autonomous systems, enhanced by cutting-edge multi-agent systems and predictive research. Real-world examples illustrate how these algorithms drive intelligent, coordinated behavior in industries like manufacturing and energy. Discover the innovative Industrial-Disruption-Index (IDI), pioneered by Uwe Seebacher, which predicts industry disruptions using swarm intelligence. Case studies from media to digital imaging offer invaluable insights into the future of industrial life cycles.
Ideal for AI enthusiasts and professionals, this book provides inspiring, actionable insights for the future. It redefines artificial intelligence, showcasing how predictive intelligence can revolutionize group coordination for more efficient and sustainable systems. A crucial chapter highlights the shift from the Green Deal to the Emerald Deal, showing how swarm intelligence addresses societal challenges…(More)”.
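As a taste of the swarm mechanics the book surveys, here is a minimal sketch in which each agent adjusts toward the local average of its neighbors, and coordinated behavior emerges without central control. The update rule and parameters are illustrative, not drawn from the book.

```python
# Minimal sketch of decentralized swarm coordination: each agent moves a
# fraction of the way toward the mean position of its nearby neighbors,
# and global alignment emerges from purely local rules.

def swarm_step(positions, neighborhood=1.0, pull=0.1):
    """One update: every agent drifts toward its neighbors' average."""
    updated = []
    for i, x in enumerate(positions):
        neighbors = [y for j, y in enumerate(positions)
                     if j != i and abs(y - x) <= neighborhood]
        target = sum(neighbors) / len(neighbors) if neighbors else x
        updated.append(x + pull * (target - x))
    return updated

agents = [0.0, 0.4, 0.9, 1.6, 2.2]
for _ in range(50):
    agents = swarm_step(agents)
print([round(a, 2) for a in agents])  # agents cluster around a shared position
```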