Article by Neil Britto, Suparno Banerjee and Constanza Movsichoff: “The challenges associated with the design, development and maintenance of digital urban infrastructure are substantial and have to balance the needs and incentives of both public and private stakeholders. While proofs of concept and test-beds have been tried and are often successful, scaling these to an entire city has been challenging for a number of reasons:
Scope. There is too often a focus on solutions that address narrow aspects of the city’s needs.
Capital requirements. Many cities do not have adequate capital for deploying solutions at scale and might struggle to attract investment from the private sector.
Procurement. Procurement models favor vendor-buyer relationships as opposed to multi-year, multi-enterprise, complex partnerships.
Time scales. Some of the most pressing challenges that cities face will need multiple years to address. These complex journeys need partnerships that can withstand the pressures of time, budgets and expectations.
Data. A nuanced understanding of public concern over data sourcing and use can be critical for a successful public-private collaboration. These dynamics contribute to the unique challenges and opportunities for smart city public-private collaborations that range from intelligent street lighting to broadband access.
In recognition of these challenges, the World Economic Forum’s G20 Global Smart Cities Alliance assembled a taskforce in 2021 to look for best practices and model policies in the area of public-private collaborations. That taskforce, composed of experts and officials from cities, companies and institutions deeply involved in smart city projects, compiled case studies, insights and feedback from across the sector. As members of that taskforce, we are happy to provide a distillation of these resources in the form of our new Primer for Smart City Public Private Collaborations…(More)”.
Report by Temilola Afolabi: “Residential segregation is related to inequalities in education, job opportunities, political power, access to credit, access to health care, and more. Steering, redlining, mortgage lending discrimination, and other historic policies have all played a role in creating this state of affairs.
Over time, federal efforts including the Fair Housing Act and Home Mortgage Disclosure Act have been designed to improve housing equity in the United States. While these laws have not been entirely effective, they have made new kinds of data available—data that can shed light on some of the historic drivers of housing inequity and help inform tailored solutions to their ongoing impact.
This report explores a number of current opportunities to strengthen longstanding data-driven tools to address housing equity. The report also shows how the effects of mortgage lending discrimination and other historic practices are still being felt today. At the same time, it outlines opportunities to apply data to increase equity in many areas related to the homeownership gap, including negative impacts on health and well-being, socioeconomic disparities, and housing insecurity….(More)”.
Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.
Looking to the future of non-state drone use as an example, the transfer of weapon technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the playing field by asymmetrically advantaging conventionally weaker forces. The increased diffusion of drone technology makes it more likely that future wars will also be drone wars, whether those drones are autonomous systems or not. In the hands of non-state actors, this technology implies that future Western missions against, say, insurgent or guerrilla forces will be more difficult.
Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets — whether by targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or by injecting fallacious data into satellite imagery used in situational awareness. Drawing on international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed definition of what constitutes cyberattacks or cyber hostilities within IHL.
In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But in addition to this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.
Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, suggesting that regulations should mandate a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.
Guide by Siddharth Peter de Souza and Lisa Hahn: “…interactive workbook for socio-legal research projects. It employs the idea of a “lab” as a space for interactive and experiential learning. As an introductory book, it addresses researchers of all levels who are beginning to explore interdisciplinary research on law and are looking for guidance on how to do so. Likewise, the book can be used by teachers and peer groups to experiment with teaching and thinking about law in action through lab-based learning…
The book covers themes and questions that may arise during a socio-legal research project. This starts with examining what research and interdisciplinarity mean and in which forms they can be practiced. After an overview of the research process, we will discuss how research in action is often unpredictable and messy. Thus, the practical and ethical challenges of doing research will be discussed, along with processes of knowledge production and the assumptions we bring as researchers.
Conducting a socio-legal research project further requires an overview of the theoretical landscape. We will introduce general debates about the nature, functions, and effects of law in society. Further, common dichotomies in socio-legal research such as “law” and “the social” or “qualitative” and “quantitative” research will be explored, along with suggested ways to bridge them.
Turning to the application side of socio-legal research, the book delves deeper into questions of data on law and society, where to collect it and how to deal with it in a reflexive manner. It discusses different methods of qualitative socio-legal research and offers ways in which they can be experienced through exercises and simulations. In the research process, generating research results is followed by publishing and communicating them. We will explore different ways to ensure the outreach and impact of one’s research by communicating results through journals, blogs or social media. Finally, the book also discusses academia as a social space and the value of creating and using networks and peer groups for mutual support.
Overall, the workbook is designed to accompany and inspire researchers on their way through a socio-legal research project and to empower the reader into thinking more creatively about their methods, while at the same time demystifying them…(More)”.
OECD Report: “The digital age provides great opportunities to transform how public services are designed and delivered. The OECD Good Practice Principles for Service Design and Delivery in the Digital Age provide a clear, actionable and comprehensive set of objectives for the high-quality digital transformation of public services. Reflecting insights gathered from across OECD member countries, these nine principles are arranged under three pillars of “Build accessible, ethical and equitable public services that prioritise user needs, rather than government needs”; “Deliver with impact, at scale and with pace”; and “Be accountable and transparent in the design and delivery of public services to reinforce and strengthen public trust”. The principles are advisory rather than prescriptive, allowing for local interpretation and implementation. They should also be considered in conjunction with wider OECD work to equip governments to harness the potential of digital technology and data to improve outcomes for all…(More)”.
Report by Evan D. Peet, Brian G. Vegetabile, Matthew Cefalu, Joseph D. Pane, Cheryl L. Damberg: “Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. However, overconfidence in results and incorrectly interpreted algorithms can lead to peril, such as the perpetuation of structural inequities. In this Perspective, the authors give an overview of ML and discuss the importance of its interpretability. In addition, they offer the following recommendations, which will help policymakers develop trustworthy, transparent, and accountable information that leads to more-objective and more-equitable policy decisions: (1) improve data through coordinated investments; (2) approach ML expecting interpretability, and be critical; and (3) leverage interpretable ML to understand policy values and predict policy impacts…(More)”.
Report by Anthony McCosker, Frances Shaw, Xiaofang Yao and Kath Albury: “As community services rapidly digitise, they are generating more data than ever before. These transformations are driving innovation in data analysis and enthusiasm about the potential for data-driven decision making. However, the increased use of personal data and automated systems raises ethical issues, including the need to earn community trust, and introduces challenges in building knowledge, skills and capability.
Despite optimism across the not-for-profit (NFP) sector about the use of data analysis and automation to improve services and social impact, we are already seeing a growing data divide. Private sector companies have for some time invested heavily in data science and machine learning. However, many in the NFP sector are unsure how to meet the demands of these digital and data transformations. With limited resources, small, medium and large organisations alike face challenges in building their data capability and channelling it toward improved social outcomes. Working with marginalised clients, collecting sensitive personal information, and tackling seemingly intractable cycles of disadvantage, the sector needs a data capability revolution.
This short guide sets out a Data Capability Framework developed with and for the NFP sector and explains how it can be used to raise the bar in the use of data for impact and innovation. It conceptualises the core dimensions of data capability that need to be addressed. These dimensions can be tailored to meet an organisation’s specific strategic goals, impact and outcomes.
The Framework distils the challenges and successes of organisations we have worked with. It represents both the factors that underpin effective data capability and the pathways to achieving it. In other words, as technologies and data science techniques continue to change, data capability is both an outcome to aspire to and a dynamic, ongoing process of experimentation and adaptation…(More)”.
Report by Ellen P. Goodman and Julia Tréhu: “…finds that auditing could be a robust means of holding AI systems accountable, but today’s auditing regimes are not yet adequate to the task. The report assesses the effectiveness of various auditing regimes and proposes guidelines for creating trustworthy auditing systems.
Various government and private entities rely on or have proposed audits as a way of ensuring AI systems meet legal, ethical and other standards. This report finds that audits can in fact provide an agile co-regulatory approach—one that relies on both governments and private entities—to ensure societal accountability for algorithmic systems through private oversight.
But the “algorithmic audit” remains ill-defined and inexact, whether applied to social media platforms or to AI systems generally. There is a significant risk that inadequate audits will obscure problems with algorithmic systems. A poorly designed or executed audit is at best meaningless and at worst excuses the very harms it claims to mitigate.
Inadequate audits, or audits without clear standards, provide false assurance of compliance with norms and laws, “audit-washing” problematic or illegal practices. As with green-washing and ethics-washing before it, audit-washing lets the audited entity claim credit without doing the work.
The paper identifies the core specifications needed for algorithmic audits to serve as a reliable AI accountability mechanism:
“Who” conducts the audit—clearly defined qualifications, conditions for data access, and guardrails for internal audits;
“What” is the type and scope of audit—including its position within a larger sociotechnical system;
“Why” the audit is being conducted—whether for narrow legal standards or broader ethical goals—which, along with potential costs, is essential for audit comparison; and
“How” are the audit standards determined—an important baseline for the development of audit certification mechanisms and to guard against audit-washing.
Algorithmic audits have the potential to increase the reliability and innovation of technology in the twenty-first century, much as financial audits transformed the way businesses operated in the twentieth century. They will take different forms, either within a sector or across sectors, especially for systems that pose the highest risk. Ensuring that AI is accountable and trusted is key to ensuring that democracies remain centers of innovation while shaping technology to democratic values…(More)”
Report by Sara Marcucci, Uma Kalkar, and Stefaan Verhulst: “…serves as a primer for policymakers and practitioners to learn about current governance practices and inspire their own work in the field. In this report, we present the fundamentals of AI governance, the value proposition of such initiatives, and their application in cities worldwide to identify themes among city- and state-led governance actions. We close with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter’, which include:
Principles provide a North Star for governance;
Public engagement provides a social license;
AI literacy enables meaningful engagement;
Tap into local expertise;
Innovate in how transparency is provided;
Establish new means for accountability and oversight;
Signal boundaries through binding laws and policies;
Use procurement to shape responsible AI markets;
Establish data collaboratives to tackle asymmetries; and
Make good governance strategic.
Considered together, these governance practices, local AI governance examples, and ten overarching lessons provide an incipient framework for implementing and assessing AI localism initiatives in cities around the world….(More)”
Press Release: “The Commission welcomes the agreement reached yesterday with the Parliament and the Council on the European declaration on digital rights and principles. The declaration, proposed in January, establishes a clear reference point about the kind of human-centred digital transformation that the EU promotes and defends, at home and abroad.
It builds on key EU values and freedoms and will benefit all individuals and businesses. The declaration will also provide a guide for policymakers and companies when dealing with new technologies. The declaration focuses on six key areas: putting people at the centre of the digital transformation; solidarity and inclusion; freedom of choice; participation in digital life; safety and security; and sustainability…(More)” See also: European Digital Rights and Principles