The City as a Commons Reloaded: From the Urban Commons to Co-Cities. Empirical Evidence on the Bologna Regulation


Chapter by Elena de Nictolis and Christian Iaione: “The City of Bologna is widely recognized for an innovative regulatory framework to enable urban commons. The “Regulation on public collaboration for the Urban Commons” has produced more than 400 pacts of collaboration and has so far been adopted by more than 180 Italian cities.

The chapter presents an empirical assessment of 280 pacts (2014-2016). The analytical approach is rooted in political economy (Polanyi 1944; Ahn & Ostrom 2003) and quality of democracy analysis (Diamond & Morlino 2005). It investigates whether a model of co-governance applied to urban assets as commons affects the democratic qualities of equality and the rule of law at the urban level. The findings suggest that legal recognition of the urban commons is not sufficient unless coupled with an experimentalist policymaking approach that institutionally redesigns the City as a platform enabling collective action by multi-stakeholder partnerships, which should be entrusted with the task of triggering neighborhood-based sustainable development. Neighborhood-scale investments that aim to seed community economic ventures emerge as a possible way to overcome the shortcomings of the first policy experiments. The findings also suggest the need for further scholarly investigation of the inclusiveness and diversity aspects of implementing urban commons policies….(More)”

Public Administration and Democracy: The Virtue and Limit of Participatory Democracy as a Democratic Innovation


Paper by Sirvan Karimi: “The expansion of public bureaucracy has been one of the most significant developments that has marked societies, particularly Western liberal democratic societies. Growing political apathy, citizen disgruntlement and the ensuing decline in electoral participation reflect the political nature of governance failures. Public bureaucracy, which has historically been saddled with derogatory and pejorative connotations, has encountered fierce assaults from multiple fronts. Out of these sharp criticisms of public bureaucracy, which have emanated from both sides of the ideological spectrum, attempts have been made to popularize and advance citizen participation in both policy formulation and policy implementation processes as innovations to democratize public administration. Despite their virtue, empowering connotations and spirit-uplifting messages to the public, these proposed democratic innovations not only have their own shortcomings and risk exacerbating the very conditions they are meant to ameliorate, but they also have the potential to undermine traditional administrative and political accountability mechanisms….(More)”.

Engaging with the public about algorithmic transparency in the public sector


Blog by the Centre for Data Ethics and Innovation (UK): “To move forward the recommendation we made in our review into bias in algorithmic decision-making, we have been working with the Central Digital and Data Office (CDDO) and BritainThinks to scope what a transparency obligation could look like in practice, and in particular, which transparency measures would be most effective at increasing public understanding about the use of algorithms in the public sector. 

Due to the low levels of awareness about the use of algorithms in the public sector (CDEI polling in July 2020 found that 38% of the public were not aware that algorithmic systems were used to support decisions using personal data), we opted for a deliberative public engagement approach. This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector and discussing their expectations for transparency, and co-designing solutions together. 

For this project, we worked with a diverse group of 36 members of the UK public, spending over five hours engaging with them over a three-week period. We focused on three particular use cases chosen to test a range of emotive responses: policing, parking and recruitment.  

The final stage was an in-depth co-design session, where participants worked collaboratively to review and iterate prototypes in order to develop a practical approach to transparency that reflected their expectations and needs for greater openness in the public sector use of algorithms. 

What did we find? 

Our research confirmed that awareness and understanding of the use of algorithms in the public sector were fairly low. Algorithmic transparency in the public sector was not a front-of-mind topic for most participants.

However, once participants were introduced to specific examples of potential public sector algorithms, they felt strongly that transparency information should be made available to the public, both citizens and experts. This included desires for: a description of the algorithm; why an algorithm was being used; contact details for more information; the data used; human oversight; and the potential risks and technicalities of the algorithm…(More)”.

Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, something in between? 

It also raises other questions, like: Does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue? 

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, which we refer to below as “malattributed comments,” are comments falsely attributed to persons who did not, in fact, submit them. Computer-generated comments are generated not by humans, but rather by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit comments to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there are a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice and comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

Sandwich Strategy


Article by the Accountability Research Center: “The “sandwich strategy” describes an interactive process in which reformers in government encourage citizen action from below, driving virtuous circles of mutual empowerment between pro-accountability actors in both state and society.

The sandwich strategy relies on mutually-reinforcing interaction between pro-reform actors in both state and society, not just initiatives from one or the other arena. The hypothesis is that when reformers in government tangibly reduce the risks/costs of collective action, that process can bolster state-society pro-reform coalitions that collaborate for change. While this process makes intuitive sense, it can follow diverse pathways and encounter many roadblocks. The dynamics, strengths and limitations of sandwich strategies have not been documented and analyzed systematically. The figure below shows a possible pathway of convergence and conflict between actors for and against change in both state and society….(More)”.

[Figure: sandwich strategy]

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper that explores social and environmental risks and expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratizing the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.

Living in Data: A Citizen’s Guide to a Better Information Future


Book by Jer Thorp: “To live in data in the twenty-first century is to be incessantly extracted from, classified and categorized, statisti-fied, sold, and surveilled. Data—our data—is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it?

Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used.

Punctuated with Thorp’s original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward….(More)”.

Living Labs for Public Sector Innovation: An Integrative Literature Review


Paper by Lars Fuglsang, Anne Vorre Hansen, Ines Mergel, and Maria Taivalsaari Røhnebæk: “The public administration literature and adjacent fields have devoted increasing attention to living labs as environments and structures enabling the co-creation of public sector innovation. However, living labs remain a somewhat elusive concept and phenomenon, and there is a lack of understanding of its versatile nature. To gain a deeper understanding of the multiple dimensions of living labs, this article provides a review assessing how the environments, methods, and outcomes of living labs are addressed in the extant research literature. The findings are drawn together in a model synthesizing how living labs link to public sector innovation, followed by an outline of knowledge gaps and future research avenues….(More)”.

A fair data economy is built upon collaboration


Report by Heli Parikka, Tiina Härkönen and Jaana Sinipuro: “For a human-driven and fair data economy to work, it must be based on three important and interconnected aspects: regulation based on ethical values; technology; and new kinds of business models. With a human-driven approach, individual and social interests determine the business conditions and data is used to benefit individuals and society.

When developing a fair data economy, the aim has been to use existing technologies, operating models and concepts across the boundaries between different sectors. The goal is to enable not only new data-based business but also easier digital everyday life that is based on the more efficient and personal management of data. The human-driven approach is closely linked to the MyData concept.

At the beginning of the IHAN project, there were very few easy-to-use, individually tailored digital services. For example, the most significant data-based consumer services were designed on the basis of the needs of large corporations. To create demand, prevailing mindsets had to be changed and decision-makers needed to be encouraged to change direction, companies had to find new business with new business models and individuals had to be persuaded to demand change.

The terms and frameworks of the platform and data economies needed further clarification for the development of a fair data economy. We sought out examples from other sectors and found that, in addition to “human-driven”, another defining concept that emerged was “fair”, with fairness defined as a key goal in the IHAN project. A fair model also takes financial aspects into account and recognises the significance of companies and new services as a source of well-being.

Why did Sitra want to tackle this challenge to begin with? What had thus far been available to people was an unfair data economy model, which needed to be changed. The data economy direction had been defined by a handful of global companies, whose business models are based on collecting and managing data on their own platforms and on their own terms. There was a need to develop an alternative, a European data economy model.

One of the tasks of the future fund is to foresee future trends, the fair and human-driven use of data being one of them. The objective was to approach the theme in a pluralistic manner from the perspectives of different participants in society. Sitra’s unique position as an independent future fund made it possible to launch the project.

A fair data economy has become one of Sitra’s strategic spearheads, and a new theme is being prepared at the time of writing. The lessons learned and tools created so far will be moved under that theme and developed further, making them available to everyone who needs them….(More)“.