Novel forms of governance with high levels of civic self-reliance


Thesis by Hiska Ubels: “Enduring depopulation and ageing have affected the liveability of many of the smaller villages in the more peripheral rural municipalities of the Netherlands. Combined with a general climate of austerity and structural public budget cuts, this has led both communities and local governments to search for solutions in which citizens take on more responsibility and obtain higher levels of local autonomy in dealing with local liveability challenges.

This PhD thesis explores how novel forms of governance with high levels of civic self-reliance can be understood from the perspectives of the residents involved, local governments and the supposed beneficiaries. It also discusses the dynamics, potentials and limitations that come to the fore. To achieve this, it first focuses on how responsibilities and decision-making power shift between local governments and citizens in experimental governance initiatives over time, and on the main factors that enhance or obstruct higher levels of civic autonomy. It then investigates the influence of government involvement on a civic initiative’s organisational structure and governance process, and thereby on the key conditions of its civic self-steering capacity. In addition, it examines how novel governance forms with citizens in the lead are experienced by the community members to whose liveability they are supposed to contribute. Lastly, it explores the reasons why citizens do not engage in such initiatives….(More)”.

NGOs embrace GDPR, but will it be used against them?


Report by Vera Franz et al: “When the world’s most comprehensive digital privacy law – the EU General Data Protection Regulation (GDPR) – took effect in May 2018, media and tech experts focused much of their attention on how corporations, which hold massive amounts of data, would be affected by the law.

This focus was understandable, but it left some important questions under-examined, specifically about non-profit organizations that operate in the public’s interest. How would non-governmental organizations (NGOs) be impacted? What does GDPR compliance mean in very practical terms for NGOs? What are the challenges they are facing? Could the GDPR be ‘weaponized’ against NGOs and if so, how? What good compliance practices can be shared among non-profits?

Ben Hayes and Lucy Hannah from Data Protection Support & Management and I have examined these questions in detail and released our findings in this report.

Our key takeaway: GDPR compliance is an integral part of organisational resilience, and it requires resources and attention from NGO leaders, foundations and regulators to defend their organisations against attempts by governments and corporations to misuse the GDPR against them.

In a political climate where human rights and social justice groups are under increasing pressure, GDPR compliance needs to be given the attention it deserves by NGO leaders and funders. Lack of compliance will attract enforcement action by data protection regulators and create opportunities for retaliation by civil society adversaries.

At the same time, since the law came into force, we recognise that some NGOs have over-complied with the law, possibly diverting scarce resources and hampering operations.

For example, during our research, we discovered a small NGO that undertook an advanced and resource-intensive compliance process (a Data Protection Impact Assessment or DPIA) for all processing operations. DPIAs are only required for large-scale and high-risk processing of personal data. Yet this NGO, which holds very limited personal data and undertakes no marketing or outreach activities, engaged in this complex and time-consuming assessment because the organization was under enormous pressure from their government. They told us they “wanted to do everything possible to avoid attracting attention.”…

Our research also found that private companies, individuals and governments who oppose the work of an organisation have used the GDPR to try to keep NGOs from publishing their work. To date, NGOs have successfully fought against this misuse of the law….(More)”.

Smarter government or data-driven disaster: the algorithms helping control local communities


Release by MuckRock: “What is the chance you, or your neighbor, will commit a crime? Should the government change a child’s bus route? Add more police to a neighborhood or take some away?

Everyday government decisions, from bus routes to policing, used to be based on limited information and human judgment. Governments now collect and analyze hundreds of data points every day to automate many of those decisions.

Does handing government decisions over to algorithms save time and money? Can algorithms be fairer or less biased than human decision making? Do they make us safer? Automation and artificial intelligence could improve the notorious inefficiencies of government, but they could also exacerbate existing errors in the data being used to power them.

MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) have compiled a collection of algorithms used in communities across the country to automate government decision-making.

Go right to the database.

We have also compiled policies and other guiding documents local governments use to make room for the future use of algorithms. You can find those as a project on DocumentCloud.

View policies on smart cities and technologies

These collections are a living resource and a communal attempt to collect records and known instances of automated decision-making in government….(More)”.

Digital democracy: Is the future of civic engagement online?


Paper by Gianluca Sgueo: “Digital innovation is radically transforming democratic decision-making. Public administrations are experimenting with mobile applications (apps) to provide citizens with real-time information, using online platforms to crowdsource ideas, and testing algorithms to engage communities in day-to-day administration. The key question is what this technological breakthrough means for governance systems created long before digital disruption. On the one hand, policy-makers are hoping that technology can be used to legitimise the public sector, re-engage citizens in politics and combat civic apathy. Scholars, on the other hand, point out that, if the digitalisation of democracy is left unquestioned, the danger is that the building blocks of democracy itself will be eroded.

This briefing examines three key global trends that are driving the ongoing digitalisation of democratic decision-making. The first is demographic patterns, which highlight growing global inequalities. Ten years from now, power differentials among social groups in the West will be on the rise, whereas in Eastern countries democratic freedoms will be at risk of further decline.

Second, a more urbanised global population will make cities ideal settings for innovative approaches to democratic decision-making. Current instances of digital democracy in use at the local level include blockchain technology for voting and online crowdsourcing platforms.

Third, technological advancements will cut the costs of civic mobilisation and pose new challenges for democratic systems. Going forward, democratic decision-makers will be required to bridge digital literacy gaps, secure public structures against hacking, and protect citizens’ privacy….(More)”.

Re-imagining “Action Research” as a Tool for Social Innovation and Public Entrepreneurship


Stefaan G. Verhulst at The GovLab: “We live in challenging times. From climate change to economic inequality and forced migration, the difficulties confronting decision-makers are unprecedented in their variety, as well as in their complexity and urgency. Our standard policy toolkit seems stale and ineffective while existing governance institutions are increasingly outdated and distrusted.

To tackle today’s challenges, we need not only new solutions but new ways of arriving at solutions. In particular, we need fresh research methodologies that can provide actionable insights on 21st century conditions. Such methodologies would allow us to redesign how decisions are made, how public services are offered, and how complex problems are solved around the world. 

Rethinking research is a vast project, with multiple components. This new essay focuses on one particular area of research: action research. In the essay, I first explain what we mean by action research, and also explore some of its potential. I subsequently argue that, despite that potential, action research is often limited as a method because it remains embedded in past methodologies; I attempt to update both its theory and practice for the 21st century.

Although this article represents only a beginning, my broader goal is to re-imagine the role of action research for social innovation, and to develop an agenda that could provide for what Amar Bhide calls “practical knowledge” at all levels of decision making in a systematic, sustainable, and responsible manner.  (Full Essay Here).”

Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI


Paper by Fjeld, Jessica and Achten, Nele and Hilligoss, Hannah and Nagy, Adam and Srikumar, Madhulika: “The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these “AI principles,” there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.

Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus…(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.

10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade


Future of Privacy Forum: “Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent….(More)”.

The Gray Spectrum: Ethical Decision Making with Geospatial and Open Source Analysis


Report by The Stanley Center for Peace and Security: “Geospatial and open source analysts face decisions in their work that can directly or indirectly cause harm to individuals, organizations, institutions, and society. Though analysts may try to do the right thing, such ethically informed decisions can be complex. This is particularly true for analysts working on issues related to nuclear nonproliferation or international security, analysts whose decisions on whether to publish certain findings could have far-reaching consequences.

The Stanley Center for Peace and Security and the Open Nuclear Network (ONN) program of One Earth Future Foundation convened a workshop to explore these ethical challenges, identify resources, and consider options for enhancing the ethical practices of geospatial and open source analysis communities.

This Readout & Recommendations brings forward observations from that workshop. It describes ethical challenges that stakeholders from relevant communities face. It concludes with a list of needs participants identified, along with possible strategies for promoting sustaining behaviors that could enhance the ethical conduct of the community of nonproliferation analysts working with geospatial and open source data.

Some Key Findings

  • A code of ethics could serve important functions for the community, including giving moral guidance to practitioners, enhancing public trust in their work, and deterring unethical behavior. Participants in the workshop saw a significant value in such a code and offered ideas for developing one.
  • Awareness of ethical dilemmas and strong ethical reasoning skills are essential for sustaining ethical practices, yet professionals in this field might not have easy access to such training. Several approaches could improve ethics education for the field overall, including starting a body of literature, developing model curricula, and offering training for students and professionals.
  • Other stakeholders—governments, commercial providers, funders, organizations, management teams, etc.—should contribute to the discussion on ethics in the community and reinforce sustaining behaviors….(More)”.

Rheomesa. A New Global System for Catastrophe Prevention, Response & Recovery


Paper by Andrew Doss, Jonas Bedford-Strohm and Leanne Erdberg Steadman: “This paper identifies three structural vacuums in catastrophe governance today that allow for the greatest risks humanity faces to be externalized from decision-making. To mitigate the impact of these risks, The Rheomesa (“fluid table”) provides (1) a deliberative decision-making process between currently siloed entities in various sectors managing the outcome of catastrophes, including government, the private sector, NGOs, IGOs, and hybrid entities, with (2) a prospective, long-term accountability and incentive mechanism that (3) comprehensively addresses the three interdependent tasks societies face surrounding catastrophes – prevention, response, and recovery….(More)”.