The role and power of re-patterning in systems change


Blog by Griffith University Yunus Centre: “In simple terms, patterns are interconnected behaviours, relationships and structures that together make up a picture of what ‘common practice’ looks like and how it is ultimately experienced by people interacting with and in a system.

If we take Public Services in the twenty-first century, here are some examples.

Public Service organisations have most often been formed around concepts such as universal access, service delivery, social safety nets, and public provision of critical infrastructure. Built into these elements are patterns, like:

Patterns of relationships: based on objectivity, universalism, professional relationships.

Patterns of resourcing: focused on rationing, efficiency, programmatic resource flows.

Patterns of power: centred on professional expertise, needs assessments, deserving access to spaces and services.

On the surface, these are not necessarily negative and there have doubtless been many successes enabling broad access to services and infrastructure.

It’s also true though that there remain many who have not benefited, who have missed out on access or opportunity, and who have actually been harmed by and within the system.

What is needed is a foundation for public systems that moves away from goals of access to more and better servicing of communities, and towards goals around learning how we can promote patterns of thriving, aspiration, success and ‘wellbeing’…(More)”.

[Graphic: an organic, mycelium-like shape representing how everyday behaviours, mindsets, structures, practices, interactions and values are interconnected and fractal; only a small portion is visible at the surface, while most remains invisible. “The organic patterns of systems”, The Yunus Centre Griffith and Auckland Co-Design Lab, 2022.]

Responsible AI licenses: a practical tool for implementing the OECD Principles for Trustworthy AI


Article by Carlos Muñoz Ferrandis: “Recent socio-ethical concerns on the development, use, and commercialization of AI-related products and services have led to the emergence of new types of licenses devoted to promoting the responsible use of AI systems: Responsible AI Licenses, or RAILs.

RAILs are AI-specific licenses that include restrictions on how the licensee can use the AI feature due to the licensor’s concerns about the technical capabilities and limitations of the AI feature. This use-restriction approach is shared by the two existing types of these licenses. The RAIL license can be used for ML models, source code, applications and services, and data. When these licenses allow free access and flexible downstream distribution of the licensed AI feature, they are OpenRAILs.

Author: Danish Contractor, co-author of the BigScience OpenRAIL-M and chair of the RAIL Initiative

The RAIL Initiative was created in 2019 to encourage the industry to adopt use restrictions in licenses as a way to mitigate the risks of misuse and potential harm caused by AI systems…(More)”.

Data and displacement: Ethical and practical issues in data-driven humanitarian assistance for IDPs


Blog by Vicki Squire: “Ten years since the so-called “data revolution” (Pearn et al, 2022), the rise of “innovation” and the proliferation of “data solutions” have rendered the assessment of changing data practices within the humanitarian sector ever more urgent. New data acquisition modalities have provoked a range of controversies across multiple contexts and sites (e.g. Human Rights Watch, 2021, 2022a, 2022b). Moreover, a range of concerns have been raised about data sharing (e.g. Fast, 2022) and the inequities embedded within humanitarian data (e.g. Data Values, 2022).

With this in mind, the Data and Displacement project set out to explore the practical and ethical implications of data-driven humanitarian assistance in two contexts characterised by high levels of internal displacement: north-eastern Nigeria and South Sudan. Our interdisciplinary research team includes academics from each of the regions under analysis, as well as practitioners from the International Organization for Migration. From the start, the research was designed to centre the lived experiences of Internally Displaced Persons (IDPs), while also shedding light on the production and use of humanitarian data from multiple perspectives.

We conducted primary research during 2021-2022. Our research combines dataset analysis and visualisation techniques with a thematic analysis of 174 semi-structured qualitative interviews. In total we interviewed 182 people: 42 international data experts, donors, and humanitarian practitioners from a range of governmental and non-governmental organisations; 40 stakeholders and practitioners working with IDPs across north-eastern Nigeria and South Sudan (20 in each region); and 100 IDPs in camp-like settings (50 in each region). Our findings point to a disconnect between international humanitarian standards and practices on the ground, the need to revisit existing ethical guidelines such as informed consent, and the importance of investing in data literacies…(More)”.

Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking on Pol.is. This is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where, partway through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to the other groups, in a bid to break down groupthink across those groups. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”

Is digital feedback useful in impact evaluations? It depends.


Article by Lois Aryee and Sara Flanagan: “Rigorous impact evaluations are essential to determining program effectiveness. Yet, they are often time-intensive and costly, and may fail to provide the rapid feedback necessary for informing real-time decision-making and course corrections along the way that maximize programmatic impact. Capturing feedback that’s both quick and valuable can be a delicate balance.

In an ongoing impact evaluation we are conducting in Ghana, a country where smoking rates among adolescent girls are increasing with alarming health implications, we have been evaluating a social marketing campaign’s effectiveness at changing girls’ behavior and reducing smoking prevalence with support from the Bill & Melinda Gates Foundation. Although we’ve been taking a traditional approach to this impact evaluation using a year-long, in-person panel survey, we were interested in using digital feedback as a means to collect more timely data on the program’s reach and impact. To do this, we explored several rapid digital feedback approaches including social media, text message, and Interactive Voice Response (IVR) surveys to determine their ability to provide quicker, more actionable insights into the girls’ awareness of, engagement with, and feelings about the campaign. 

Digital channels seemed promising given our young, urban population of interest; however, collecting feedback this way comes with considerable trade-offs. Digital feedback poses risks to both equity and quality, potentially reducing the population we’re able to reach and the value of the information we’re able to gather. The truth is that context matters, and tailored approaches are critical when collecting feedback, just as they are when designing programs. Below are three lessons to consider when adopting digital feedback mechanisms into your impact evaluation design. 

Lesson 1: A high number of mobile connections does not mean the target population has access to mobile phones…

Lesson 2: High literacy rates and “official” languages do not mean most people are able to read and write easily in a particular language...

Lesson 3: Gathering data on taboo topics may benefit from a personal touch. …(More)”.

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

Minben 民本 as an alternative to liberal democracy


Essay by Rongxin Li: “Although theorists have proposed non-Western types of democracy, such as Asian Democracy, they have nevertheless actively marginalised these non-Western types. This is partly due to Asian Democracy’s inextricable link with Confucian traditions – many of which people commonly assume to be anti-democratic. This worry over Confucian values does not, however, detract from the fact that scholars are deliberately ignoring non-Western types of democracy because they do not follow Western narratives…

Minben is a paternalistic model of democracy. It does not involve multi-party elections and, unlike in liberal democracy, disorderly public participation is not one of its priorities. Minben relies on a theory of governance that believes carefully selected elites, usually a qualified minority, can use their knowledge and the constant pursuit of virtuous conduct to deliver the common good.

Liberal democracy maintains its legitimacy through periodic and competitive elections. Minben retains its legitimacy through its ‘output’. It is results, or policy implementation, oriented. Some argue that this performance-driven democracy cannot endure because it depends on people buying into it and consistently supporting it. But we could say the same of any democratic regime. Liberal democracy’s legitimacy is not unassailable – nor is it guaranteed.

Indeed, liberal democracy and Minben have more in common than many Western theorists concede. As Yu Keping underlined, stability is paramount in Chinese Communist Party ideology. John Keane, for example, once likened government and its legitimacy to a slippery egg. The greater the social instability, which may be driven by displeasure over the performance of ruling elites, the slipperier the egg becomes for the elites in question. Both liberal democratic regimes and Minben regimes face the same problem of dealing with social turmoil. Both look to serving the people as a means to staying atop the egg…

Minben – and this may surprise certain Western theorists – does not exclude public participation and deliberation. These instruments convey public voices and concerns to the selected technocrats tasked with deciding for the people. There is representation based on consultation here. Technocrats seek to make good decisions based on full consultation and analysis of public preferences…(More)”.

What does AI Localism look like in action? A new series examining use cases on how cities govern AI


Series by Uma Kalkar, Sara Marcucci, Salwa Mansuri, and Stefaan Verhulst: “…We call local instances of AI governance ‘AI Localism.’ AI Localism refers to the governance actions—which include, but are not limited to, regulations, legislations, task forces, public committees, and locally-developed tools—taken by local decision-makers to address the use of AI within a city or regional state.

It is necessary to note, however, that the presence of AI Localism does not mean that robust national- and state-level AI policy is not needed. Whereas local governance seems fundamental for addressing local, micro-level issues, for instance by tailoring policies to specific AI use circumstances, national AI governance should act as a key tool to complement local efforts and provide cities with a cohesive, guiding direction.

Finally, it is important to mention how AI Localism is not necessarily good governance of AI at the local level. Indeed, there have been several instances where local efforts to regulate and employ AI have encroached on public freedoms and hurt the public good….

Examining the current state of play in AI localism

To this end, The Governance Lab (The GovLab) has created the AI Localism project to collect a knowledge base and inform a taxonomy on the dimensions of local AI governance (see below). This initiative began in 2020 with the AI Localism canvas, which captures the frames under which local governance methods are developing. This series presents current examples of AI localism across the seven canvas frames of: 

  • Principles and Rights: foundational requirements and constraints of AI and algorithmic use in the public sector;
  • Laws and Policies: regulation to codify the above for public and private sectors;
  • Procurement: mandates around the use of AI in employment and hiring practices; 
  • Engagement: public involvement in AI use and limitations;
  • Accountability and Oversight: requirements for periodic reporting and auditing of AI use;
  • Transparency: consumer awareness about AI and algorithm use; and
  • Literacy: avenues to educate policymakers and the public about AI and data.

In this eight-part series, released weekly, we will present current examples of each frame of the AI localism canvas to identify themes among city- and state-led legislative actions. We end with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smarter.’…(More)”.

Income Inequality Is Rising. Are We Even Measuring It Correctly?


Article by Jon Jachimowicz et al: “Income inequality is on the rise in many countries around the world, according to the United Nations. What’s more, disparities in global income were exacerbated by the COVID-19 pandemic, with some countries facing greater economic losses than others.

Policymakers are increasingly focusing on finding ways to reduce inequality to create a more just and equal society for all. In making decisions on how to best intervene, policymakers commonly rely on the Gini coefficient, a statistical measure of resource distribution, including wealth and income levels, within a population. The Gini coefficient measures perfect equality as zero and maximum inequality as one, with higher numbers indicating a greater concentration of resources in the hands of a few.
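
To make the metric concrete, here is a minimal sketch of how a Gini coefficient can be computed from a sample of incomes. This is an illustrative Python example using the standard mean-absolute-difference formulation, not the procedure of any particular statistics bureau, and the toy income figures are hypothetical.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample: 0 = perfect equality,
    values approaching 1 = resources concentrated in a few hands."""
    x = np.sort(np.asarray(incomes, dtype=float))  # sort incomes ascending
    n = x.size
    # Equivalent to the mean absolute difference divided by twice the mean:
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), i = 1..n over sorted values
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

# Toy populations with the same total income, distributed differently
print(gini([10, 10, 10, 10]))  # 0.0   -> everyone earns the same
print(gini([1, 1, 1, 37]))     # 0.675 -> most income held by one person
```

As the two toy populations show, the coefficient rises as the same total income becomes concentrated in fewer hands, which is the property policymakers read off the metric.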

This measure has long dominated our understanding (pdf) of what inequality means, largely because this metric is used by governments around the world, is released by statistics bureaus in multiple countries, and is commonly discussed in news media and policy discussions alike.

In our paper, recently published in Nature Human Behaviour, we argue that researchers and policymakers rely too heavily on the Gini coefficient—and that by broadening our understanding of how we measure inequality, we can both uncover its impact and intervene to more effectively correct it…(More)”.

Macroscopes


Exhibit by Places and Spaces: “The term “macroscope” may strike many as being strange or even daunting. But actually, the term becomes friendlier when placed within the context of more familiar “scopes.” For instance, most of us have stared through a microscope. By doing so, we were able to see tiny plant or animal cells floating around before our very eyes. Similarly, many of us have peered out through a telescope into the night sky. There, we were able to see lunar craters, cloud belts on Jupiter, or the phases of Mercury. What both of these scopes have in common is that they allow the viewer to see objects that could otherwise not be perceived by the naked eye, either because they are too small or too distant.

But what if we want to better understand the complex systems or networks within which we operate and which have a profound, if often unperceived, impact on our lives? This is where macroscopes become such useful tools. They allow us to go beyond our focus on the single organism, the single social or natural phenomenon, or the single development in technology. Instead, macroscopes allow us to gather vast amounts of data about many kinds of organisms, environments, and technologies. And from that data, we can analyze and comprehend the way these elements co-exist, compete, or cooperate.

With the macroscope, we are allowed to see the “big picture,” a goal imagined in 1979 by Joël de Rosnay in his groundbreaking book, The Macroscope: A New World Scientific System. For the author, the macroscope would be the “symbol of a new way of seeing and understanding.” It was to be a tool “not used to make things larger or smaller but to observe what is at once too great, too slow, and too complex for our eyes.”

With these needs and insights in mind, the second decade of the Places & Spaces exhibit will invite and showcase interactive visualizations—our own exemplars of de Rosnay’s macroscope—that demonstrate the impact of different data cleaning, analysis, and visualization algorithms. It is the exhibit’s hope that this view of the “behind the scenes” process of data visualization will increase the ability of viewers to gain meaningful insights from such visualizations and empower people from all backgrounds to use data more effectively and endeavor to create maps that address their own needs and interests…(More)”.