Technology rules? The advent of new technologies in the justice system


Report by the Justice and Home Affairs Committee (House of Lords): “In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism. Our Committee has limited its investigation to only one area–how these advanced technologies are used in our justice system. Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.

We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and in finding solutions to often complex problems. But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.

When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained? Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening. Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.
Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market. And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality–despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness, which are often untested and unproven…(More)”.

Evidence is a policymaker’s biggest weapon


Report by Jacquelyn Zhang: “Fundamentally, public policy is supposed to address serious social problems. However, poorly designed policies exist. Often this happens when a well-intentioned policy generates unexpected and unintended consequences, and sometimes, these consequences leave policymakers farther away from their goal than when they started.

Consider just a few examples.

The first is the impact of an immigration law that was used in the United States ostensibly to control the flow of undocumented immigrants into the country. The controversial bill imposed extreme restrictions on undocumented immigrants in the state of Alabama and limited every aspect of immigrants’ lives.

Employing a synthetic control methodology, the study showed that the bill had a substantial and negative unintended effect – an increase in violent crimes. This could be linked back to the bill because while violent crime increased, property crime did not (a toy sketch of the method’s basic logic follows this excerpt).

This may be because the passage of one of the country’s strictest anti-immigration laws signalled to the community that the system had more tolerance for discrimination against undocumented immigrants in Alabama, fuelling distrust and eventually violent conflict.

This is not a freak event either. Policymakers know that enacting laws doesn’t just change the wording of legislation. It shapes social norms, prescribes attitudes, and affects community behaviour. Of course, this is also why good policy-making can be so productive…(More)”.
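
For readers unfamiliar with synthetic control, here is a minimal sketch of the method’s logic in Python. It is a toy illustration, not the study’s code: all numbers are invented, and the donor pool, years, and crime rates are hypothetical. The idea is to fit non-negative weights over untreated “donor” states so that their weighted combination tracks the treated state’s pre-law outcomes, then read the post-law gap between the treated state and that synthetic counterfactual as the law’s estimated effect.

```python
# Toy sketch of synthetic control: all data below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical violent-crime rates (per 100,000): 8 pre-law years for the
# treated state and for 20 untreated "donor" states over the same years.
pre_treated = rng.normal(430.0, 10.0, size=8)
pre_donors = rng.normal(430.0, 10.0, size=(8, 20))

def pre_treatment_gap(w):
    """Squared distance between the treated state and the weighted donor pool."""
    return float(np.sum((pre_treated - pre_donors @ w) ** 2))

n_donors = pre_donors.shape[1]
result = minimize(
    pre_treatment_gap,
    x0=np.full(n_donors, 1.0 / n_donors),  # start from equal weights
    bounds=[(0.0, 1.0)] * n_donors,        # weights are non-negative
    constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),  # and sum to 1
)
weights = result.x

# Post-law outcomes: the gap between the treated state and its synthetic
# counterfactual is the estimated effect of the law.
post_treated = rng.normal(460.0, 10.0, size=5)
post_donors = rng.normal(430.0, 10.0, size=(5, 20))
estimated_effect = post_treated - post_donors @ weights
print(f"Mean estimated post-law gap: {estimated_effect.mean():+.1f} per 100,000")
```

Under the reasoning quoted above, running the same procedure on property-crime rates should show little or no gap; that placebo-style contrast is what lets the rise in violent crime be attributed to the law rather than to a general shift in crime.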

Artificial Intelligence for Children


WEF Toolkit: “Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers. AI determines the videos children watch online, their curriculum as they learn, and the way they play and interact with others.

This toolkit, produced by a diverse team of youth, technologists, academics and business leaders, is designed to help companies develop trustworthy artificial intelligence (AI) for children and youth and to help parents, guardians, children and youth responsibly buy and safely use AI products.

AI can be used to educate and empower children and youth and have a positive impact on society. But children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity and lack of accessibility. AI must be designed inclusively to respect the rights of the child user. Child-centric design can protect children and youth from the potential risks posed by the technology.

AI technology must be created so that it is both innovative and responsible. Responsible AI is safe, ethical, transparent, fair, accessible and inclusive. Designing responsible and trusted AI is good for consumers, businesses and society. Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and help children use them safely.

What is at stake? AI will determine the future of play, childhood, education and societies. Children and youth represent the future, so everything must be done to support them to use AI responsibly and address the challenges of the future.

This toolkit aims to help responsibly design, consume and use AI. It is designed to help companies, designers, parents, guardians, children and youth make sure that AI respects the rights of children and has a positive impact in their lives…(More)”.

How Do We End Wars? A Peace Researcher Puts Forward Some Innovative Approaches


Interview by Theodor Schaarschmidt: “Thania Paffenholz is an expert in international relations, based in Switzerland and Kenya, who conducts research on sustainable peace processes and advises institutions such as the United Nations, the European Union and the Organization for Security and Co-operation in Europe (OSCE). She is executive director of Inclusive Peace, a think tank that accompanies peace processes worldwide. Paffenholz talked with Spektrum der Wissenschaft, the German-language edition of Scientific American, about new ways to think about peacekeeping…

It is absurd that the fate of the country is mainly discussed by men older than 60, as is usual in this type of negotiation. Where is the rest of the population? What about women? What about younger people? Do they really want the same things as those in power? How can their perspectives be carried into the peace processes? There are now concepts for inclusive negotiation in which delegations from civil society discuss issues together with the leaders. In Eastern Europe, however, there are only a few examples of this….(More)”.

Decolonize Data


Essay by Nithya Ramanathan, Jim Fruchterman, Amy Fowler & Gabriele Carotti-Sha: “The social sector aims to empower communities with tools and knowledge to effect change for themselves, because community-driven change is more likely to drive sustained impact than attempts to force change from the outside. This commitment should include data, which is increasingly essential for generating social impact. Today the effective implementation and continuous improvement of social programs all but requires the collection and analysis of data.

But all too often, social sector practitioners, including researchers, extract data from individuals, communities, and countries for their own purposes, and do not even make it available to them, let alone enable them to draw their own conclusions from it. With data flows the power to make informed decisions.

It is therefore counterproductive, and painfully ironic, that we have ignored our fundamental principles when it comes to data. We see donors and leading practitioners making a sincere move to decolonize aid. However, if we are truly committed to decolonizing the practices in aid, then we must also examine the ownership and flow of data.

Decolonizing data would not only help ensure that the benefits of data accrue directly to the rightful data owners but also open up more intentional data sharing driven by the rightful data owners—the communities we claim to empower…(More)”.

Lessons from the COVID data wizards


Article by Lynne Peeples: “In March 2020, Beth Blauer started hearing anecdotally that COVID-19 was disproportionately affecting Black people in the United States. But the numbers to confirm that disparity were “very limited”, says Blauer, a data and public-policy specialist at Johns Hopkins University in Baltimore, Maryland. So, her team, which had developed one of the most popular tools for tracking the spread of COVID-19 around the world, added a new graphic to their website: a colour-coded map tracking which US states were — and were not — sharing infection and death data broken down by race and ethnicity.

They posted the map to their data dashboard — the Coronavirus Resource Center — in mid-April 2020 and promoted it through social media and blogs. At the time, just 26 states included racial information with their death data. “Then we started to see the map rapidly filling in,” says Blauer. By the middle of May 2020, 40 states were reporting that information. For Blauer, the change showed that people were paying attention. “And it confirmed that we have the ability to influence what’s happening here,” she says.

COVID-19 dashboards mushroomed around the world in 2020 as data scientists and journalists shifted their work to tracking and presenting information on the pandemic — from infection and death rates, to vaccination data and other variables. “You didn’t have any data set before that was so essential to how you plan your life,” says Lisa Charlotte Muth, a data designer and blogger at Datawrapper, a Berlin-based company that helps newsrooms and journalists to enrich their reporting with embeddable charts. “The weather, maybe, was the closest thing you could compare it to.” The growth in the service’s popularity was impressive. In January 2020 — before the pandemic — Datawrapper had 260 million chart views on its clients’ websites. By April that year, that monthly figure had shot up to more than 4.7 billion.

Policymakers, too, have leaned on COVID-19 data dashboards and charts to guide important decisions. And they had hundreds of local and global examples to reference, including academic enterprises such as the Coronavirus Resource Center, as well as government websites and news-media projects…(More)”.

Social-media reform is flying blind


Paper by Chris Bail: “As Russia continues its ruthless war in Ukraine, pundits are speculating about what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence — and even genocide — it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.

The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.

Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?

Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.

In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on these platforms, alongside bots that can simulate social-media users.

We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.

A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on US and Western-Europe politics, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al. Preprint at https://doi.org/hmq2; 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance….(More)”.

Humans in the Loop


Paper by Rebecca Crootof, Margot E. Kaminski and W. Nicholson Price II: “From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.

Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement and law may indirectly encourage or discourage human involvement, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward….(More)”.

What ‘counts’ as accountability, and who decides?


Working Paper by Jonathan Fox: “Accountability is often treated as a magic bullet, an all-purpose solution to a very wide range of problems—from corrupt politicians or the quality of public service provision to persistent injustice and impunity. The concept has become shorthand to refer to diverse efforts to address problems with the exercise of power. In practice, the accountability idea is malleable, ambiguous — and contested.

This working paper unpacks diverse understandings of accountability ideas, using the ‘keywords’ approach. This tradition takes everyday big ideas whose meanings are often taken for granted and makes their subtexts explicit. The proposition here is that ambiguous or contested language can either constrain or enable possible strategies for promoting accountability. After all, different potential coalition partners may use the same term with different meanings—or may use different terms to communicate the same idea. Indeed, the concept’s fundamental ambiguity is a major reason why it can be difficult to communicate ideas about accountability across disciplines, cultures, and languages. The goal here is to inform efforts to find common ground between diverse potential constituencies for accountable governance.

This analysis is informed by dialogue with advocates and reformers from many countries and sectors, many of whom share their ideas in blogposts on the Accountability Keywords website (see also #AccountabilityKeyword on social media). Both the working paper and blogposts reflect on accountability-related words and sayings that resonate with popular cultures, to get a better handle on what sticks.

The format of the working paper is nonlinear, designed so that readers can go right to the keywords that spark their interest:

  • The introduction maps the landscape of accountability keywords.
  • Section 2 addresses the question of what ‘counts’ as accountability.
  • Section 3 identifies big concepts that overlap with accountability but are not synonyms – such as good governance, democracy, responsiveness, and responsibility.
  • Section 4 shows the relevance of accountability adjectives by spelling out different ways in which the idea is understood.
  • Section 5 unpacks widely used, emblematic keywords in the field.
  • Section 6 considers more specialized keywords, focusing on examples that serve as shorthand for big ideas within specific communities of practice.
  • Section 7 brings together a range of widely-used accountability sayings, from the ancient to the recently-invented—illustrating the enduring and diverse nature of accountability claims.
  • Section 8 makes a series of propositions for discussion…(More)”.

Open Data for Social Impact Framework


Framework by Microsoft: “The global pandemic has shown us the important role of data in understanding, assessing, and taking action to solve the challenges created by COVID-19. However, nearly all organizations, large and small, still struggle to make data relevant to their work. Despite the value data provides, many organizations fail to harness its power to improve outcomes.

Part of this struggle stems from the “data divide” – the gap that exists between countries and organizations that have effective access to data to help them innovate and solve problems and those that do not. To close this divide, Microsoft launched the Open Data Campaign in 2020 to help realize the promise of more open data and data collaborations that drive innovation.

One of the key lessons we’ve learned from the Campaign and the work we’ve been doing with our partners, the Open Data Institute and The GovLab, is that the ability to access and use data to improve outcomes involves much more than technological tools and the data itself. It is also important to be able to leverage and share the experiences and practices that promote effective data collaboration and decision-making. This is especially true when it comes to working with governments, multilateral organizations, nonprofits, research institutions, and others who seek to open and reuse data to address important social issues, particularly those faced by developing countries.

Put another way, just having access to data and technology does not magically create value and improve outcomes. Making the most of open data and data collaboration requires thinking about how an organization’s leadership can commit to making data useful towards its mission, defining the questions it wants to answer with data, identifying the skills its team needs to use data, and determining how best to develop and establish trust among collaborators and communities served to derive more insight and benefit from data.

The Open Data for Social Impact Framework is a tool leaders can use to put data to work to solve the challenges most important to them. Recognizing that not all data can be made publicly accessible, we see the tremendous benefits that can come from advancing more open data, whether that takes shape as trusted data collaborations or truly open and public data. We use the phrase ‘social impact’ to mean a positive change towards addressing a societal problem, such as reducing carbon emissions, closing the broadband gap, building skills for jobs, and advancing accessibility and inclusion.

We believe in the limitless opportunities that opening, sharing, and collaborating around data can create to draw out new insights, make better decisions, and improve efficiencies when tackling some of the world’s most pressing challenges….(More)”.