Booklet by Mimi Onuoha and Diana Nucera: “…this booklet aims to fill the gaps in information about AI by creating accessible materials that inform communities and allow them to identify what their ideal futures with AI can look like. Although the contents of this booklet focus on demystifying AI, we find it important to state that the benefits of any technology should be felt by all of us. Too often, the challenges presented by new technology spell out yet another tale of racism, sexism, gender inequality, ableism, and lack of consent within digital culture.
The path to a fair future starts with the humans behind the machines, not the machines themselves. Self-reflection and a radical transformation of our relationships to our environment and each other are at the heart of combating structural inequality. But understanding what it takes to create a fair and just society is the first step. In creating this booklet, we start from the belief that equity begins with education…For those who wish to learn more about specific topics, we recommend looking at the table of contents and choosing sections to read. For more hands-on learners, we have also included a number of workbook activities that allow the material to be explored in a more active fashion.
We hope that this booklet inspires and informs those who are developing emerging technologies to reflect on how these technologies can impact our societies. We also hope that this booklet inspires and informs black, brown, indigenous, and immigrant communities to reclaim technology as a tool of liberation…(More)”.
Danny Lämmerhirt at Open Knowledge Foundation: “Citizen-generated data (CGD) expands what gets measured, how, and for what purpose. As the collection and engagement with CGD increases in relevance and visibility, public institutions can learn from existing initiatives about what CGD initiatives do, how they enable different forms of sense-making and how this may further progress around the Sustainable Development Goals.
Our report, as well as a guide for governments (find the laid-out version here, as well as a living document here), should help start conversations around the different approaches to doing and organising CGD. When CGD becomes ‘good enough’ depends on the purpose it is used for, but also on how CGD is situated in relation to other data.
As our work aims to be illustrative rather than comprehensive, we started with a list of over 230 projects that were associated with the term “citizen-generated data” on Google Search, using an approach known as “search as research” (Rogers, 2013). Starting from this list, we developed case studies on a range of prominent CGD examples.
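To make that compilation step concrete, here is a minimal sketch of how such a search-based candidate list might be assembled; the `run_search` helper and the query strings are hypothetical stand-ins, as the report does not document its actual tooling.

```python
# Sketch of the "search as research" compilation step (Rogers, 2013): build a
# de-duplicated candidate list of CGD projects from search results.
# `run_search` and the queries are hypothetical stand-ins, not the report's tooling.

from urllib.parse import urlparse

def run_search(query: str) -> list[dict]:
    """Hypothetical search client; replace with a real search API.
    Each result is expected to look like {"title": ..., "url": ...}."""
    return []

def compile_candidates(queries: list[str]) -> list[dict]:
    seen_domains, candidates = set(), []
    for query in queries:
        for result in run_search(query):
            domain = urlparse(result["url"]).netloc
            if domain not in seen_domains:  # skip projects found under several queries
                seen_domains.add(domain)
                candidates.append(result)
    return candidates

projects = compile_candidates(['"citizen-generated data"', '"citizen generated data" project'])
print(f"{len(projects)} candidate projects collected")
```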
The report identifies several benefits CGD can bring for implementing and monitoring the SDGs, underlining the importance for public institutions to further support these initiatives.
Figure 1: Illustration of tasks underpinning CGD initiatives and their workflows
Key findings:
Dealing with data is usually much more than ‘just producing’ data. CGD initiatives open up new types of relationships between individuals, civil society and public institutions. This includes local development and educational programmes, community outreach, and collaborative strategies for monitoring, auditing, planning and decision-making.
Generating data takes many shapes, from collecting new data in the field, to compiling, annotating, and structuring existing data to enable new ways of seeing things through data. Accessing and working with existing (government) data is often an important enabling condition for CGD initiatives to start in the first place.
CGD initiatives can help gather data in regions that are otherwise unreachable. Some CGD approaches may provide updated and detailed data at lower cost and faster than official data collections.
Beyond filling data gaps, official measurements can be expanded, complemented, or cross-verified. This includes pattern and trend identification and the creation of baseline indicators for further research. CGD can help governments detect anomalies, test the accuracy of existing monitoring processes, understand the context around phenomena, and initiate their own follow-up data collections (a rough sketch of such cross-verification follows after this list).
CGD can inform several actions to achieve the SDGs. Beyond education, community engagement and community-based problem solving, this includes baseline research, planning and strategy development, allocation and coordination of public and private programs, as well as improvement to public services.
CGD must be ‘good enough’ for different (and varying) purposes. Governments already develop pragmatic ways to negotiate and assess the usefulness of data for a specific task. CGD may be particularly useful when agencies have a clear remit or responsibility to manage a problem.
Data quality can be comparable to that of official data collections, provided tasks are sufficiently easy to conduct, tools are of high quality, and adequate training, resources and quality assurance are in place….(More)”.
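As a rough illustration of the cross-verification point above, the sketch below flags places where a citizen-generated count diverges sharply from the official one. The field names, figures and the 25% threshold are invented for illustration; the report prescribes no specific method.

```python
# Sketch: cross-verifying official statistics against citizen-generated data.
# Field names, figures, and the 25% divergence threshold are all illustrative
# assumptions; the report does not prescribe a specific method.

official = {"district_a": 120, "district_b": 80, "district_c": 45}   # e.g. reported water points
citizen  = {"district_a": 115, "district_b": 79, "district_c": 90}   # e.g. points mapped by volunteers

def flag_divergence(official, citizen, threshold=0.25):
    flags = []
    for region in official.keys() & citizen.keys():
        base = official[region]
        if base == 0:
            continue  # avoid dividing by zero; treat separately in practice
        rel_diff = abs(citizen[region] - base) / base
        if rel_diff > threshold:
            flags.append((region, rel_diff))
    return flags

for region, diff in flag_divergence(official, citizen):
    print(f"{region}: citizen and official counts diverge by {diff:.0%}; worth a follow-up")
```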
“Polls suggest that governments across the world face high levels of citizen dissatisfaction, and low levels of citizen trust. The 2017 Edelman Trust Barometer found, for instance, that only 43% of those surveyed trust Canada’s government. Only 15% of those surveyed trust government in South Africa, and levels are low in other countries too—including Brazil (at 24%), South Korea (28%), the United Kingdom (36%), Australia, Japan, and Malaysia (37%), Germany (38%), Russia (45%), and the United States (47%). Similar surveys find trust in government averaging only 40-45% across member countries of the Organization for Economic Cooperation and Development (OECD), and suggest that as few as 31% and 32% of Nigerians and Liberians trust government.
There are many reasons why trust in government is deficient in so many countries, and these reasons differ from place to place. One common factor across many contexts, however, is a lack of confidence that governments can or will address key policy challenges faced by citizens.
Studies show that this confidence deficiency stems from citizen observations or experiences with past public policy failures, which promote jaundiced views of their public officials’ capabilities to deliver. Put simply, citizens lose faith in government when they observe government failing to deliver on policy promises, or to ‘get things done’. Incidentally, studies show that public officials also often lose faith in their own capabilities (and those of their organizations) when they observe, experience or participate in repeated policy implementation failures. Put simply, again, these public officials lose confidence in themselves when they repeatedly fail to ‘get things done’.
I call this the ‘public policy futility’ trap: past public policy failure leads to a lack of confidence in the potential of future policy success, which feeds actual public policy failure, which generates more questions of confidence, in a vicious self-fulfilling prophecy. I believe that many governments—and public policy practitioners working within governments—are caught in this trap, and just don’t believe that they can muster the kind of public policy responses needed by their citizens.
Along with my colleagues at the Building State Capability (BSC) program, I believe that many policy communities are caught in this trap, to some degree or another. Policymakers in these communities keep coming up with ideas, and political leaders keep making policy promises, but no one really believes the ideas will solve the problems that need solving or produce the outcomes and impacts that citizens need. Policy promises under such circumstances center on doing what policymakers are confident they can actually implement: like producing research and position papers and plans, or allocating inputs toward the problem (in a budget, for instance), or sponsoring visible activities (holding meetings or engaging high-profile ‘experts’ for advice), or producing technical outputs (like new organizations, or laws). But they hold back from promising real solutions to real problems, as they know they cannot really implement them (given past political opposition, perhaps, or the experience of seemingly intractable coordination challenges, or cultural pushback, and more)….(More)”.
Paper by Bram Klievink, Haiko van der Voort and Wijnand Veeneman: “Driven by the technological capabilities that ICTs offer, data enable new ways to generate value for both society and the parties that own or offer the data. This article looks at the idea of data collaboratives as a form of cross-sector partnership to exchange and integrate data and data use to generate public value. The concept thereby bridges data-driven value creation and collaboration, both current themes in the field.
To understand how data collaboratives can add value in a public governance context, we conducted an exploratory, qualitative longitudinal case study of an infomobility platform. We investigated the ability of a data collaborative to produce results while facing significant challenges and tensions between the goals of parties, each facing the conflicting objectives of retaining control whilst allowing for generativity. Taken together, the literature and case study findings help us to understand the emergence and viability of data collaboratives. Although limited by this study’s explorative nature, we find that conditions such as a prior history of collaboration and supportive rules of the game are key to the emergence of collaboration. Positive feedback between trust and the collaboration process can institutionalise the collaborative, which helps it survive if conditions change for the worse….(More)”.
Book by Daniel Neyland: “This open access book begins with an algorithm: a set of IF…THEN rules used in the development of a new, ethical, video surveillance architecture for transport hubs. Readers are invited to follow the algorithm over three years, charting its everyday life. Questions of ethics, transparency, accountability and market value must be grasped by the algorithm in a series of ever more demanding forms of experimentation. Here the algorithm must prove its ability to get a grip on everyday life if it is to become an ordinary feature of the settings where it is being put to work. Through investigating the everyday life of the algorithm, the book opens a conversation with existing social science research that tends to focus on the power and opacity of algorithms. In this book we have unique access to the algorithm’s design, development and testing, but can also bear witness to its fragility and dependency on others….(More)”.
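The book characterises the algorithm only at the level of IF…THEN rules, so the snippet below is a purely hypothetical illustration of what one such rule might look like in a transport-hub setting; the event names and the 300-second threshold are invented, not taken from the book.

```python
# Purely hypothetical illustration of an IF...THEN rule of the kind the book
# describes; the event names and the 300-second threshold are invented here.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str                 # e.g. "luggage", "person"
    seconds_stationary: int   # how long the object has not moved
    near_owner: bool          # is a plausible owner within range?

def abandoned_luggage_alert(obj: TrackedObject) -> bool:
    # IF a luggage item has been stationary too long AND no owner is nearby,
    # THEN raise an alert for a human operator to review.
    return obj.kind == "luggage" and obj.seconds_stationary > 300 and not obj.near_owner

print(abandoned_luggage_alert(TrackedObject("luggage", 420, near_owner=False)))  # True
```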
Cesar Hidalgo at Scientific American: “Nearly 30 years ago, Paul Romer published a paper exploring the economic value of knowledge. In that paper, he argued that, unlike the classical factors of production (capital and labor), knowledge was a “non-rival good.” This meant that it could be shared infinitely, and thus, it was the only thing that could grow in per-capita terms.
Romer’s work was recently recognized with the Nobel Prize, even though it was just the beginning of a longer story. Knowledge could be infinitely shared, but did that mean it could go everywhere? Soon after Romer’s seminal paper, Adam Jaffe, Manuel Trajtenberg and Rebecca Henderson published a paper on the geographic diffusion of knowledge. Using a statistical technique called matching, they identified a “twin” for each patent (that is, a patent filed at the same time and making similar technological claims).
Then, they compared the citations received by each patent and its twin. Compared with their twins, patents received almost four more citations from other patents originating in the same city than from those originating elsewhere. Romer was right in that knowledge could be infinitely shared, but also, knowledge had difficulty travelling far….
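To see the logic of the matching technique, consider this toy sketch: each patent is paired with a “twin” filed in the same year and technology class, and the share of citations coming from the patent’s own city is then compared across the pair. The data and fields are invented; the real study drew on far richer patent metadata.

```python
# Toy sketch of the matching technique: pair each patent with a "twin" filed
# in the same year and technology class, then compare what share of each
# patent's citations come from its own city. All data here are invented.

patents = [
    {"id": "P1", "year": 1990, "tech_class": "optics", "city": "Boston",
     "citing_cities": ["Boston", "Boston", "Austin", "Boston"]},
    {"id": "P2", "year": 1990, "tech_class": "optics", "city": "Denver",
     "citing_cities": ["Boston", "Seattle"]},
]

def find_twin(patent, pool):
    # A twin is a different patent sharing filing year and technology class.
    for other in pool:
        if (other["id"] != patent["id"]
                and other["year"] == patent["year"]
                and other["tech_class"] == patent["tech_class"]):
            return other
    return None

def same_city_share(patent):
    # Share of a patent's citations that originate in the patent's own city.
    cites = patent["citing_cities"]
    return sum(c == patent["city"] for c in cites) / len(cites) if cites else 0.0

patent = patents[0]
twin = find_twin(patent, patents)
print(same_city_share(patent), same_city_share(twin))  # 0.75 vs 0.0: citations cluster locally
```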
What will the study of knowledge bring us next? Will we get to a point at which we will measure Gross Domestic Knowledge as accurately as we measure Gross Domestic Product? Will we learn how to engineer knowledge diffusion? Will knowledge continue to concentrate in cities? Or will it finally break the shackles of society and spread to every corner of the world? The only thing we know for sure is that the study of knowledge is an exciting journey. The lowest-hanging fruit may have already been picked, but the tree is still filled with fruits and flavors. Let’s climb it and explore….(More)”
Paper by Cass R. Sunstein: “In 2015, the United States government imposed 9.78 billion hours of paperwork burdens on the American people. Many of these hours are best categorized as “sludge,” reducing access to important licenses, programs, and benefits. Because of the sheer costs of sludge, rational people are effectively denied life-changing goods and services; the problem is compounded by the existence of behavioral biases, including inertia, present bias, and unrealistic optimism. In principle, a serious deregulatory effort should be undertaken to reduce sludge, through automatic enrollment, greatly simplified forms, and reminders. At the same time, sludge can promote legitimate goals.
First, it can protect program integrity, which means that policymakers might have to make difficult tradeoffs between (1) granting benefits to people who are not entitled to them and (2) denying benefits to people who are entitled to them. Second, it can overcome impulsivity, recklessness, and self-control problems. Third, it can prevent intrusions on privacy. Fourth, it can serve as a rationing device, ensuring that benefits go to people who most need them. In most cases, these defenses of sludge turn out to be more attractive in principle than in practice.
For sludge, a form of cost-benefit analysis is essential, and it will often argue in favor of a neglected form of deregulation: sludge reduction. For both public and private institutions, “Sludge Audits” should become routine. Various suggestions are offered for new action by the Office of Information and Regulatory Affairs, which oversees the Paperwork Reduction Act; for courts; and for Congress…(More)”.
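A “Sludge Audit” of a single form can come down to back-of-the-envelope arithmetic like the following; every figure here is invented for illustration, and none comes from Sunstein’s paper.

```python
# Back-of-the-envelope "Sludge Audit": value the paperwork burden of one form
# and set it against the benefits forgone by applicants who give up.
# All numbers are invented for illustration.

applicants_per_year = 50_000
hours_per_form = 4.0
hourly_time_value = 27.0       # assumed $/hour value of applicants' time
abandonment_rate = 0.12        # assumed share who give up and forgo the benefit
benefit_value = 3_000.0        # assumed value of the benefit per applicant

time_cost = applicants_per_year * hours_per_form * hourly_time_value
forgone_benefits = applicants_per_year * abandonment_rate * benefit_value

print(f"Annual time cost of the form:         ${time_cost:,.0f}")
print(f"Benefits forgone through abandonment: ${forgone_benefits:,.0f}")
# 50,000 * 4 * $27 = $5.4m of time; 6,000 abandoned claims * $3,000 = $18m forgone.
```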
Prediction by Geoff Mulgan, Eva Grobbink and Vincent Straub: “The USSR’s launch of the Sputnik 1 satellite in 1957 was a major psychological blow to the United States. The US had believed it was technologically far ahead of its rival, but was confronted with proof that the USSR was pulling ahead in some fields. After a bout of soul-searching the country responded with extraordinary vigour, massively increasing investment in space technologies and promising to put a man on the Moon by the end of the 1960s.
In 2019, China’s success in smart cities could prompt a similar “Sputnik Moment” for the rest of the world. It may not be as dramatic as that of 1957. But unlike beeping satellites and Moon landings, it could be coming to a town near you….
The concept of a “smart city” has been around for several decades, often associated with hype, grandiose failures, and an overemphasis on hardware rather than people (Nesta has previously written on how we can rethink smart cities and ensure digital innovation realises the potential of technology and people). But various technologies are now coming of age which bring the vision of a smart city closer to fruition. China is at the forefront, investing heavily in sensors and infrastructures, and its ET City Brain project shows just how far the country’s thinking has progressed.
First launched in September 2016, ET City Brain is a collaboration between Chinese technology giant Alibaba and several cities. It was first trialled in Hangzhou, the hometown of Alibaba’s executive chairman, Jack Ma, but has since expanded to other Chinese cities. Earlier this year, Kuala Lumpur became the first city outside of China to import the ET City Brain model.
The ET City Brain system gathers large amounts of data (including logs, videos, and data streams) from sensors. These are then processed by algorithms in supercomputers and fed back into control centres around the city for administrators to act on—in some cases, automation means the system works without any human intervention at all.
So far, the project has been used to monitor congestion in Hangzhou, improve the response of emergency services in Guangzhou, and detect traffic accidents in Suzhou. In Hangzhou, Alibaba was given control of 104 traffic light junctions in the city’s Xiaoshan district and tasked with managing traffic flows. By combining mass video surveillance with live data from public transportation systems, ET City Brain was able to autonomously change traffic lights so that emergency vehicles could travel to accident scenes without interruption. As a result, arrival times for ambulances improved by 49 percent….(More)”.
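The article stops at this level of description, but the control loop it implies (sensor detections in, signal changes out) can be caricatured in a few lines. Everything below, from the function names to the routing logic, is an assumption rather than anything Alibaba has published.

```python
# Caricature of the sensor-to-signal control loop the article describes:
# when an emergency vehicle is detected, hold the lights along its route green.
# The article gives no implementation detail; this whole sketch is invented.

def plan_route(origin: str, destination: str) -> list[str]:
    # Hypothetical stand-in for a routing service.
    return ["junction_12", "junction_13", "junction_14"]

def control_loop(detections: list[dict], lights: dict[str, str]) -> dict[str, str]:
    for event in detections:
        if event["type"] == "emergency_vehicle":
            for junction in plan_route(event["position"], event["destination"]):
                lights[junction] = "green"  # clear the corridor ahead of the vehicle
    return lights

lights = {"junction_12": "red", "junction_13": "red", "junction_14": "green"}
detections = [{"type": "emergency_vehicle", "position": "junction_11", "destination": "hospital_3"}]
print(control_loop(detections, lights))  # all three junctions switched to green
```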
Peter Andras et al in IEEE Technology and Society Magazine: “Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game Go (generated by DeepMind’s AlphaGo Zero) is an impressive example of an artificial intelligence system calculating results that even a human expert for the game can hardly retrace. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?…(More)”.
Blog Post by Beth Noveck: “…The Inter-American Development Bank (IADB) has published an important, practical and prescriptive report with recommendations for every sector of society from government to individuals on innovative and effective approaches to combatting corruption. While focused on Latin America, the report’s proposals, especially those on the application of new technology in the fight against corruption, are relevant around the world….
The recommendations about the use of new technologies, including big data, blockchain and collective intelligence, are drawn from an effort undertaken last year by the Governance Lab at New York University’s Tandon School of Engineering to crowdsource such solutions and advice on how to implement them from a hundred global experts. (See the Smarter Crowdsourcing against Corruption report here.)…
Big data, when published as open data, namely in a form that can be re-used without legal or technical restriction and in a machine-readable format that computers can analyze, is another tool in the fight against corruption. With machine-readable, big and open data, those outside of government can pinpoint and measure irregularities in government contracting, as Instituto Observ is doing in Brazil.
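Once contracting records are machine-readable, even a crude screen can surface leads worth investigating; the sketch below flags awards that attracted a single bidder at a price well above the going rate. All field names and thresholds are assumptions, not a real open-contracting schema.

```python
# Sketch: screening machine-readable procurement records for red flags.
# Field names and thresholds are assumptions, not a real open-contracting schema.

from statistics import median

contracts = [
    {"id": "C1", "category": "road_repair", "bidders": 5, "price": 1.00e6},
    {"id": "C2", "category": "road_repair", "bidders": 1, "price": 2.40e6},
    {"id": "C3", "category": "road_repair", "bidders": 4, "price": 1.10e6},
]

def red_flags(contracts, price_ratio=1.5):
    by_category = {}
    for c in contracts:
        by_category.setdefault(c["category"], []).append(c["price"])
    flags = []
    for c in contracts:
        typical = median(by_category[c["category"]])
        if c["bidders"] == 1 and c["price"] > price_ratio * typical:
            flags.append(c["id"])  # single bidder AND well above the going rate
    return flags

print(red_flags(contracts))  # ['C2']
```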
Opening up judicial data, such as information about case processing times, judges’ and prosecutors’ salaries, and information about selection processes (including CVs, professional and academic backgrounds, and written and oral exam scores), provides activists and reformers with the tools to fight judicial corruption. The Civil Association for Equality and Justice (ACIJ), a non-profit advocacy group in Argentina, uses such open justice data in its Concursos Transparentes (Transparent Contests) to fight judicial corruption. Jusbrasil is a private open justice company also using open data to reform the courts in Brazil….(More)”