Transparency of open data ecosystems in smart cities: Definition and assessment of the maturity of transparency in 22 smart cities


Paper by Martin Lnenicka et al: “This paper focuses on the issue of the transparency maturity of open data ecosystems seen as the key for the development and maintenance of sustainable, citizen-centered, and socially resilient smart cities. This study inspects smart cities’ data portals and assesses their compliance with transparency requirements for open (government) data. The expert assessment of 34 portals representing 22 smart cities, with 36 features, allowed us to rank them and determine their level of transparency maturity according to four predefined levels of maturity – developing, defined, managed, and integrated. In addition, recommendations for identifying and improving the current maturity level and specific features have been provided. An open data ecosystem in the smart city context has been conceptualized, and its key components were determined. Our definition considers the components of the data-centric and data-driven infrastructure using the systems theory approach. We have defined five predominant types of current open data ecosystems based on prevailing data infrastructure components. The results of this study should contribute to the improvement of current data ecosystems and build sustainable, transparent, citizen-centered, and socially resilient open data-driven smart cities…(More)”.
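
A minimal sketch (Python) of how a feature-based maturity assessment of this kind could be computed. The scoring rule and cut-off thresholds below are assumptions for illustration only; the paper's 36 transparency features and its actual rubric are not reproduced here, only the four maturity labels named in the abstract:

```python
# Illustrative sketch only: not the authors' actual scoring rubric.
# Assumes a portal is checked against a list of transparency features
# (the study uses 36) and mapped onto the four maturity levels
# named in the abstract: developing, defined, managed, integrated.

MATURITY_LEVELS = ["developing", "defined", "managed", "integrated"]

def maturity_level(feature_scores, thresholds=(0.25, 0.5, 0.75)):
    """Map the share of satisfied features to a maturity label.

    feature_scores: iterable of 0/1 flags, one per assessed feature.
    thresholds: hypothetical cut-offs between the four levels.
    """
    share = sum(feature_scores) / len(feature_scores)
    for level, cutoff in zip(MATURITY_LEVELS, thresholds):
        if share < cutoff:
            return level
    return MATURITY_LEVELS[-1]

# Example: a portal satisfying 20 of 36 assessed features
print(maturity_level([1] * 20 + [0] * 16))  # -> "managed"
```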

Governing AI to Advance Shared Prosperity


Chapter by Ekaterina Klinova: “This chapter describes a governance approach to promoting AI research and development that creates jobs and advances shared prosperity. Concerns over the labor-saving focus of AI advancement are shared by a growing number of economists, technologists, and policymakers around the world. They warn about the risk of AI entrenching poverty and inequality globally. Yet, translating those concerns into proactive governance interventions that would steer AI away from generating excessive levels of automation remains difficult and largely unattempted. Key causes of this difficulty arise from two types of sources: (1) insufficiently deep understanding of the full composition of factors giving AI R&D its present emphasis on labor-saving applications; and (2) lack of tools and processes that would enable AI practitioners and policymakers to anticipate and assess the impact of AI technologies on employment, wages and job quality. This chapter argues that addressing (2) will require creating worker-participatory means of differentiating between genuinely worker-benefiting AI and worker-displacing or worker-exploiting AI. To contribute to tackling (1), this chapter reviews AI practitioners’ motivations and constraints, such as relevant laws and market incentives, as well as less tangible but still highly influential constraining and motivating factors, including explicit and implicit norms in the AI field, visions of future societal order popular among the field’s members and ways that AI practitioners define goals worth pursuing and measure success. I highlight how each of these factors contributes meaningfully to giving AI advancement its excessive labor-saving emphasis and describe opportunities for governance interventions that could correct that overemphasis….(More)”.

Four ways we can use our collective imagination to improve how society works


Article by Geoff Mulgan: “In the first months of the pandemic there was evidence of a strong desire for transformational change in many countries. People wanted to use the crisis to deal with the big unresolved problems of climate change, inequality and much more, encouraged, for example, by the very obvious truth that the most essential jobs were often amongst the lowest paid and lowest status. That everyone was affected by the pandemic seemed likely to fuel a more collective spirit, a recognition of how much our lives are intertwined with those of millions of strangers.

Now much of that energy has gone. People are exhausted, expectations have fallen and a return to normality looks acceptable, however inadequate that normality might have been. War in Ukraine has reminded us just how easily the world can go into retreat and that basic values remain under threat. My hope, though, is that as the pandemic fades from view we will return to our shared need for radical imagination about the future, and the transformations ahead.

I have long believed that we have a major problem with imagination: that we can more easily imagine ecological apocalypse or technological advances than improvements in how our society works: better options for health, welfare or neighbourhoods a generation or two from now.

Some of the reasons for this problem are objective. The majority of people no longer expect their children to be better off than them. They have good reasons for their pessimism: stagnant incomes for much of the population, particularly since the financial crisis. But the causes of this pessimism also lie with institutions – our universities have become better at commenting on or analysing the present than designing the future. Our political parties have largely given up on long-term thinking, while our social movements are generally better at arguing against things than proposing. Amazingly, there are now no media outlets that promote new ideas: magazines and newspapers focus instead on commentary.

One symptom of this is how much public debate, even in its progressive forms, is dominated by quite old ideas. Take, for example, the circular economy. The main ideas were first proposed in the 1980s. They guided many projects (including ones I worked on) in the 1990s, got the backing of the Chinese Communist party nearly twenty years ago, and were then ably evangelized by people like Ellen MacArthur. Yet they’re still not wholly mainstream…(More)”.

Paradoxes of Media and Information Literacy


Open Access book by Jutta Haider, Olof Sundin: “Paradoxes of Media and Information Literacy contributes to ongoing conversations about control of knowledge and different ways of knowing. It does so by analysing why media and information literacy (MIL) is proposed as a solution for addressing the current information crisis.

Questioning why MIL is commonly believed to wield such power, the book throws into sharp relief several paradoxes that are built into common understandings of such literacies. Haider and Sundin take the reader on a journey across different fields of practice, research and policymaking, including librarianship, information studies, teaching and journalism, media and communication and the educational sciences. The authors also consider national information policy proposals and the recommendations of NGOs or international bodies, such as UNESCO and the OECD. Showing that MIL plays an active role in contemporary controversies, such as those on climate change or vaccination, Haider and Sundin argue that such controversies challenge existing notions of fact and ignorance, trust and doubt, and our understanding of information access and information control. The book thus argues for the need to unpack and understand the contradictions forming around these notions in relation to MIL, rather than attempting to arrive at a single, comprehensive definition.

Paradoxes of Media and Information Literacy combines careful analytical and conceptual discussions with an in-depth understanding of information practices and of the contemporary information infrastructure. It is essential reading for scholars and students engaged in library and information studies, media and communication, journalism studies and the educational sciences….(More)”.

Time to recognize authorship of open data


Nature Editorial: “At times, it seems there’s an unstoppable momentum towards the principle that data sets should be made widely available for research purposes (also called open data). Research funders all over the world are endorsing the open data-management standards known as the FAIR principles (which ensure data are findable, accessible, interoperable and reusable). Journals are increasingly asking authors to make the underlying data behind papers accessible to their peers. Data sets are accompanied by a digital object identifier (DOI) so they can be easily found. And this citability helps researchers to get credit for the data they generate.

But reality sometimes tells a different story. The world’s systems for evaluating science do not (yet) value openly shared data in the same way that they value outputs such as journal articles or books. Funders and research leaders who design these systems accept that there are many kinds of scientific output, but many reject the idea that there is a hierarchy among them.

In practice, those in powerful positions in science tend not to regard open data sets in the same way as publications when it comes to making hiring and promotion decisions or awarding memberships to important committees, or in national evaluation systems. The open-data revolution will stall unless this changes….

Universities, research groups, funding agencies and publishers should, together, start to consider how they could better recognize open data in their evaluation systems. They need to ask: how can those who have gone the extra mile on open data be credited appropriately?

There will always be instances in which researchers cannot be given access to human data. Data from infants, for example, are highly sensitive and need to pass stringent privacy and other tests. Moreover, making data sets accessible takes time and funding that researchers don’t always have. And researchers in low- and middle-income countries have concerns that their data could be used by researchers or businesses in high-income countries in ways that they have not consented to.

But crediting all those who contribute their knowledge to a research output is a cornerstone of science. The prevailing convention — whereby those who make their data open for researchers to use make do with acknowledgement and a citation — needs a rethink. As long as authorship on a paper is significantly more valued than data generation, this will disincentivize making data sets open. The sooner we change this, the better….(More)”.
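
As a side note on the editorial’s point that DOIs make data sets findable and citable, the sketch below (Python) shows one common way a dataset DOI can be resolved to citation metadata via standard DOI content negotiation; the DOI string used here is a hypothetical placeholder, not a reference to any real data set:

```python
# Illustrative sketch: resolving a dataset DOI to citable metadata via
# DOI content negotiation (the doi.org resolver can return CSL JSON).
# The DOI below is a hypothetical placeholder.
import requests

def fetch_citation_metadata(doi: str) -> dict:
    """Return CSL JSON metadata for a DOI, so the data set can be cited."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

meta = fetch_citation_metadata("10.1234/placeholder-dataset")  # hypothetical DOI
print(meta.get("title"), meta.get("author"))
```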

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group at the University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Research Handbook of Policy Design


Handbook edited by B. G. Peters and Guillaume Fontaine: “…The difference between policy design and policy making lies in the degree of encompassing consciousness involved in designing, which includes policy formulation, implementation and evaluation. Consequently there are differences in degrees of consciousness within the same kind of activity, from the simplest expression of “non-design”, which refers to the absence of clear intention or purpose, to “re-design”, which is the most common, incremental way to proceed, to “full design”, which suggests the attempt by government or some other controlling actor to control the whole process. There are also differences in kind, from program design (at the micro-level of intervention) to singular policy design, to meta-design when dealing with complex problems that require cross-sectorial coordination. Finally, there are different forms or expressions (technical, political, ideological) and different patterns (transfer, innovation, accident or experiment) of policy design.
Unlike other forms of design, such as engineering or architecture, policy design exhibits specific features because of the social nature of policy targeting and modulation, which involves humans as objects and subjects with their values, conflicts, and other characteristics (Peters, 2018, p. 5). Thus, policy design is the attempt to integrate different understandings of a policy problem with different conceptions of the policy instruments to be utilized, and the different values according to which a government assesses the outcomes pursued by this policy as expected, satisfactory, acceptable, and so forth. Those three components of design – causation, instruments and values – must then be combined to create a coherent plan for intervention. We will define this fourth component of design as “intervention”, meaning that there must be some strategic sense of how to make the newly designed policy work. This component requires not only an understanding of the specific policy being designed but also how that policy will mesh with the array of policies already operating. Thus, there is the need to think about some “meta-design” issues about coordination and coherence, as well as the usual challenges of implementation…(More)”.

From “democratic erosion” to “a conversation among equals”


Paper by Roberto Gargarella: “In recent years, legal and political doctrinaires have been confusing the democratic crisis that is affecting most of our countries with a mere crisis of constitutionalism (i.e., a crisis in the way our system of “checks and balances” works). Predictably, the result of this “diagnostic error” is that legal and political doctrinaires began to propose the wrong remedies for the democratic crisis. Usually, they began advocating for the “restoration” of the old system of “internal controls” or “checks and balances”, without paying attention to the democratic aspects of the crisis that would require, instead, the strengthening of “popular” controls and participatory mechanisms that favored the gradual emergence of a “conversation among equals”. In this work, I focus my attention on certain institutional alternatives – citizens’ assemblies and the like – that may help us overcome the present democratic crisis. In particular, I examine the recent practice of citizens’ assemblies and evaluate their functioning…(More)”.

Digital Responsibility


Paper by Matthias Trier et al: “The transformative effects of digital technologies require researchers to understand the long-term consequences of the digital transformation process and to contribute to its design in a responsible way. This important challenge is addressed by the emerging concept of Digital Responsibility (DR). While the concept is increasingly recognized by political and organizational groups, the academic discussion is still not systematically evolving and the core elements of DR are not yet integrated into a coherent structured framework. This article presents a first systematic overview of the relevant levels of DR (personal, corporate and societal), its core principles and the key research themes for business & information systems researchers that relate to important questions of digital responsibility….(More)”.