Co-Production of Public Services and Outcomes

Book by Elke Loeffler: “This book examines user and community co-production of public services and outcomes, currently one of the most discussed topics in the field of public management and policy. It considers co-production in a wide range of public services, with particular emphasis on health, social care and community safety, illustrated through international case studies in many of the chapters. This book draws on both quantitative and qualitative empirical research studies on co-production, and on the Governance International database of more than 70 international co-production case studies, most of which have been republished by the OECD. Academically rigorous and systematically evidence-based, the book incorporates many insights which have arisen from the extensive range of research projects and executive training programmes in co-production undertaken by the author. Written in a style which is easy and enjoyable to read, the book gives readers, both academics and practitioners, the opportunity to develop a creative understanding of the essence and implications of co-production….(More)”.

Cyber Republic

Book by George Zarkadakis: “Around the world, liberal democracies are in crisis. Citizens have lost faith in their government; right-wing nationalist movements frame the political debate. At the same time, economic inequality is increasing dramatically; digital technologies have created a new class of super-rich entrepreneurs. Automation threatens to transform the free economy into a zero-sum game in which capital wins and labor loses. But is this digital dystopia inevitable? In Cyber Republic, George Zarkadakis presents an alternative, outlining a plan for using technology to make liberal democracies more inclusive and the digital economy more equitable. Cyber Republic is no less than a guide for the coming Fourth Industrial Revolution and the post-pandemic world.

Zarkadakis, an expert on technology and management, explains how artificial intelligence, together with intelligent robotics, sophisticated sensors, communication networks, and big data, will fundamentally reshape the global economy; a new “intelligent machine age” will force us to adopt new forms of economic and political organization. He envisions a future liberal democracy in which intelligent machines facilitate citizen assemblies, helping to extend citizen rights, and blockchains and cryptoeconomics enable new forms of democratic governance and business collaboration. Moreover, the same technologies can be applied to scientific research and technological innovation. We need not fear automation, Zarkadakis argues; in a post-work future, intelligent machines can collaborate with humans to achieve the human goals of inclusivity and equality….(More)”.

Ethical issues of crowdsourcing in education

Paper by Katerina Zdravkova: “Crowdsourcing has become a fruitful solution for many activities, harnessing the combined power of the masses. Although not formally recognised as an educational model, the first steps towards embracing crowdsourcing as a form of formal learning and teaching have recently emerged. Before taking a dramatic step forward, it should be assessed whether the approach is feasible, sustainable and socially responsible.

One such initiative is enetCollect, which intends to lay the groundwork for responsible research and innovation and to actively implement crowdsourcing for language learning by all citizens, regardless of their diverse social, educational, and linguistic backgrounds.

In order to achieve these goals, a sound framework that embraces the ethical and legal considerations should be established. The framework is intended for all current and prospective creators of crowd-oriented educational systems. It incorporates the ethical issues affecting three stakeholders: collaborative content creators, prospective users, and the institutions intending to implement the approach for educational purposes. The proposed framework offers a practical solution intended to overcome the identified barriers, which might otherwise compromise its main educational goals. If carefully designed and implemented, crowdsourcing might become a very helpful and, at the same time, very reliable educational model….(More)”.

Automating Society Report 2020

Bertelsmann Stiftung: “When launching the first edition of this report, we decided to call it “Automating Society”, as automated decision-making (ADM) systems in Europe were mostly new, experimental, and unmapped – and, above all, the exception rather than the norm.

This situation has changed rapidly, as clearly shown by the more than 100 use cases of automated decision-making systems in 16 European countries compiled by a research network for the 2020 edition of the Automating Society report by Bertelsmann Stiftung and AlgorithmWatch. The report shows that even though algorithmic systems are increasingly being used by public administration and private companies, there is still a lack of transparency, oversight and competence.

The stubborn opacity surrounding the ever-increasing use of ADM systems has made it all the more urgent that we continue to increase our efforts. Therefore, we have added four countries (Estonia, Greece, Portugal, and Switzerland) to the 12 we already analyzed in the previous edition of this report, bringing the total to 16 countries. While far from exhaustive, this allows us to provide a broader picture of the ADM scenario in Europe. Considering the impact these systems may have on everyday life, and how profoundly they challenge our intuitions – if not our norms and rules – about the relationship between democratic governance and automation, we believe this is an essential endeavor….(More)”.

Algorithm Tips

About: “Algorithm Tips is here to help you start investigating algorithmic decision-making power in society.

This site offers a database of leads which you can search and filter. It’s a curated set of algorithms being used across the US government at the federal, state, and local levels. You can subscribe to alerts for when new algorithms matching your interests are found. For details on our curation methodology see here.

We also provide resources such as example investigations, methodological tips, and guidelines for public records requests related to algorithms.

Finally, we blog about some of the more interesting examples of algorithms we’ve uncovered in our research….(More)”.

Data to Go: The Value of Data Portability as a Means to Data Liquidity

Juliet McMurren and Stefaan G. Verhulst at Data & Policy: “If data is the “new oil,” why isn’t it flowing? For almost two decades, data management in fields such as government, healthcare, finance, and research has aspired to achieve a state of data liquidity, in which data can be reused where and when it is needed. For the most part, however, this aspiration remains unrealized. The majority of the world’s data continues to stagnate in silos, controlled by data holders and inaccessible to both its subjects and others who could use it to create or improve services, for research, or to solve pressing public problems.

Efforts to increase liquidity have focused on forms of voluntary institutional data sharing such as data pools or other forms of data collaboratives. Although useful, these arrangements can only advance liquidity so far. Because they vest responsibility and control over liquidity in the hands of data holders, their success depends on data holders’ willingness and ability to provide access to their data for the greater good. While that willingness exists in some fields, particularly medical research, a willingness to share data is much less likely where data holders are commercial competitors and data is the source of their competitive advantage. And even where willingness exists, the ability of data holders to share data safely, securely, and interoperably may not. Without a common set of secure, standardized, and interoperable tools and practices, the best that such bottom-up collaboration can achieve is a disconnected patchwork of initiatives, rather than the data liquidity proponents are seeking.

Data portability is one potential solution to this problem. As enacted in the EU General Data Protection Regulation (2018) and the California Consumer Privacy Act (2018), the right to data portability asserts that individuals have a right to obtain, copy, and reuse their personal data and transfer it between platforms or services. In so doing, it shifts control over data liquidity to data subjects, obliging data holders to release data whether or not it is in their commercial interests to do so. Proponents of data portability argue that, once data is unlocked and free to move between platforms, it can be combined and reused in novel ways and in contexts well beyond those in which it was originally collected, all while enabling greater individual control.

To date, however, arguments for the benefits of the right to data portability have typically failed to connect this rights-based approach with the larger goal of data liquidity and how portability might advance it. This failure to connect these principles and to demonstrate their collective benefits to data subjects, data holders, and society has real-world consequences. Without a clear view of what can be achieved, policymakers are unlikely to develop interventions and incentives to advance liquidity and portability, individuals will not exercise their rights to data portability, and industry will not experiment with use cases and develop the tools and standards needed to make portability and liquidity a reality.

Toward these ends, we have been exploring the current literature on data portability and liquidity, searching for lessons and insights into the benefits that can be unlocked when data liquidity is enabled through the right to data portability. Below we identify some of the greatest potential benefits for society, individuals, and data-holding organizations. These benefits are sometimes in conflict with one another, making the field a contentious one that demands further research on the trade-offs and empirical evidence of impact. In the final section, we also discuss some barriers and challenges to achieving greater data liquidity….(More)”.

Using Data and Respecting Users

“Three technical and legal approaches that create value from data and foster user trust” by Marshall Van Alstyne and Alisa Dagan Lenart: “Transaction data is like a friendship tie: both parties must respect the relationship, and if one party exploits it the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users or they will sour their ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is a serious cause for concern. Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices. This column provides a simple framework and three ways to respectfully improve data use….(More)”

Statistical illiteracy isn’t a niche problem. During a pandemic, it can be fatal

Article by Carlo Rovelli: “In the institute where I used to work a few years ago, a rare non-infectious illness hit five colleagues in quick succession. There was a sense of alarm, and a hunt for the cause of the problem. In the past the building had been used as a biology lab, so we thought that there might be some sort of chemical contamination, but nothing was found. The level of apprehension grew. Some looked for work elsewhere.

One evening, at a dinner party, I mentioned these events to a friend who is a mathematician, and he burst out laughing. “There are 400 tiles on the floor of this room; if I throw 100 grains of rice into the air, will I find,” he asked us, “five grains on any one tile?” We replied in the negative: there was only one grain for every four tiles: not enough to have five on a single tile.

We were wrong. We tried numerous times, actually throwing the rice, and there was always a tile with two, three, four, even five or more grains on it. Why? Why would grains “flung randomly” not arrange themselves into good order, equidistant from each other?

Because they land, precisely, by chance, and there are always disorderly grains that fall on tiles where others have already gathered. Suddenly the strange case of the five ill colleagues seemed very different. Five grains of rice falling on the same tile does not mean that the tile possesses some kind of “rice-attracting” force. Five people falling ill in a workplace did not mean that it must be contaminated. The institute where I worked was part of a university. We, know-all professors, had fallen into a gross statistical error. We had become convinced that the “above average” number of sick people required an explanation. Some had even gone elsewhere, changing jobs for no good reason.
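The mathematician’s dinner-party demonstration is easy to check numerically. The sketch below (Python, not from the article) assumes each grain lands on a uniformly random tile and repeats the throw many times, tallying the largest pile produced by each throw. Even though the average is only 0.25 grains per tile, almost every throw yields a tile holding two or more grains, and piles of three or more are routine.

```python
import random
from collections import Counter

def throw_rice(n_grains=100, n_tiles=400, rng=random):
    """Throw n_grains onto n_tiles uniformly at random; return per-tile counts."""
    return Counter(rng.randrange(n_tiles) for _ in range(n_grains))

def max_pile_distribution(n_throws=10_000, seed=42):
    """Repeat the experiment and tally the largest pile seen in each throw."""
    rng = random.Random(seed)
    maxima = Counter()
    for _ in range(n_throws):
        counts = throw_rice(rng=rng)
        maxima[max(counts.values())] += 1
    return maxima

maxima = max_pile_distribution()
# Share of throws in which at least one tile holds 2+ grains:
# clustering is the rule, not the exception, despite the low average.
share_2_plus = sum(v for k, v in maxima.items() if k >= 2) / sum(maxima.values())
print(dict(maxima), round(share_2_plus, 4))
```

In effect, each tile’s count is approximately Poisson-distributed with mean 0.25, but with 400 tiles there are 400 chances for an “unlikely” pile, so some tile is almost always crowded: the same logic that made the five illnesses unremarkable.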

Life is full of stories such as this. Insufficient understanding of statistics is widespread. The current pandemic has forced us all to engage in probabilistic reasoning, from governments having to recommend behaviour on the basis of statistical predictions, to people estimating the probability of catching the virus while taking part in common activities. Our extensive statistical illiteracy is today particularly dangerous.

We use probabilistic reasoning every day, and most of us have a vague understanding of averages, variability and correlations. But we use them in an approximate fashion, often making errors. Statistics sharpen and refine these notions, giving them a precise definition, allowing us to reliably evaluate, for instance, whether a medicine or a building is dangerous or not.

Society would gain significant advantages if children were taught the fundamental ideas of probability theory and statistics: in simple form in primary school, and in greater depth in secondary school….(More)”.

Understanding Bias in Facial Recognition Technologies

Paper by David Leslie: “Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities.

Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring.

In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. I also explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices, and describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies….(More)”.

Surveillance in South Africa: From Skin Branding to Digital Colonialism

Paper by Michael Kwet: “South Africa’s long legacy of racism and colonial exploitation continues to echo throughout post-apartheid society. For centuries, European conquerors marshaled surveillance as a means to control the black population. This began with the requirements for passes to track and control the movements, settlements, and labor of Africans. Over time, surveillance technologies evolved alongside complex shifts in power, culture, and the political economy.

This Chapter explores the evolution of surveillance regimes in South Africa. The first surveillance system in South Africa used paper passes to police slave movements and enforce labor contracts. To make the system more robust, various white authorities marked the skin of workers and livestock with symbols registered in paper databases. At the beginning of the twentieth century, fingerprinting was introduced in some areas to simplify and improve the passes. Under apartheid, the National Party aimed to streamline a national, all-seeing surveillance system. They imported computers to impose a regime of fixed race classification and keep detailed records about the African population. The legal apparatus of race-based surveillance was finally abolished during the transition to democracy. However, today a regime of Big Data, artificial intelligence, and centralized cloud computing has ushered in a new era of mass surveillance in South Africa.

South Africa’s surveillance regimes were always devised in collaboration with foreign colonizers, imperialists, intellectuals, and profit-seeking capitalists. In each era, the United States increased its participation. During the period of settler conquest, the US had a modest presence in Southern Africa. With the onset of the minerals revolution, US power expanded, and American capitalists and engineers with business interests in the mines pushed for an improved pass system to police African workers. Under apartheid, US corporations supplied the computer technology essential to apartheid governance and business enterprise. Finally, during the latter years of post-apartheid, Silicon Valley corporations, together with US surveillance agencies, began imposing surveillance capitalism on South African society. A new form of domination, digital colonialism, has emerged, vesting the United States with unprecedented control over South African affairs. To counter the force of digital colonialism, a new movement may emerge to push to redesign the digital ecosystem as a socialist commons based on open technology, socialist legal solutions, bottom-up democracy, and Internet decentralization….(More).”