World stumbling zombie-like into a digital welfare dystopia, warns UN human rights expert


UN Press Release: “A UN human rights expert has expressed concerns about the emergence of the “digital welfare state”, saying that all too often the real motives behind such programs are to slash welfare spending, set up intrusive government surveillance systems and generate profits for private corporate interests.

“As humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia,” the Special Rapporteur on extreme poverty and human rights, Philip Alston, says in a report to be presented to the General Assembly on Friday.

The digital welfare state is commonly presented as an altruistic and noble enterprise designed to ensure that citizens benefit from new technologies, experience more efficient government, and enjoy higher levels of well-being. But, Alston said, the digitization of welfare systems has very often been used to promote deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioural modification goals, the imposition of stronger sanctions regimes, and a complete reversal of the traditional notion that the state should be accountable to the individual….(More)”.

Merging the ‘Social’ and the ‘Public’: How Social Media Platforms Could Be a New Public Forum


Paper by Amélie Pia Heldt: “When Facebook and other social media sites announced in August 2018 they would ban extremist speakers such as conspiracy theorist Alex Jones for violating their rules against hate speech, reactions were strong. Critics either dismissed such measures as a drop in the bucket with regard to toxic and harmful speech online, or accused Facebook & Co. of penalizing only right-wing speakers, hence censoring political opinions and joining some type of anti-conservative media conglomerate. This anecdote above all raised the question: Should someone like Alex Jones be excluded from Facebook? And the question of “should” includes that of “may Facebook exclude users for publishing political opinions?”.

As social media platforms take up more and more space in our daily lives, enabling not only individual and mass communication but also offering payment and other services, there is still a need for a common understanding of the social and communicative space they create in cyberspace. By common I mean on a global scale, since this is how most social media platforms operate or aim to (see Facebook’s mission statement: “bring the world closer together”). While in social science a new digital sphere was proclaimed and social media platforms can be categorized as “personal publics”, there is no such denomination in legal scholarship that is globally agreed upon. Public space can be defined as a free space between the state and society, a space for freedom. Generally, it is where individuals are protected by their fundamental rights while operating in the public sphere. However, terms like forum, space, and sphere may not be used as synonyms in this discussion.

Under the First Amendment, the public forum doctrine mainly serves the purposes of democracy and truth and could be perpetuated in communication services that promote direct dialogue between the state and citizens. But where and by whom is the public forum guaranteed in cyberspace? The notion of the public space in cyberspace is central, and it constantly evolves as platforms broaden their services, hence it needs to be examined more closely. When looking at social media platforms, we need to take into account how they moderate speech and, subsequently, how they influence social processes. If representative democracies are built on the grounds of deliberation, it is essential to safeguard the room for public discourse to actually happen. Are constitutional concepts for the analog space transferable into the digital? Should private actors such as social media platforms be bound by freedom of speech without being considered state actors? And, accordingly, create a new type of public forum?

The goal of this article is to provide answers to the questions mentioned….(More)”.

Human Rights in the Age of Platforms


Book edited by Rikke Frank Jørgensen: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, last April it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are anything but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing users’ newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”

Three Big Things: The Most Important Forces Shaping the World


Essay by Morgan Housel: “An irony of studying history is that we often know exactly how a story ends, but have no idea where it began…

Nothing is as influential as World War II has been. But there are a few other Big Things worth paying attention to, because they’re the root influencers of so many other topics.

The three big ones that stick out are demographics, inequality, and access to information.

There are hundreds of forces shaping the world not mentioned here. But I’d argue that many, even most, are derivatives of those three.

Each of these Big Things will have a profound impact on the coming decades because they’re both transformational and ubiquitous. They impact nearly everyone, albeit in different ways. With that comes the reality that we don’t know exactly how their influence will unfold. No one in 1945 knew exactly how World War II would go on to shape the world, only that it would in extreme ways. But we can guess some of the likeliest changes.

3. Access to information closes gaps that used to create a social shield of ignorance.

Carole Cole disappeared in 1970 after running away from a juvenile detention center in Texas. She was 17.

A year later the body of an unidentified murder victim was found in Louisiana. It was Carole, but Louisiana police had no idea. They couldn’t identify her. Carole’s disappearance went cold, as did the case of the unidentified body.

Thirty-four years later Carole’s sister posted messages on Craigslist asking for clues into her sister’s disappearance. At nearly the same time, a sheriff’s department in Louisiana made a Facebook page asking for help identifying the Jane Doe body found 34 years before.

Six days later, someone connected the dots between the two posts.

What stumped detectives for almost four decades was solved by Facebook and Craigslist in less than a week.

This kind of stuff didn’t happen even 10 years ago. And we probably haven’t awoken to its full potential – good and bad.

The greatest innovation of the last generation has been the destruction of information barriers that used to keep strangers isolated from one another…(More)”

Why Trust Science?


Book by Naomi Oreskes: “Do doctors really know what they are talking about when they tell us vaccines are safe? Should we take climate experts at their word when they warn us about the perils of global warming? Why should we trust science when our own politicians don’t? In this landmark book, Naomi Oreskes offers a bold and compelling defense of science, revealing why the social character of scientific knowledge is its greatest strength—and the greatest reason we can trust it.

Tracing the history and philosophy of science from the late nineteenth century to today, Oreskes explains that, contrary to popular belief, there is no single scientific method. Rather, the trustworthiness of scientific claims derives from the social process by which they are rigorously vetted. This process is not perfect—nothing ever is when humans are involved—but she draws vital lessons from cases where scientists got it wrong. Oreskes shows how consensus is a crucial indicator of when a scientific matter has been settled, and when the knowledge produced is likely to be trustworthy.

Based on the Tanner Lectures on Human Values at Princeton University, this timely and provocative book features critical responses by climate experts Ottmar Edenhofer and Martin Kowarsch, political scientist Jon Krosnick, philosopher of science Marc Lange, and science historian Susan Lindee, as well as a foreword by political theorist Stephen Macedo….(More)”.

Individualism and Governance of the Commons


Paper by Meina Cai et al: “Individualistic cultures are associated with economic growth and development. Do they also improve governance of the commons? According to the property rights literature, conservation is more likely when the institutions of property arise from a spontaneous process in response to local problems. We argue that individualistic cultures contribute to conservation by encouraging property rights entrepreneurship: efforts by individuals and communities to resolve commons dilemmas, including their investment of resources in securing political recognition of spontaneously arising property rights. We use the theory to explain cross-country rates of change in forest cover. Using both subjective measures of individualistic values and the historical prevalence of disease as instruments for individualism, we find that individualistic societies have higher reforestation rates than collectivist ones, consistent with our theory…(More)”.

Big Data Analytics in Healthcare


Book edited by Anand J. Kulkarni, Patrick Siarry, Pramod Kumar Singh, Ajith Abraham, Mengjie Zhang, Albert Zomaya and Fazle Baki: “This book includes state-of-the-art discussions on various issues and aspects of the implementation, testing, validation, and application of big data in the context of healthcare. The concept of big data is revolutionary, both from a technological and societal well-being standpoint. This book provides a comprehensive reference guide for engineers, scientists, and students studying/involved in the development of big data tools in the areas of healthcare and medicine. It also features a multifaceted and state-of-the-art literature review on healthcare data, its modalities, complexities, and methodologies, along with mathematical formulations.

The book is divided into two main sections, the first of which discusses the challenges and opportunities associated with the implementation of big data in the healthcare sector. In turn, the second addresses the mathematical modeling of healthcare problems, as well as current and potential future big data applications and platforms…(More)”.

Identifying Citizens’ Needs by Combining Artificial Intelligence (AI) and Collective Intelligence (CI)


Report by Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst: “Around the world, public leaders are seeking new ways to better understand the needs of their citizens, and subsequently improve governance and how we solve public problems. Proposed approaches to changing public engagement tend to focus on leveraging two innovations. The first involves artificial intelligence (AI), which offers unprecedented abilities to quickly process vast quantities of data to deepen insights into public needs. The second is collective intelligence (CI), which provides means for tapping into the “wisdom of the crowd.” Both have strengths and weaknesses, but little is known about how combining the two could address their weaknesses while radically transforming how we meet public demands for more responsive governance.

Today, The GovLab is releasing a new report, Identifying Citizens’ Needs By Combining AI and CI, which seeks to identify and assess how institutions might responsibly experiment in how they engage with citizens by leveraging AI and CI together.

The report, authored by Stefaan G. Verhulst, Andrew J. Zahuranec, and Andrew Young, builds upon an initial examination of the intersection of AI and CI conducted in the context of the MacArthur Foundation Research Network on Opening Governance. …

The report features five in-depth case studies and an overview of eight additional examples from around the world on how AI and CI together can help to: 

  • Anticipate citizens’ needs and expectations through cognitive insights and process automation and pre-empt problems through improved forecasting and anticipation;
  • Analyze large volumes of citizen data and feedback, such as identifying patterns in complaints;
  • Allow public officials to create highly personalized campaigns and services; or
  • Empower government service representatives to deliver relevant actions….(More)”.
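The second bullet, identifying patterns in large volumes of citizen complaints, is concrete enough to sketch. Below is a minimal, hypothetical illustration in Python: the complaint texts, stopword list, and keyword-counting approach are our own assumptions for illustration, not drawn from the report.

```python
from collections import Counter

# Hypothetical citizen complaints (illustrative only, not from the report)
complaints = [
    "Streetlight on Elm Street has been broken for weeks",
    "Broken streetlight near the park, very dark at night",
    "Garbage collection missed our block again",
    "Missed garbage pickup two weeks in a row",
    "Pothole on Main Street damaged my tire",
]

# A toy stopword list; a real system would use a proper NLP pipeline
STOPWORDS = {"on", "the", "has", "been", "for", "near", "very", "at",
             "our", "again", "two", "in", "a", "my", "row", "weeks"}

def keyword_counts(texts):
    """Count in how many complaints each keyword appears."""
    counts = Counter()
    for text in texts:
        # One set per complaint, so a word counts once per complaint
        words = {w.strip(",.").lower() for w in text.split()}
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

counts = keyword_counts(complaints)
# Keywords shared by more than one complaint suggest a recurring issue
patterns = sorted(w for w, c in counts.items() if c > 1)
print(patterns)  # ['broken', 'garbage', 'missed', 'street', 'streetlight']
```

Even a crude frequency pass like this surfaces candidate clusters (broken streetlights, missed garbage pickup) that a human reviewer, the CI side of the pairing, could then verify and prioritize; a production system would more likely use topic modeling or embedding-based clustering.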

Five Ethical Principles for Humanitarian Innovation


Peter Batali, Ajoma Christopher & Katie Drew in the Stanford Social Innovation Review: “…Based on this experience, UNHCR and CTEN developed a pragmatic, refugee-led, “good enough” approach to experimentation in humanitarian contexts. We believe a wide range of organizations, including grassroots community organizations and big-tech multinationals, can apply this approach to ensure that the people they aim to help hold the reins of the experimentation process.

1. Collaborate Authentically and Build Intentional Partnerships

Resource and information asymmetry are inherent in the humanitarian system. Refugees have long been constructed as “victims” in humanitarian response, waiting for “salvation” from heroic humanitarians. Researcher Matthew Zagor describes this construct as follows: “The genuine refugee … is the passive, coerced, patient refugee, the one waiting in the queue—the victim, anticipating our redemptive touch, defined by the very passivity which in our gaze both dehumanizes them, in that they lack all autonomy in our eyes, and romanticizes them as worthy in their potentiality.”

Such power dynamics make authentic collaboration challenging….

2. Avoid Technocratic Language

Communication can divide us or bring us together. Using exclusive or “expert” terminology (terms like “ideation,” “accelerator,” and “design thinking”) or language that reinforces power dynamics or assigns an outsider role (such as “experimenting on”) can alienate community participants. Organizations should aim to use inclusive language that everyone understands, as well as set a positive and realistic tone. Communication should focus on the need to co-develop solutions with the community, and the role that testing or trying something new can play….

3. Don’t Assume Caution Is Best

Research tells us that we feel more regret over actions that lead to negative outcomes than we do over inactions that lead to the same or worse outcomes. As a result, we tend to perceive and weigh action and inaction unequally. So while humanitarian organizations frequently consider the implications of our actions and the possible negative outcome for communities, we don’t always consider the implications of doing nothing. Is it ethical to continue an activity that we know isn’t as effective as it could be, when testing small and learning fast could reap real benefits? In some cases, taking a risk might, in fact, be the least risky path of action. We need to always ask ourselves, “Is it really ethical to do nothing?”…

4. Choose Experiment Participants Based on Values

Many humanitarian efforts identify participants based on their societal role, vulnerability, or other selection criteria. However, these methods often lead to challenges related to incentivization—the need to provide things like tea, transportation, or cash payments to keep participants engaged. Organizations should instead consider identifying participants who demonstrate the values they hope to promote—such as collaboration, transparency, inclusivity, or curiosity. These community members are well-poised to promote inclusivity, model positive behaviors, and engage participants across the diversity of your community….

5. Monitor Community Feedback and Adapt

While most humanitarian agencies know they need to listen and adapt after establishing communication channels, the process remains notoriously challenging. One reason is that community members don’t always share their feedback on experimentation formally; feedback sometimes comes from informal channels or even rumors. Yet consistent, real-time feedback is essential to experimentation. Listening is the pressure valve in humanitarian experimentation; it allows organizations to adjust or stop an experiment if the community flags a negative outcome….(More)”.