Common-Knowledge Attacks on Democracy


Paper by Henry Farrell and Bruce Schneier: “Existing approaches to cybersecurity emphasize either international state-to-state logics (such as deterrence theory) or the integrity of individual information systems. Neither provides a good understanding of new “soft cyber” attacks that involve the manipulation of expectations and common understandings. We argue that scaling up computer security arguments to the level of the state, so that the entire polity is treated as an information system with associated attack surfaces and threat models, provides the best immediate way to understand these attacks and how to mitigate them.

We demonstrate systematic differences between how autocracies and democracies work as information systems, because they rely on different mixes of common and contested political knowledge. Stable autocracies will have common knowledge over who is in charge and their associated ideological or policy goals, but will generate contested knowledge over who the various political actors in society are, and how they might form coalitions and gain public support, so as to make it more difficult for coalitions to displace the regime. Stable democracies will have contested knowledge over who is in charge, but common knowledge over who the political actors are, and how they may form coalitions and gain public support. These differences are associated with notably different attack surfaces and threat models. Specifically, democracies are vulnerable to measures that “flood” public debate and disrupt shared decentralized understandings of actors and coalitions, in ways that autocracies are not….(More)”.
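By way of illustration (an editor’s gloss, not the authors’ formalism), the asymmetry Farrell and Schneier describe can be sketched as a toy threat model: the same knowledge domain sits on a democracy’s attack surface but not an autocracy’s. All names and categories below are assumptions made for the sketch.

```python
# Toy threat-model sketch of the paper's framing: regimes differ in which
# knowledge domains they hold in common, and flooding attacks matter most
# where shared (common) knowledge is load-bearing. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class PolityModel:
    regime_type: str
    common_knowledge: set = field(default_factory=set)
    contested_knowledge: set = field(default_factory=set)

    def exposure(self, target: str) -> str:
        """Rough exposure of a knowledge domain to a 'flooding' attack."""
        if target in self.common_knowledge:
            return "high: flooding degrades a shared understanding the polity relies on"
        if target in self.contested_knowledge:
            return "low: the polity already treats this domain as contested"
        return "out of scope for this toy model"

democracy = PolityModel(
    "democracy",
    common_knowledge={"who the political actors are", "how coalitions form"},
    contested_knowledge={"who is in charge"},
)
autocracy = PolityModel(
    "autocracy",
    common_knowledge={"who is in charge"},
    contested_knowledge={"who the political actors are", "how coalitions form"},
)

print(democracy.exposure("how coalitions form"))  # high for the democracy...
print(autocracy.exposure("how coalitions form"))  # ...low for the autocracy
```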

The Constitution of Knowledge


Jonathan Rauch at National Affairs: “America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. “These are truly uncharted waters for the country,” wrote Michael Hayden, former CIA director, in the Washington Post in April. “We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.” To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge….

The attack, Hayden noted, is on “the existence or relevance of objective reality itself.” But what is objective reality?

In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., “The sky is blue”). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.

It is worth noting, however, that the locution “validated in some way” hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?

This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.

There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, “When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country.”

As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, “[W]e have a risk of getting to a place where we don’t have shared public facts. A republic will not work if we don’t have shared facts.”

But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn’t and can’t all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.

Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular….(More)”.

Data Collaboration, Pooling and Hoarding under Competition Law


Paper by Bjorn Lundqvist: “In the Internet of Things era, devices will monitor and collect data, whilst device-producing firms will store, distribute, analyse and re-use data on a grand scale. A great deal of data analytics will be used to enable firms to understand and make use of the collected data. The infrastructure around the collected data is controlled, and access to the data flow is thus restricted on technical as well as legal grounds. Legally, the data are being obscured behind a thicket of property rights, including intellectual property rights. Therefore, there is no general “data commons” for everyone to enjoy.

If firms would like to combine data, they need to give each other access, whether by sharing, trading, or pooling the data. On the one hand, industry-wide pooling of data could increase the efficiency of certain services and contribute to the innovation of other services, e.g., self-driving cars or personalized medicine. On the other hand, firms combining business data may use the data not to advance their services or products, but to collude, to exclude competitors, or to abuse their market position. Indeed, by combining their data in a pool, they can gain market power and, hence, the ability to violate competition law. Moreover, we also see firms hoarding data from various sources, creating de facto data pools. This article will discuss what implications firms’ combining of data in pools might have for competition, and when competition law should apply. It develops the idea that data pools harbour great opportunities, whilst acknowledging that there are still risks to take into consideration and to regulate….(More)”.

Why We Need to Audit Algorithms


James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian and Vic Katyal at Harvard Business Review: “Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public, which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?

Indeed, some forward-thinking regulators are beginning to explore this possibility. For example, the EU’s General Data Protection Regulation (GDPR) requires that organizations be able to explain their algorithmic decisions. The city of New York recently assembled a task force to study possible biases in algorithmic decision systems. It is reasonable to anticipate that emerging regulations might be met with market pull for services involving algorithmic accountability.

So what might an algorithm auditing discipline look like? First, it should adopt a holistic perspective. Computer science and machine learning methods will be necessary, but likely not sufficient foundations for an algorithm auditing discipline. Strategic thinking, contextually informed professional judgment, communication, and the scientific method are also required.

As a result, algorithm auditing must be interdisciplinary in order for it to succeed….(More)”.
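By way of illustration, here is a minimal sketch of one quantitative test such an audit might include: the “four-fifths rule” used in US employment-discrimination analysis, applied to an algorithm’s decisions. This is an editor’s sketch, not the authors’ proposal; the data, names, and threshold below are illustrative assumptions.

```python
# A minimal sketch of one check an algorithm audit might run: the
# "four-fifths rule" disparate-impact ratio across two groups.
# All data and names here are hypothetical.

def selection_rate(decisions):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit sample: 1 = favorable algorithmic decision, 0 = unfavorable.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb threshold from US employment guidelines
    print("Flag for review: outcome rates differ materially across groups.")
```

A real audit would of course go far beyond a single metric, pairing statistical checks like this one with the contextual judgment and interdisciplinary scrutiny the authors call for.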

Reimagining Public-Private Partnerships: Four Shifts and Innovations in Sharing and Leveraging Private Assets and Expertise for the Public Good


Blog by Stefaan G. Verhulst and Andrew J. Zahuranec: “For years, public-private partnerships (PPPs) have promised to help governments do more for less. Yet, the discussion and experimentation surrounding PPPs often focus on outdated models and narratives, and the field of experimentation has not fully embraced the opportunities provided by an increasingly networked and data-rich private sector.

Private-sector actors (including businesses and NGOs) have expertise and assets that, if brought to bear in collaboration with the public sector, could spur progress in addressing public problems or providing public services. Challenges to date have largely involved the identification of effective and legitimate means for unlocking the public value of private-sector expertise and assets. Those interested in creating public value through PPPs are faced with a number of questions, including:

  • How do we broaden and deepen our understanding of PPPs in the 21st Century?
  • How can we innovate and improve the ways that PPPs tap into private-sector assets and expertise for the public good?
  • How do we connect actors in the PPP space with open governance developments and practices, especially given that PPPs have not played a major role in the governance innovation space to date?

The PPP Knowledge Lab defines a PPP as a “long-term contract between a private party and a government entity, for providing a public asset or service, in which the private party bears significant risk and management responsibility and remuneration is linked to performance.”…

To maximize the value of PPPs, we don’t just need new tools or experiments but new models for using assets and expertise in different sectors. We need to bring that capacity to public problems.

At the latest convening of the MacArthur Foundation Research Network on Opening Governance, Network members and experts from across the field tried to chart this new course by exploring questions about the future of PPPs.

The group explored the new research and thinking that enables many new types of collaboration beyond typical “contract”-based approaches. Through their discussions, Network members identified four shifts representing ways that cross-sector collaboration could evolve in the future:

  1. From Formal to Informal Trust Mechanisms;
  2. From Selection to Iterative and Inclusive Curation;
  3. From Partnership to Platform; and
  4. From Shared Risk to Shared Outcome….(More)”.

A Hippocratic Oath for Technologists


Chapter by Ali Abbas, Max Senges and Ronald A. Howard in “Next Generation Ethics: Engineering a Better Society” (2018): “…presents an ethical creed, which we refer to as the Hippocratic Oath for Technologists. The creed is built on three fundamental pillars: proactively understanding the ethical implications of a technology for all stakeholders; telling the truth about the capabilities, advantages, and disadvantages of a technology; and acting responsibly in situations you find morally challenging.

The oath may be taken by students at universities after understanding its basic definitions and implications, and it may also be discussed with technology firms and human resources departments to provide the necessary support and understanding for their employees who wish to abide by the norms of this oath. This work lays the foundations for the arguments and requirements of a unified movement, as well as a forum for signing up for the oath to enable its widespread dissemination….(More)”.

Welcome to ShareTown


Jenni Lloyd and Alice Casey at Nesta: “Today, we’re pleased to welcome you to ShareTown. Our fictional town and its cast of characters sets out an unashamedly positive vision of a preferred future in which interactions between citizens and local government are balanced and collaborative, and data and digital platforms are deployed for public benefit rather than private gain.

In this future, government plays a plurality of roles, working closely with local people to understand their needs, how these can best be met and by whom. Provided with new opportunities to connect and collaborate with others, individuals and households are free to navigate, combine and contribute to different services as they see fit….

…the ShareLab team wanted to find a route by which we could explore how people’s needs can be put at the centre of services, using collaborative models for organising and ownership, aided by platform technology. And to do this we decided to be radically optimistic and focus on a preferred future in which those ideas that are currently emerging at the edges have become the norm.

Futures Cone from Nesta’s report ‘Don’t Stop Thinking About Tomorrow: A modest defence of futurology’

ShareTown is not intended as a prediction, but a source of inspiration – and provocation. If, as theatre-maker Annette Mees says, the future is fictional and the fictions created about it help us set our direction of travel, then the making of stories about the future we want should be something we can all be involved in – not just the media, politicians, or brands…. (More)”.

What difference does data make? Data management and social change


Paper by Morgan E. Currie and Joan M. Donovan: “The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists.

The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces its own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain.

In analysing the three case studies, the authors surface how temporal dimensions, geographic scale and sociotechnical politics influence these differing data management strategies….(More)”.

All Data Are Local: Thinking Critically in a Data-Driven Society


Book by Yanni Alexander Loukissas: “In our data-driven society, it is too easy to assume the transparency of data. Instead, Yanni Loukissas argues in All Data Are Local, we should approach data sets with an awareness that data are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them. All data are local. The term data set implies something discrete, complete, and portable, but it is none of those things. Examining a series of data sources important for understanding the state of public life in the United States—Harvard’s Arnold Arboretum, the Digital Public Library of America, UCLA’s Television News Archive, and the real estate marketplace Zillow—Loukissas shows us how to analyze data settings rather than data sets.

Loukissas sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. He then provides a set of practical guidelines to follow. To make his argument, Loukissas employs a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the “myth of digital universalism,” Loukissas reminds us of the meaning-making power of the local….(More)”.

These patients are sharing their data to improve healthcare standards


Article by John McKenna: “We’ve all heard about donating blood, but how about donating data?

Chronic non-communicable diseases (NCDs) like diabetes, heart disease and epilepsy are predicted by the World Health Organization to account for 57% of all disease by 2020.

Heart disease and stroke are the world’s biggest killers.

This has led some experts to call NCDs the “greatest challenge to global health”.

Could data provide the answer?

Today, over 600,000 patients from around the world share data on more than 2,800 chronic diseases to improve research and treatment of their conditions.

People who join the PatientsLikeMe online community share information on everything from their medication and treatment plans to their emotional struggles.

Many of the participants say that it is hugely beneficial just to know there is someone else out there going through similar experiences.

But through its use of data, the platform also has the potential for far more wide-ranging benefits to help improve the quality of life for patients with chronic conditions.

Give data, get data

PatientsLikeMe is one of a swathe of emerging data platforms in the healthcare sector that provide tech solutions to health problems, from speeding up clinical trials through real-time data analysis to using blockchain to enable the secure sharing of patient data.

Its philosophy is “give data, get data”. In practice it means that every patient using the website has access to an array of crowd-sourced information from the wider community, such as common medication side-effects, and patterns in sufferers’ symptoms and behaviour….(More)”.
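The aggregation behind that philosophy can be pictured with a small sketch: pooled individual reports yield community-level patterns, such as the most common side effects per medication, that no single report contains. The data and field names below are entirely invented; PatientsLikeMe’s actual pipeline is, of course, far more involved.

```python
# Sketch of "give data, get data": each patient contributes a report, and
# everyone gets back aggregate patterns. All data here are invented.
from collections import Counter, defaultdict

patient_reports = [
    {"medication": "drug_a", "side_effects": ["fatigue", "nausea"]},
    {"medication": "drug_a", "side_effects": ["fatigue"]},
    {"medication": "drug_a", "side_effects": ["headache", "fatigue"]},
    {"medication": "drug_b", "side_effects": ["dizziness"]},
    {"medication": "drug_b", "side_effects": ["dizziness", "nausea"]},
]

# Count how often each side effect is reported against each medication.
side_effect_counts = defaultdict(Counter)
reports_per_medication = Counter()
for report in patient_reports:
    side_effect_counts[report["medication"]].update(report["side_effects"])
    reports_per_medication[report["medication"]] += 1

# What each contributing patient "gets back": community-wide patterns.
for medication, counts in side_effect_counts.items():
    total = reports_per_medication[medication]
    for effect, n in counts.most_common():
        print(f"{medication}: {effect} reported by {n}/{total} patients")
```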