Can Technology Support Democracy?


Essay by Douglas Schuler: “The utopian optimism about democracy and the internet has given way to disillusionment. At the same time, given the complexity of today’s wicked problems, the need for democracy is critical. Unfortunately, democracy is under attack around the world, and there are ominous signs of its retreat.

How does democracy fare when digital technology is added to the picture? Weaving technology and democracy together is risky, and technologists who begin any digital project with the conviction that technology can and will solve “problems” of democracy are likely to be disappointed. Technology can be a boon to democracy if it is informed technology.

The goal in writing this essay was to encourage people to help develop and cultivate a rich democratic sphere. Democracy has great potential that it rarely achieves. It is radical, critical, complex, and fragile. It takes different forms in different contexts. These forms are complex, and the solutionism promoted by the computer industry and others is ill-suited to democracy. The primary aim of technology in the service of democracy is not merely to make it easier or more convenient but to improve society’s civic intelligence, its ability to address the problems it faces effectively and equitably….(More)”.

Digital tools can be a useful bolster to democracy


Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Ms Tang views as a circular phenomenon.

When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.

The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.

Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.

For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
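The mechanics behind the budget exercise described above can be sketched in a few lines: under quadratic voting, casting n votes on a single issue costs n² “voice credits”, so expressing a very strong preference is disproportionately expensive and voters are nudged to spread their credits across the issues they genuinely care about. A minimal illustration follows; the issue names and the 100-credit budget are invented for the example, not drawn from the Colorado exercise.

```python
def quadratic_cost(votes: int) -> int:
    """Casting n votes on one issue costs n**2 voice credits."""
    return votes ** 2

def ballot_is_valid(ballot: dict, budget: int) -> bool:
    """A ballot is valid if its total credit cost fits within the voter's budget."""
    return sum(quadratic_cost(v) for v in ballot.values()) <= budget

# A voter with 100 credits can buy at most 10 votes on a single issue,
# or spread weaker preferences across several issues instead.
ballot = {"parks": 5, "transit": 7, "housing": 3}   # costs 25 + 49 + 9 = 83 credits
print(ballot_is_valid(ballot, budget=100))          # → True
```

The quadratic cost curve is what distinguishes this from ordinary cumulative voting: the tenth vote on one issue costs as much as spreading nineteen single votes elsewhere, so intensity of preference is revealed rather than merely asserted.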

The Future of Minds and Machines


Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies; with inevitable regularity we observe the ascent of new technologies to the peak of inflated expectations they will not be able to fulfil, before dooming them to a period languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.

New privacy-protected Facebook data for independent research on social media’s impact on democracy


Chaya Nayak at Facebook: “In 2018, Facebook began an initiative to support independent academic research on social media’s role in elections and democracy. This first-of-its-kind project seeks to provide researchers access to privacy-preserving data sets in order to support research on these important topics.

Today, we are announcing that we have substantially increased the amount of data we’re providing to 60 academic researchers across 17 labs and 30 universities around the world. This release delivers on the commitment we made in July 2018 to share a data set that enables researchers to study information and misinformation on Facebook, while also ensuring that we protect the privacy of our users.

This new data release supplants data we released in the fall of 2019. That 2019 data set consisted of links that had been shared publicly on Facebook by at least 100 unique Facebook users. It included information about share counts, ratings by Facebook’s third-party fact-checkers, and user reporting on spam, hate speech, and false news associated with those links. We have expanded the data set to now include more than 38 million unique links with new aggregated information to help academic researchers analyze how many people saw these links on Facebook and how they interacted with that content – including views, clicks, shares, likes, and other reactions. We’ve also aggregated these shares by age, gender, country, and month. And, we have expanded the time frame covered by the data from January 2017 – February 2019 to January 2017 – August 2019.

With this data, researchers will be able to understand important aspects of how social media shapes our world. They’ll be able to make progress on the research questions they proposed, such as “how to characterize mainstream and non-mainstream online news sources in social media” and “studying polarization, misinformation, and manipulation across multiple platforms and the larger information ecosystem.”

In addition to the data set of URLs, researchers will continue to have access to CrowdTangle and Facebook’s Ad Library API to augment their analyses. Per the original plan for this project, outside of a limited review to ensure that no confidential or user data is inadvertently released, these researchers will be able to publish their findings without approval from Facebook.

We are sharing this data with researchers while continuing to prioritize the privacy of people who use our services. This new data set, like the data we released before it, is protected by a method known as differential privacy. Researchers have access to data tables from which they can learn about aggregated groups, but where they cannot identify any individual user. As Harvard University’s Privacy Tools project puts it:

“The guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset — anything the algorithm might output on a database containing some individual’s information is almost as likely to have come from a database without that individual’s information. … This gives a formal guarantee that individual-level information about participants in the database is not leaked.” …(More)”
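The guarantee quoted above is typically achieved by adding calibrated random noise to aggregate answers. As a rough illustration only (this is the standard Laplace mechanism from the differential-privacy literature, not a description of Facebook’s actual system; the function names are invented), here is how a counting query can be answered privately:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential variates is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (one person joining or leaving the dataset
    changes it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The privacy parameter epsilon governs the trade-off: smaller values add more noise and give individuals stronger deniability, which is why differentially private releases like Facebook’s report aggregates rather than exact per-user figures.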

Google redraws the borders on maps depending on who’s looking


Greg Bensinger in the Washington Post: “For more than 70 years, India and Pakistan have waged sporadic and deadly skirmishes over control of the mountainous region of Kashmir. Tens of thousands have died in the conflict, including three just this month.

Both sides claim the Himalayan outpost as their own, but Web surfers in India could be forgiven for thinking the dispute is all but settled: The borders on Google’s online maps there display Kashmir as fully under Indian control. Elsewhere, users see the region’s snaking outlines as a dotted line, acknowledging the dispute.

Google’s corporate mission is “to organize the world’s information,” but it also bends that information to its will. From Argentina to the United Kingdom to Iran, the world’s borders look different depending on where you’re viewing them from. That’s because Google — and other online mapmakers — simply change them.

With some 80 percent market share in mobile maps and over a billion users, Google Maps has an outsize impact on people’s perception of the world — from driving directions to restaurant reviews to naming attractions to adjudicating historical border wars.

And while maps are meant to bring order to the world, the Silicon Valley firm’s decision-making on maps is often shrouded in secrecy, even to some of those who work to shape its digital atlases every day. It is influenced not just by history and local laws, but also the shifting whims of diplomats, policymakers and its own executives, say people familiar with the matter, who asked not to be identified because they weren’t authorized to discuss internal processes….(More)”.

Realizing the Potential of AI Localism


Stefaan G. Verhulst and Mona Sloane at Project Syndicate: “Every new technology rides a wave from hype to dismay. But even by the usual standards, artificial intelligence has had a turbulent run. Is AI a society-renewing hero or a jobs-destroying villain? As always, the truth is not so categorical.

As a general-purpose technology, AI will be what we make of it, with its ultimate impact determined by the governance frameworks we build. As calls for new AI policies grow louder, there is an opportunity to shape the legal and regulatory infrastructure in ways that maximize AI’s benefits and limit its potential harms.

Until recently, AI governance has been discussed primarily at the national level. But most national AI strategies – particularly China’s – are focused on gaining or maintaining a competitive advantage globally. They are essentially business plans designed to attract investment and boost corporate competitiveness, usually with an added emphasis on enhancing national security.

This singular focus on competition has meant that framing rules and regulations for AI has been ignored. But cities are increasingly stepping into the void, with New York, Toronto, Dubai, Yokohama, and others serving as “laboratories” for governance innovation. Cities are experimenting with a range of policies, from bans on facial-recognition technology and certain other AI applications to the creation of data collaboratives. They are also making major investments in responsible AI research, localized high-potential tech ecosystems, and citizen-led initiatives.

This “AI localism” is in keeping with the broader trend in “New Localism,” as described by public-policy scholars Bruce Katz and the late Jeremy Nowak. Municipal and other local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception.

For example, New York, Seattle, and other cities have embraced what Ira Rubinstein of New York University calls “privacy localism,” by filling significant gaps in federal and state legislation, particularly when it comes to surveillance. Similarly, in the absence of a national or global broadband strategy, many cities have pursued “broadband localism,” by taking steps to bridge the service gap left by private-sector operators.

As a general approach to problem solving, localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability….(More)”.

NGOs embrace GDPR, but will it be used against them?


Report by Vera Franz et al: “When the world’s most comprehensive digital privacy law – the EU General Data Protection Regulation (GDPR) – took effect in May 2018, media and tech experts focused much of their attention on how corporations, who hold massive amounts of data, would be affected by the law.

This focus was understandable, but it left some important questions under-examined, specifically about non-profit organizations that operate in the public’s interest. How would non-governmental organizations (NGOs) be impacted? What does GDPR compliance mean in very practical terms for NGOs? What are the challenges they are facing? Could the GDPR be ‘weaponized’ against NGOs and if so, how? What good compliance practices can be shared among non-profits?

Ben Hayes and Lucy Hannah from Data Protection Support & Management and I have examined these questions in detail and released our findings in this report.

Our key takeaway: GDPR compliance is an integral part of organisational resilience, and it requires resources and attention from NGO leaders, foundations and regulators to defend their organisations against attempts by governments and corporations to misuse the GDPR against them.

In a political climate where human rights and social justice groups are under increasing pressure, GDPR compliance needs to be given the attention it deserves by NGO leaders and funders. Lack of compliance will attract enforcement action by data protection regulators and create opportunities for retaliation by civil society adversaries.

At the same time, since the law came into force, we recognise that some NGOs have over-complied with the law, possibly diverting scarce resources and hampering operations.

For example, during our research, we discovered a small NGO that undertook an advanced and resource-intensive compliance process (a Data Protection Impact Assessment or DPIA) for all processing operations. DPIAs are only required for large-scale and high-risk processing of personal data. Yet this NGO, which holds very limited personal data and undertakes no marketing or outreach activities, engaged in this complex and time-consuming assessment because the organization was under enormous pressure from their government. They told us they “wanted to do everything possible to avoid attracting attention.”…

Our research also found that private companies, individuals and governments who oppose the work of an organisation have used GDPR to try to keep NGOs from publishing their work. To date, NGOs have successfully fought against this misuse of the law….(More)”.

The Rise and Fall of Good-Governance Promotion


Alina Mungiu-Pippidi at the Journal of Democracy: “With the 2003 adoption of the UN Convention Against Corruption, good-governance norms have achieved—on the formal level at least—a degree of recognition that can fairly be called universal. This reflects a centuries-long struggle to establish the moral principle of “ethical universalism,” which brings together the ideas of equity, reciprocity, and impartiality. The West’s success in promoting this norm has been extraordinary, yet there are also significant risks. Despite expectations that international concern and increased regulation would lead to less corruption, current trends suggest otherwise. Exchanges between countries perceived as corrupt and countries perceived as noncorrupt seem to lead to an increase in corruption in the noncorrupt states rather than its decrease in the corrupt ones. Direct good-governance interventions have had poor results. And anticorruption has helped populist politicians, who use anti-elite rhetoric similar to that of anticorruption campaigners….(More)”.

Global Standard Setting in Internet Governance


Book by Alison Harcourt, George Christou, and Seamus Simpson: “The book addresses representation of the public interest in Internet standard developing organisations (SDOs). Much of the existing literature on Internet governance focuses on international organisations such as the United Nations (UN), the Internet Governance Forum (IGF) and the Internet Corporation for Assigned Names and Numbers (ICANN). The literature covering standard developing organisations has to date focused on organisational aspects. This book breaks new ground with investigation of standard development within SDO fora. Case studies centre on standards relating to privacy and security, mobile communications, Intellectual Property Rights (IPR) and copyright. The book lifts the lid on internet standard setting with detailed insight into a world which, although highly technical, very much affects the way in which citizens live and work on a daily basis. In doing this it adds significantly to the trajectory of research on Internet standards and SDOs that explore the relationship between politics and protocols.

The analysis contributes to academic debates on democracy and the internet, global self-regulation and civil society, and international decision-making processes in unstructured environments. The book advances work on the Multiple Streams Framework (MS) by applying it to decision-making in non-state environments, namely SDOs which have long been dominated by private actors. ….(More)”

International Humanitarian and Development Aid and Big Data Governance


Chapter by Andrej Zwitter: “Modern technology and innovations constantly transform the world. This also applies to humanitarian action and development aid, for example: humanitarian drones, crowdsourcing of information, or the utility of Big Data in crisis analytics and humanitarian intelligence. The acceleration of modernization in these adjacent fields can in part be attributed to new partnerships between aid agencies and new private stakeholders that increasingly become active, such as individual crisis mappers, mobile telecommunication companies, or technological SMEs.

These partnerships, however, must be described as simultaneously beneficial as well as problematic. Many private actors do not subscribe to the humanitarian principles (humanity, impartiality, independence, and neutrality), which govern UN and NGO operations, or are not even aware of them. Their interests are not solely humanitarian, but may include entrepreneurial agendas. The unregulated use of data in humanitarian intelligence has already caused negative consequences such as the exposure of sensitive data about aid agencies and of victims of disasters.

This chapter investigates the emergent governance trends around data innovation in the humanitarian and development field. It takes a look at the ways in which the field tries to regulate itself and the utility of the humanitarian principles for Big Data analytics and data-driven innovation. It will argue that it is crucially necessary to formulate principles for data governance in the humanitarian context in order to ensure the safeguarding of beneficiaries that are particularly vulnerable. In order to do that, the chapter proposes to reinterpret the humanitarian principles to accommodate the new reality of datafication of different aspects of society…(More)”.