Chatbots Are a Danger to Democracy


Jamie Susskind in the New York Times: “As we survey the fallout from the midterm elections, it would be easy to miss the longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process.

Chatbots are software programs that are capable of conversing with human beings on social media using natural language. Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.

Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side….

We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make no more than a specified number of online contributions per day, or a specified number of responses to a particular human? Bots peddling suspect information could be challenged by moderator-bots to provide recognized sources for their claims within seconds. Those that fail would face removal.
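The first of these rules — platform-enforced caps on bot activity — is straightforward to express in code. The sketch below is a minimal illustration of the idea, not anything from the article: the class name, the limit values, and the 24-hour reset window are all hypothetical choices made for the example.

```python
from collections import defaultdict
import time

# Hypothetical limits; a real platform would tune these values.
MAX_POSTS_PER_DAY = 50
MAX_REPLIES_PER_HUMAN = 5

class BotRateLimiter:
    """Tracks one bot account's daily posts and per-human replies."""

    def __init__(self, now=time.time):
        self._now = now
        self._day_start = now()
        self._posts_today = 0
        self._replies = defaultdict(int)  # human_id -> replies sent today

    def _roll_day(self):
        # Reset all counters once 24 hours have elapsed.
        if self._now() - self._day_start >= 86400:
            self._day_start = self._now()
            self._posts_today = 0
            self._replies.clear()

    def allow_post(self, reply_to=None):
        """Return True and record the post if both caps permit it."""
        self._roll_day()
        if self._posts_today >= MAX_POSTS_PER_DAY:
            return False  # daily contribution cap reached
        if reply_to is not None and self._replies[reply_to] >= MAX_REPLIES_PER_HUMAN:
            return False  # this human has been replied to enough today
        self._posts_today += 1
        if reply_to is not None:
            self._replies[reply_to] += 1
        return True
```

A bot exceeding either cap is simply refused: `allow_post()` returns `False` once 50 posts have been made that day, and `allow_post(reply_to="some_user")` returns `False` after the fifth reply to that user, even though the daily budget is not yet exhausted.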

We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate. For both those reasons, the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake….(More)”.

Reimagining Public-Private Partnerships: Four Shifts and Innovations in Sharing and Leveraging Private Assets and Expertise for the Public Good


Blog by Stefaan G. Verhulst and Andrew J. Zahuranec: “For years, public-private partnerships (PPPs) have promised to help governments do more for less. Yet, the discussion and experimentation surrounding PPPs often focus on outdated models and narratives, and the field of experimentation has not fully embraced the opportunities provided by an increasingly networked and data-rich private sector.

Private-sector actors (including businesses and NGOs) have expertise and assets that, if brought to bear in collaboration with the public sector, could spur progress in addressing public problems or providing public services. Challenges to date have largely involved the identification of effective and legitimate means for unlocking the public value of private-sector expertise and assets. Those interested in creating public value through PPPs are faced with a number of questions, including:

  • How do we broaden and deepen our understanding of PPPs in the 21st century?
  • How can we innovate and improve the ways that PPPs tap into private-sector assets and expertise for the public good?
  • How do we connect actors in the PPP space with open governance developments and practices, especially given that PPPs have not played a major role in the governance innovation space to date?

The PPP Knowledge Lab defines a PPP as a “long-term contract between a private party and a government entity, for providing a public asset or service, in which the private party bears significant risk and management responsibility and remuneration is linked to performance.”…

To maximize the value of PPPs, we don’t just need new tools or experiments but new models for using assets and expertise in different sectors. We need to bring that capacity to public problems.

At the latest convening of the MacArthur Foundation Research Network on Opening Governance, Network members and experts from across the field tried to chart this new course by exploring questions about the future of PPPs.

The group explored the new research and thinking that enables many new types of collaboration beyond the typical “contract” based approaches. Through their discussions, Network members identified four shifts representing ways that cross-sector collaboration could evolve in the future:

  1. From Formal to Informal Trust Mechanisms;
  2. From Selection to Iterative and Inclusive Curation;
  3. From Partnership to Platform; and
  4. From Shared Risk to Shared Outcome….(More)”.

Giving Voice to Patients: Developing a Discussion Method to Involve Patients in Translational Research


Paper by Marianne Boenink, Lieke van der Scheer, Elisa Garcia and Simone van der Burg in NanoEthics: “Biomedical research policy in recent years has often tried to make such research more ‘translational’, aiming to facilitate the transfer of insights from research and development (R&D) to health care for the benefit of future users. Involving patients in deliberations about and design of biomedical research may increase the quality of R&D and of resulting innovations and thus contribute to translation. However, patient involvement in biomedical research is not an easy feat. This paper discusses the development of a method for involving patients in (translational) biomedical research aiming to address its main challenges.

After reviewing the potential challenges of patient involvement, we formulate three requirements for any method to meaningfully involve patients in (translational) biomedical research. It should enable patients (1) to put forward their experiential knowledge, (2) to develop a rich view of what an envisioned innovation might look like and do, and (3) to connect their experiential knowledge with the envisioned innovation. We then describe how we developed the card-based discussion method ‘Voice of patients’, and discuss to what extent the method, when used in four focus groups, satisfied these requirements. We conclude that the method is quite successful in mobilising patients’ experiential knowledge, in stimulating their imaginaries of the innovation under discussion and to some extent also in connecting these two. More work is needed to translate patients’ considerations into recommendations relevant to researchers’ activities. It also seems wise to broaden the audience for patients’ considerations to other actors working on a specific innovation….(More)”