A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as exposure to a weakened pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds their resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific and have often remained confined to the particular context they are meant to inoculate people against. For example, an inoculation message might forewarn people that false information is circulating which encourages them to drink bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.

Revenge of the Experts: Will COVID-19 Renew or Diminish Public Trust in Science?


Paper by Barry Eichengreen, Cevat Aksoy and Orkun Saka: “It is sometimes said that an effect of the COVID-19 pandemic will be heightened appreciation of the importance of scientific research and expertise. We test this hypothesis by examining how exposure to previous epidemics affected trust in science and scientists. Building on the “impressionable years hypothesis” that attitudes are durably formed between the ages of 18 and 25, we focus on individuals exposed to epidemics in their country of residence at this particular stage of the life course. Combining data from a 2018 Wellcome Trust survey of more than 75,000 individuals in 138 countries with data on global epidemics since 1970, we show that such exposure has no impact on views of science as an endeavor but that it significantly reduces trust in scientists and in the benefits of their work. We also illustrate that the decline in trust is driven by individuals with little previous training in science subjects. Finally, our evidence suggests that epidemic-induced distrust translates into lower compliance with health-related policies in the form of negative views towards vaccines and lower rates of child vaccination….(More)”.
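
The empirical design described above (flagging respondents who lived through an epidemic in their country of residence while aged 18 to 25, then comparing their reported trust in scientists) can be sketched in a few lines. The sketch below is purely illustrative: the column names, toy records, and simple mean comparison are assumptions, not the authors’ actual Wellcome Trust or epidemic data, and the paper itself estimates full regression models with controls.

```python
import pandas as pd

# Hypothetical survey records: one row per respondent (placeholder columns).
survey = pd.DataFrame({
    "country": ["KEN", "KEN", "BRA", "BRA"],
    "birth_year": [1968, 1990, 1975, 2000],
    "trust_scientists": [2, 4, 3, 4],   # 1 = low trust ... 4 = high trust
})

# Hypothetical epidemic events: country and year of a major outbreak.
epidemics = pd.DataFrame({
    "country": ["KEN", "BRA"],
    "epidemic_year": [1985, 1991],
})

# Flag respondents who experienced an epidemic in their country of residence
# while in their "impressionable years" (ages 18 to 25).
merged = survey.reset_index().merge(epidemics, on="country", how="left")
age_at_epidemic = merged["epidemic_year"] - merged["birth_year"]
merged["exposed_18_25"] = age_at_epidemic.between(18, 25)

# Collapse back to one row per respondent (exposed if any epidemic qualifies).
survey["exposed_18_25"] = merged.groupby("index")["exposed_18_25"].any()

# Naive comparison of mean trust by exposure status; the paper's regressions
# add individual and country controls, so this only conveys the intuition.
print(survey.groupby("exposed_18_25")["trust_scientists"].mean())
```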

Inside the ‘Wikipedia of Maps,’ Tensions Grow Over Corporate Influence


Corey Dickinson at Bloomberg: “What do Lyft, Facebook, the International Red Cross, the U.N., the government of Nepal and Pokémon Go have in common? They all use the same source of geospatial data: OpenStreetMap, a free, open-source online mapping service akin to Google Maps or Apple Maps. But unlike those corporate-owned mapping platforms, OSM is built on a network of mostly volunteer contributors. Researchers have described it as the “Wikipedia for maps.”

Since it launched in 2004, OpenStreetMap has become an essential part of the world’s technology infrastructure. Hundreds of millions of monthly users interact with services derived from its data, from ride-hailing apps to social media geotagging on Snapchat and Instagram to humanitarian relief operations in the wake of natural disasters.

But recently the map has been changing, due to the growing impact of private sector companies that rely on it. In a 2019 paper published in the ISPRS International Journal of Geo-Information, a cross-institutional team of researchers traced how Facebook, Apple, Microsoft and other companies have gained prominence as editors of the map. Their priorities, the researchers say, are driving significant change to what is being mapped compared to the past.

“OpenStreetMap’s data is crowdsourced, which has always made spectators to the project a bit wary about the quality of the data,” says Dipto Sarkar, a professor of geoscience at Carleton University in Ottawa, and one of the paper’s co-authors. “As the data becomes more valuable and is used for an ever-increasing list of projects, the integrity of the information has to be almost perfect. These companies need to make sure there’s a good map of the places they want to expand in, and nobody else is offering that, so they’ve decided to fill it in themselves.”…(More)”.
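
As a concrete illustration of how developers consume OpenStreetMap data, the sketch below queries the public Overpass API for one feature type inside a small bounding box. The endpoint and query language are real, but the bounding box and tag are arbitrary examples, and heavy users typically work from bulk extracts or their own Overpass instance rather than the public server.

```python
import requests

# Overpass QL query: all cafe nodes inside a small bounding box
# (south, west, north, east) around central London. The bounding box
# and the "amenity"="cafe" tag are arbitrary examples.
query = """
[out:json][timeout:25];
node["amenity"="cafe"](51.50,-0.13,51.52,-0.10);
out body;
"""

# Public Overpass API endpoint; production services usually avoid it and
# build their own pipelines on top of OSM extracts.
response = requests.post(
    "https://overpass-api.de/api/interpreter",
    data={"data": query},
    timeout=30,
)
response.raise_for_status()

# Print each returned feature's name (if tagged) and coordinates.
for element in response.json().get("elements", []):
    name = element.get("tags", {}).get("name", "<unnamed>")
    print(f'{name}: ({element["lat"]}, {element["lon"]})')
```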

Future of Vulnerability: Humanity in the Digital Age


Report by the Australian Red Cross: “We find ourselves at the crossroads of humanity and technology. It is time to put people and society at the centre of our technological choices. To ensure that benefits are widely shared. To end the cycle of vulnerable groups benefiting least and being harmed most by new technologies.

There is an agenda for change across research, policy and practice towards responsible, inclusive and ethical uses of data and technology.
People and civil society must be at the centre of this work, involved in generating insights and developing prototypes, in evidence-based decision-making about impacts, and as part of new ‘business as usual’.

The Future of Vulnerability report invites a conversation around the complex questions that all of us collectively need to ask about the vulnerabilities frontier technologies can introduce or heighten. It also highlights opportunities for collaborative exploration to develop and promote ‘humanity first’ approaches to data and technology….(More)”.

Critical Perspectives on Open Development


Book edited by Arul Chib, Caitlin M. Bentley, and Matthew L. Smith: “Over the last ten years, “open” innovations—the sharing of information and communications resources without access restrictions or cost—have emerged within international development. But do these innovations empower poor and marginalized populations? This book examines whether, for whom, and under what circumstances the free, networked, public sharing of information and communication resources contributes (or not) toward a process of positive social transformation. The contributors offer cross-cutting theoretical frameworks and empirical analyses that cover a broad range of applications, emphasizing the underlying aspects of open innovations that are shared across contexts and domains.

The book first outlines theoretical frameworks that span knowledge stewardship, trust, situated learning, identity, participation, and power decentralization. It then investigates these frameworks across a range of institutional and country contexts, considering each in terms of the key emancipatory principles and structural impediments it seeks to address. Taken together, the chapters offer an empirically tested theoretical direction for the field….(More)”.

Collective bargaining on digital platforms and data stewardship


Paper by Astha Kapoor: “… there is a need to think of exploitation on platforms not only through the lens of labour rights but also that of data rights. In the current context, it is impossible to imagine well-being without more agency over the way data are collected, stored and used. It is imperative to envision structures through which worker communities and representatives can be more involved in determining their own data lives on platforms. There is a need to organize and mobilize workers on data rights.

One of the ways in which this can be done is through a mechanism of community data stewards who represent the needs and interests of workers to their platforms, thus negotiating and navigating data-based decisions. This paper examines the need for data rights as a critical requirement for worker well-being in the platform economy and the ways in which it can be actualized. It argues, given that workers on platforms produce data through collective labour on and off the platform, that worker data are a community resource and should be governed by representatives of workers who can negotiate with platforms on the use of that data for workers and for the public interest. The paper analyses the opportunity for a community data steward mechanism that represents workers’ interests and intermediates on data issues, such as transparency and accountability, with offline support systems, and that gives voice to online action to address some of the injustices of the data economy. Thus, a data steward is a tool through which workers can better control their data—consent, privacy and rights—and organize online. Essentially, it is a way forward for workers to mobilize collective bargaining on data rights.

The paper covers the impact of the COVID-19 pandemic on workers’ rights and well-being. It explores the idea of community data rights in the platform economy and why collective bargaining on data is imperative for any kind of meaningful negotiation with technology companies. The role of a community data steward in reclaiming workers’ power in the platform economy is explained, concluding with policy recommendations for a community data steward structure in the Indian context….(More)”.

Public-Private Partnerships: Compound and Data Sharing in Drug Discovery and Development


Paper by Andrew M. Davis et al.: “Collaborative efforts between public and private entities such as academic institutions, governments, and pharmaceutical companies form an integral part of scientific research, and notable instances of such initiatives have been created within the life science community. Several examples of alliances exist with the broad goal of collaborating toward scientific advancement and improved public welfare. Such collaborations can be essential in catalyzing breaking areas of science within high-risk or global public health strategies that may have otherwise not progressed. A common term used to describe these alliances is public-private partnership (PPP). This review discusses different aspects of such partnerships in drug discovery/development and provides example applications as well as successful case studies. Specific areas that are covered include PPPs for sharing compounds at various phases of the drug discovery process—from compound collections for hit identification to sharing clinical candidates. Instances of PPPs to support better data integration and build better machine learning models are also discussed. The review also provides examples of PPPs that address the gap in knowledge or resources among involved parties and advance drug discovery, especially in disease areas with unmet medical and/or social needs, like neurological disorders, cancer, and neglected and rare diseases….(More)”.

Time to evaluate COVID-19 contact-tracing apps


Letter to the Editor of Nature by Vittoria Colizza et al: “Digital contact tracing is a public-health intervention. Real-time monitoring and evaluation of the effectiveness of app-based contact tracing is key for improvement and public trust.

SARS-CoV-2 is likely to become endemic in many parts of the world, and there is still no certainty about how quickly vaccination will become available or how long its protection will last. For the foreseeable future, most countries will rely on a combination of various measures, including vaccination, social distancing, mask wearing and contact tracing.

Digital contact tracing via smartphone apps was established as a new public-health intervention in many countries in 2020. Most of these apps are now at a stage at which they need to be evaluated as public-health tools. We present here five key epidemiological and public-health requirements for COVID-19 contact-tracing apps and their evaluation.

1. Integration with local health policy. App notifications should be consistent with local health policies. The app should be integrated into access to testing, medical care and advice on isolation, and should work in conjunction with conventional contact tracing where available [1]. Apps should be interoperable across countries, as envisaged by the European Commission’s eHealth Network.

2. High user uptake and adherence. Contact-tracing apps can reduce transmission at low levels of uptake, including for those without smartphones [2]. However, large numbers of users increase effectiveness [3,4]. An effective communication strategy that explains the apps’ role and addresses privacy concerns is essential for increasing adoption [5]. Design, implementation and deployment should make the apps accessible to harder-to-reach communities. Adherence to quarantine should be encouraged and supported.

3. Quarantine infectious people as accurately as possible. The purpose of contact tracing is to quarantine as many potentially infectious people as possible, but to minimize the time spent in quarantine by uninfected people. To achieve optimal performance, apps’ algorithms must be ‘tunable’, to adjust to the epidemic as it evolves [6] (see the sketch after this list).

4. Rapid notification. The time between the onset of symptoms in an index case and the quarantine of their contacts is of key importance in COVID-19 contact tracing [7,8]. Where a design feature introduces a delay, it needs to be outweighed by gains in, for example, specificity, uptake or adherence. If the delays exceed the period during which most contacts transmit the disease, the app will fail to reduce transmission.

5. Ability to evaluate effectiveness transparently. The public must be provided with evidence that notifications are based on the best available data. The tracing algorithm should therefore be transparent, auditable, under oversight and subject to review. Aggregated data (not linked to individual people) are essential for evaluation of and improvement in the performance of the app. Data on local uptake at a sufficiently coarse-grained spatial resolution are equally key. As apps in Europe do not ‘geolocate’ people, this additional information can be provided by the user or through surveys. Real-time monitoring should be performed whenever possible….(More)”.
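
Point 3’s requirement that notification algorithms be ‘tunable’ can be made concrete with a minimal sketch. The weights, exposure bins and threshold below are invented for illustration only; they are not the parameters of any deployed app, although exposure-notification frameworks give health authorities broadly similar configuration options.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RiskConfig:
    """Parameters a health authority could retune as the epidemic evolves.

    All values are illustrative placeholders, not those of any real app.
    """
    # Weight per Bluetooth attenuation bin (a rough proxy for distance):
    # close, medium, far contact.
    attenuation_weights: Tuple[float, float, float] = (1.0, 0.5, 0.1)
    # Weighted exposure minutes above which a notification is sent.
    notification_threshold_minutes: float = 15.0

def exposure_risk(minutes_per_bin: Tuple[float, float, float],
                  config: RiskConfig) -> float:
    """Weighted exposure minutes summed across attenuation bins."""
    return sum(w * m for w, m in zip(config.attenuation_weights, minutes_per_bin))

def should_notify(minutes_per_bin: Tuple[float, float, float],
                  config: RiskConfig) -> bool:
    return exposure_risk(minutes_per_bin, config) >= config.notification_threshold_minutes

# Example encounter: 10 minutes close, 10 minutes medium, 30 minutes far contact.
contact = (10.0, 10.0, 30.0)

cautious = RiskConfig()
print(should_notify(contact, cautious))   # True: 10 + 5 + 3 = 18 >= 15

# If quarantining uninfected people becomes too costly, the authority can
# raise the threshold without redesigning the app itself.
relaxed = RiskConfig(notification_threshold_minutes=30.0)
print(should_notify(contact, relaxed))    # False: 18 < 30
```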

AI Ethics Needs Good Data


Paper by Angela Daly, S Kate Devitt, and Monique Mann: “In this chapter we argue that discourses on AI must transcend the language of ‘ethics’ and engage with power and political economy in order to constitute ‘Good Data’. In particular, we must move beyond the depoliticised language of ‘ethics’ currently deployed (Wagner 2018) in determining whether AI is ‘good’ given the limitations of ethics as a frame through which AI issues can be viewed. In order to circumvent these limits, we use instead the language and conceptualisation of ‘Good Data’, as a more expansive term to elucidate the values, rights and interests at stake when it comes to AI’s development and deployment, as well as that of other digital technologies.

Good Data considerations move beyond recurring themes of data protection/privacy and the FAT (fairness, transparency and accountability) movement to include explicit political economy critiques of power. Instead of yet more ethics principles (that tend to say the same or similar things anyway), we offer four ‘pillars’ on which Good Data AI can be built: community, rights, usability and politics. Overall we view AI’s ‘goodness’ as an explicitly political (economy) question of power and one which is always related to the degree to which AI is created and used to increase the wellbeing of society and especially to increase the power of the most marginalized and disenfranchised. We offer recommendations and remedies towards implementing ‘better’ approaches towards AI. Our strategies enable a different (but complementary) kind of evaluation of AI as part of the broader socio-technical systems in which AI is built and deployed….(More)”.

A new intelligence paradigm: how the emerging use of technology can achieve sustainable development (if done responsibly)


Peter Addo and Stefaan G. Verhulst in The Conversation: “….This month, the GovLab and the French Development Agency (AFD) released a report looking at precisely these possibilities. “Emerging Uses of Technology for Development: A New Intelligence Paradigm” examines how development practitioners are experimenting with emerging forms of technology to advance development goals. It considers when practitioners might turn to these tools and provides some recommendations to guide their application.

Broadly, the report concludes that experiments with new technologies in development have produced value and offer opportunities for progress. These technologies – which include data intelligence, artificial intelligence, collective intelligence, and embodied intelligence tools – are associated with different prospective benefits and risks. It is essential they be informed by design principles and practical considerations.

Four intelligences

The report derives its conclusions from an analysis of dozens of projects across Africa, including in Senegal, Tanzania and Uganda. Linking practice and theory, this approach allows us to construct a conceptual framework that helps development practitioners allocate resources and make key decisions based on their specific circumstances. We call this framework the “four intelligences” paradigm; it offers a way to make sense of how new and emerging technologies intersect with the development field….(More)” (Full Report).