Afghan people face an impossible choice over their digital footprint


Nighat Dad at New Scientist: “The swift progress of the Taliban in Afghanistan has been truly shocking… Though the Taliban spokesperson Zabihullah Mujahid told a press conference that it wouldn’t be seeking “revenge” against people who had opposed them, many Afghan people are understandably still worried. On top of this, they — including those who worked with Western forces and international NGOs, as well as foreign journalists — have been unable to leave the country, as flight capacity has been taken over by Western countries evacuating their citizens.

As such, people have been attempting to move quickly to erase their digital footprints, built up during the 20 years of the previous US-backed governments. Some Afghan activists have been reaching out to me directly for help putting in place robust mobile security and asking how to trigger a mass deletion of their data.

The last time the Taliban was in power, social media barely existed and smartphones had yet to take off. Now, around 4 million people in Afghanistan regularly use social media. Yet, despite the huge rise of digital technologies, a comparable rise in digital security hasn’t happened.

There are few digital security resources that are suitable for people in Afghanistan to use. The leading guide on how to properly delete your digital history, published by Human Rights First, is a brilliant place to start, but unfortunately it is available only in English and, unofficially, in Farsi. There are also some other guides available in Farsi, thanks to the thriving community of tech enthusiasts who have been working for human rights activists living in Iran for years.

However, many of these guides will still be unintelligible to those in Afghanistan who speak Dari or Pashto, for example…

People in Afghanistan who worked with Western forces also face an impossible choice, as countries where they might seek asylum often require digital proof of their collaboration. Keep this evidence and they risk persecution by the Taliban; delete it and they may find their only way out is no longer available.

Millions of people’s lives will now be vastly different because of the regime change. Digital security feels like one thing that could have been sorted out in advance. We have yet to see exactly how Taliban 2.0 will differ from what went before. And while the so-called War on Terror appears to be over, I fear a digital terror offensive may just be beginning…(More)”.

Remove obstacles to sharing health data with researchers outside of the European Union


Heidi Beate Bentzen et al. in Nature: “International sharing of pseudonymized personal data among researchers is key to the advancement of health research and is an essential prerequisite for studies of rare diseases or subgroups of common diseases to obtain adequate statistical power.

Pseudonymized personal data are data on which identifiers such as names are replaced by codes. Research institutions keep the ‘code key’ that can link an individual person to the data securely and separately from the research data and thereby protect privacy while preserving the usefulness of data for research. Pseudonymized data are still considered personal data under the General Data Protection Regulation (GDPR) 2016/679 of the European Union (EU) and, therefore, international transfers of such data need to comply with GDPR requirements. Although the GDPR does not apply to transfers of anonymized data, the threshold for anonymity under the GDPR is very high; hence, rendering data anonymous to the level required for exemption from the GDPR can diminish the usefulness of the data for research and is often not even possible.

The GDPR requires that transfers of personal data to international organizations or countries outside the European Economic Area (EEA)—which comprises the EU Member States plus Iceland, Liechtenstein and Norway—be adequately protected. Over the past two years, it has become apparent that challenges emerge for the sharing of data with public-sector researchers in a majority of countries outside of the EEA, as only a few decisions stating that a country offers an adequate level of data protection have so far been issued by the European Commission. This is a problem, for example, with researchers at federal research institutions in the United States. Transfers to international organizations such as the World Health Organization are similarly affected. Because these obstacles ultimately affect patients as beneficiaries of research, solutions are urgently needed. The European scientific academies have recently published a report explaining the consequences of stalled data transfers and pushing for responsible solutions…(More)”.
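To make the pseudonymization approach described above concrete, here is a minimal illustrative sketch (not taken from the paper; the record fields and key handling shown are assumptions for demonstration only). Direct identifiers are swapped for random codes, and the resulting code key, which alone permits re-identification, is kept separately from the research data.

```python
import secrets

def pseudonymize(records, identifier_field="name"):
    """Replace direct identifiers with random codes.

    Returns (pseudonymized_records, code_key). The code key maps each code
    back to the original identifier and must be stored securely and
    separately from the research data set.
    """
    code_key = {}
    pseudonymized = []
    for record in records:
        code = secrets.token_hex(8)           # random pseudonym, e.g. 'a3f1...'
        code_key[code] = record[identifier_field]
        new_record = dict(record)
        new_record[identifier_field] = code   # identifier replaced by the code
        pseudonymized.append(new_record)
    return pseudonymized, code_key

# Hypothetical usage: the pseudonymized records can be analysed or shared
# without the code key, while re-identification remains possible only for
# whoever holds the key.
patients = [{"name": "Jane Doe", "diagnosis": "rare disease X"}]
research_data, code_key = pseudonymize(patients)
```

Because the code key still allows re-identification, data treated this way remain personal data under the GDPR, which is why international transfers of pseudonymized research data still require GDPR-compliant safeguards.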

The controversy over the term ‘citizen science’


CBC News: “The term citizen science has been around for decades. Its original definition, coined in the 1990s, refers to institution-guided projects that invite the public to contribute to scientific knowledge in all kinds of ways, from the cataloguing of plants, animals and insects in people’s backyards to watching space.

Anyone is invited to participate in citizen science, regardless of whether they have an academic background in the sciences, and every year these projects number in the thousands. 

Recently, however, some large institutions, scientists and community members have proposed replacing the term citizen science with “community science.” 

Those in favour of the terminology change — such as eBird, one of the world’s largest biodiversity databases — say they want to avoid using the word citizen. They do so because they want to be “welcoming to any birder or person who wants to learn more about bird watching, regardless of their citizen status,” said Lynn Fuller, an eBird spokesperson, in a news release earlier this year. 

Some argue that while the intention is valid, the term community science already holds another definition — namely, projects that gather different groups of people around environmental justice, with a focus on social action.

To add to the confusion, renaming citizen science could affect policies and legislation that have been established in countries such as the U.S. and Canada to support citizen science projects and efforts.

For example, if we suddenly decided to call all species of birds “waterbirds,” then the specific meaning of this category of bird species that lives on or around water would eventually be lost. This would, in turn, make communication between people and the various fields of science incredibly difficult. 

A paper published in Science magazine last month pointed out some of the reasons why rebranding citizen science in the name of inclusion could backfire. 

Caren Cooper, a professor of forestry and environmental resources at North Carolina State University and one of the authors of the paper, said that the term citizen science was never meant to imply that people need a certain citizenship status to participate in such projects.

Rather, citizen science is meant to convey the idea of responsibilities and rights to access science. 

She said there are other terms being used to describe this meaning, including “public science, participatory science [and] civic science.”

Chris Hawn, a professor of geography and environmental systems at the University of Maryland Baltimore County and one of Cooper’s co-authors, said that being aware of the need for change is a good first step, but any decision to rename should be made carefully…(More)”.

Indigenous Peoples Rise Up: The Global Ascendency of Social Media Activism


Book edited by Bronwyn Carlson and Jeff Berglund: “…illustrates the impact of social media in expanding the nature of Indigenous communities and social movements. Social media has bridged distance, time, and nation states to mobilize Indigenous peoples to build coalitions across the globe and to stand in solidarity with one another. These movements have succeeded and gained momentum and traction precisely because of the strategic use of social media. Social media—Twitter and Facebook in particular—has also served as a platform for fostering health, well-being, and resilience, recognizing Indigenous strength and talent, and sustaining and transforming cultural practices when great distances divide members of the same community.
 
Including a range of international Indigenous voices from the US, Canada, Australia, Aotearoa (New Zealand) and Africa, the book takes an interdisciplinary approach, bridging Indigenous studies, media studies, and social justice studies. With examples such as Idle No More in Canada, Australia’s Recognise! campaign, and social media campaigns to maintain the Māori language, Indigenous Peoples Rise Up serves as one of the first studies of Indigenous social media use and activism…(More)”.

Designing data collaboratives to better understand human mobility and migration in West Africa



“The Big Data for Migration Alliance (BD4M) has released the report, “Designing Data Collaboratives to Better Understand Human Mobility and Migration in West Africa,” providing findings from a first-of-its-kind rapid co-design and prototyping workshop, or “Studio.” The first BD4M Studio convened over 40 stakeholders in government, international organizations, research, civil society, and the public sector to develop concrete strategies for developing and implementing cross-sectoral data partnerships, or “data collaboratives,” to improve ethical and secure access to data for migration-related policymaking and research in West Africa.

BD4M is an effort spearheaded by the International Organization for Migration’s Global Migration Data Analysis Centre (IOM GMDAC), the European Commission’s Joint Research Centre (JRC), and The GovLab to accelerate the responsible and ethical use of novel data sources and methodologies—such as social media, mobile phone data, satellite imagery, and artificial intelligence—to support migration-related programming and policy at the global, national, and local levels.

The BD4M Studio was informed by The Migration Domain of The 100 Questions Initiative — a global agenda-setting exercise to define the most impactful questions related to migration that could be answered through data collaboration. Inspired by the outputs of The 100 Questions, Studio participants designed data collaboratives that could produce answers to three key questions: 

  1. How can data be used to estimate current cross-border migration and mobility by sex and age in West Africa?
  2. How can data be used to assess the current state of diaspora communities and their migration behavior in the region?
  3. How can we use data to better understand the drivers of migration in West Africa?…(More)”

Developing a Responsible and Well-designed Governance Structure for Data Marketplaces


WEF Briefing Paper: “… extracts insights from discussions with thought leaders and experts to serve as a point of departure for governments and other members of the global community in discussing governance structures and regulatory frameworks for Data Marketplace Service Providers (DMSPs), the trusted third parties that operate and manage data exchanges, across a wide range of jurisdictions. As decision-makers globally develop data marketplace solutions suited to their unique cultural nuances and needs, this paper offers insights into the key governance issues to get right, with global interoperability and adaptability in mind…(More)”.

Off-Label: How tech platforms decide what counts as journalism


Essay by Emily Bell: “…But putting a stop to militarized fascist movements—and preventing another attack on a government building—will ultimately require more than content removal. Technology companies need to fundamentally recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility.

The extraordinary power of tech platforms to decide what material is worth seeing—under the loosest possible definition of who counts as a “journalist”—has always been a source of tension with news publishers. These companies have now been put in the position of being held accountable for developing an information ecosystem based in fact. It’s unclear how much they are prepared to do, or whether they will ever really invest in pro-truth mechanisms on a global scale. But it is clear that, after the Capitol riot, there’s no going back to the way things used to be.

Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet the result of all these changes has been hard to examine, since the data is both scarce and incomplete. Gordon Crovitz—a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility—has been frustrated by the lack of transparency: “In Google, YouTube, Facebook, and Twitter we have institutions that we know all give quality ratings to news sources in different ways,” he told me. “But if you are a news organization and you want to know how you are rated, you can ask them how these systems are constructed, and they won’t tell you.” Consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. “The issue comes down to a fundamental failure to understand the core concepts of journalism,” Crovitz said.

Still, researchers have managed to put together a general picture of how technology companies handle various news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “we know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets and make certain designations, which are then used to develop algorithms for personalized news recommendations and news products….(More)”
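The essay does not detail how any platform’s ranking actually works, but a minimal sketch of the general mechanism Grygiel describes, an internal quality designation per outlet blended into personalized recommendations, might look like the following (the outlet names, scores and weighting are purely hypothetical assumptions):

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    outlet: str
    relevance: float  # per-user personalization signal, 0..1

# Hypothetical internal designations for news sources (a "taxonomy"), 0..1.
OUTLET_QUALITY = {"outlet_a": 0.9, "outlet_b": 0.4, "outlet_c": 0.1}

def rank(articles, quality_weight=0.5):
    """Blend per-user relevance with each outlet's quality designation.

    Raising or lowering quality_weight effectively "dials up or down"
    the exposure of outlets according to their internal rating.
    """
    def score(a):
        quality = OUTLET_QUALITY.get(a.outlet, 0.5)  # unknown outlets get a neutral score
        return (1 - quality_weight) * a.relevance + quality_weight * quality

    return sorted(articles, key=score, reverse=True)

# Hypothetical usage: the same two stories rank differently depending on
# how heavily the outlet designation is weighted.
feed = rank([
    Article("Story 1", "outlet_b", relevance=0.8),
    Article("Story 2", "outlet_a", relevance=0.6),
])
```

The point of the sketch is simply that whoever sets the designations and the weight determines which outlets surface, and neither is visible to publishers, which is why Crovitz and others press for transparency about how those ratings are constructed.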

Human Rights Are Not A Bug: Upgrading Governance for an Equitable Internet


Report by Niels ten Oever: “COVID-19 showed how essential the Internet is, as people around the globe searched for critical health information, kept up with loved ones and worked remotely. All of this relied on an often unseen Internet infrastructure, consisting of myriad devices, institutions, and standards that kept them connected.

But who governs the patchwork that enables this essential utility? Internet governance organizations like the Internet Engineering Task Force develop the technical foundations of the Internet. Their decisions are high stakes, and impact security, access to information, freedom of expression and other human rights. Yet they can only set voluntary norms and protocols for industry behavior, and there is no central authority to ensure that standards are implemented correctly. Further, while Internet governance bodies are open to all sectors, they are dominated by the transnational corporations that own and operate much of the infrastructure. Thus our increasingly digital daily lives are defined by the interests of corporations, not the public interest….

In this comprehensive, field-setting report published with the support of the Ford Foundation, Niels ten Oever, a postdoctoral researcher in Internet infrastructure at the University of Amsterdam, unpacks these governance flaws and examines their human consequences, from speed and access to the security and privacy of online information. The report details how these flaws especially impact those who are already subject to surveillance or structural inequities, such as an activist texting meeting times on WhatsApp, or a low-income senior looking for a vaccine appointment…(More)”.

Ethical Governance of Artificial Intelligence in the Public Sector


Book by Liza Ireni-Saban and Maya Sherman: “This book argues that ethical evaluation of AI should be an integral part of public service ethics and that an effective normative framework is needed to provide ethical principles and evaluation for decision-making in the public sphere, at both local and international levels.

It shows how the tenets of prudential rationality ethics, through critical engagement with intersectionality, can help negotiate the challenges created by technological innovations in AI and afford a relational, interactive, flexible and fluid framework suited to the features of AI research projects, so that core public and individual values are still honoured in the face of technological development…(More)”.

Making life richer, easier and healthier: Robots, their future and the roles for public policy


OECD Paper: “This paper addresses the current and emerging uses and impacts of robots, the mid-term future of robotics and the role of policy. Progress in robotics will help to make life easier, richer and healthier. Wider robot use will help raise labour productivity. As science and engineering progress, robots will become more central to crisis response, from helping combat infectious diseases to maintaining critical infrastructure. Governments can accelerate and orient the development and uptake of socially valuable robots, for instance by: supporting cross-disciplinary R&D, facilitating research commercialisation, helping small and medium-sized enterprises (SMEs) understand the opportunities for investment in robots, supporting platforms that highlight robot solutions in healthcare and other sectors, embedding robotics engineering in high school curricula, tailoring training for workers with vocational-level mechanical skills, supporting data development useful to robotics, ensuring flexible regulation conducive to innovation, strengthening digital connectivity, and raising awareness of the importance of robotics…(More)”.