Afghan people face an impossible choice over their digital footprint


Nighat Dad at New Scientist: “The swift progress of the Taliban in Afghanistan has been truly shocking…Though the Taliban spokesperson Zabihullah Mujahid told a press conference that they wouldn’t be seeking ‘revenge’ against people who had opposed them, many Afghan people are understandably still worried. On top of this, they — including those who worked with Western forces and international NGOs, as well as foreign journalists — have been unable to leave the country, as flight capacity has been taken over by Western countries evacuating their citizens.

As such, people have been attempting to move quickly to erase their digital footprints, built up during the 20 years of the previous US-backed governments. Some Afghan activists have been reaching out to me directly to help them put in place robust mobile security and asking how to trigger a mass deletion of their data.

The last time the Taliban was in power, social media barely existed and smartphones had yet to take off. Now, around 4 million people in Afghanistan regularly use social media. Yet, despite the huge rise of digital technologies, a comparable rise in digital security hasn’t happened.

There are few digital security resources suitable for people in Afghanistan to use. The leading guide on how to properly delete your digital history, from Human Rights First, is a brilliant place to start. But unfortunately it is available only in English and, unofficially, in Farsi. There are also some other guides available in Farsi thanks to the thriving community of tech enthusiasts who have been working for human rights activists living in Iran for years.

However, many of these guides will still be unintelligible for those in Afghanistan who speak Dari or Pashto, for example…

People in Afghanistan who worked with Western forces also face an impossible choice as countries where they might seek asylum often require digital proof of their collaboration. Keep this evidence and they risk persecution from the Taliban, delete it and they may find their only way out no longer available.

Millions of people’s lives will now be vastly different due to the regime change. Digital security feels like one thing that could have been sorted out in advance. We are yet to see exactly how Taliban 2.0 will be different to that which went before. And while the so-called War on Terror appears to be over, I fear a digital terror offensive may just be beginning…(More).

Mathematicians are deploying algorithms to stop gerrymandering


Article by Siobhan Roberts: “The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
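Mattingly's observation that the same set of votes can produce very different outcomes under different maps can be seen in a toy sketch. The precinct totals and district plans below are invented for illustration:

```python
# A toy illustration, with invented numbers: identical precinct-level
# votes, two different district maps, very different seat counts.

# (party A votes, party B votes) in six equal-sized precincts
votes = {
    "p1": (90, 10), "p2": (80, 20), "p3": (60, 40),
    "p4": (45, 55), "p5": (40, 60), "p6": (25, 75),
}

def seats_won_by_a(plan):
    """Count the districts in which party A out-polls party B."""
    wins = 0
    for district in plan:
        a_total = sum(votes[p][0] for p in district)
        b_total = sum(votes[p][1] for p in district)
        if a_total > b_total:
            wins += 1
    return wins

# Plan 1 pairs A's strongholds with its weakest precincts.
plan_spread = [("p1", "p6"), ("p2", "p5"), ("p3", "p4")]
# Plan 2 "packs" A's voters into a single overwhelming district.
plan_packed = [("p1", "p2"), ("p3", "p6"), ("p4", "p5")]

print(seats_won_by_a(plan_spread))  # 3 of 3 districts for A
print(seats_won_by_a(plan_packed))  # 1 of 3 districts for A
```

Party A carries 340 of the 600 votes in both cases; only the district boundaries change, and with them the translation of votes into seats.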

Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022….(More)”.

The Illusion of Inclusion — The “All of Us” Research Program and Indigenous Peoples’ DNA


Article by Keolu Fox: “Raw data, including digital sequence information derived from human genomes, have in recent years emerged as a top global commodity. This shift is so new that experts are still evaluating what such information is worth in a global market. In 2018, the direct-to-consumer genetic-testing company 23andMe sold access to its database containing digital sequence information from approximately 5 million people to GlaxoSmithKline for $300 million. Earlier this year, 23andMe partnered with Almirall, a Spanish drug company that is using the information to develop a new anti-inflammatory drug for autoimmune disorders. This move marks the first time that 23andMe has signed a deal to license a drug for development.

Eighty-eight percent of people included in large-scale studies of human genetic variation are of European ancestry, as are the majority of participants in clinical trials. Corporations such as Geisinger Health System, Regeneron Pharmaceuticals, AncestryDNA, and 23andMe have already mined genomic databases for the strongest genotype–phenotype associations. For the field to advance, a new approach is needed. There are many potential ways to improve existing databases, including “deep phenotyping,” which involves collecting precise measurements from blood panels, questionnaires, cognitive surveys, and other tests administered to research participants. But this approach is costly and physiologically and mentally burdensome for participants. Another approach is to expand existing biobanks by adding genetic information from populations whose genomes have not yet been sequenced — information that may offer opportunities for discovering globally rare but locally common population-specific variants, which could be useful for identifying new potential drug targets.

Many Indigenous populations have been geographically isolated for tens of thousands of years. Over time, these populations have developed adaptations to their environments that have left specific variant signatures in their genomes. As a result, the genomes of Indigenous peoples are a treasure trove of unexplored variation. Some of this variation will inevitably be identified by programs like the National Institutes of Health (NIH) “All of Us” research program. NIH leaders have committed to the idea that at least 50% of this program’s participants should be members of underrepresented minority populations, including U.S. Indigenous communities (Native Americans, Alaska Natives, and Native Hawaiians), a decision that explicitly connects diversity with the program’s goal of promoting equal enjoyment of the future benefits of precision medicine.

But there are reasons to believe that this promise may be an illusion….(More)”.

Manifesto Destiny


Essay by Lidija Haas: “Manifesto is the form that eats and repeats itself. Always layered and paradoxical, it comes disguised as nakedness, directness, aggression. An artwork aspiring to be a speech act—like a threat, a promise, a joke, a spell, a dare. You can’t help but thrill to language that imagines it can get something done. You also can’t help noticing the similar demands and condemnations that ring out across the decades and the centuries—something will be swept away or conjured into being, and it must happen right this moment. While appearing to invent itself ex nihilo, the manifesto grabs whatever magpie trinkets it can find, including those that drew the eye in earlier manifestos. This is a form that asks readers to suspend their disbelief, and so like any piece of theater, it trades on its own vulnerability, invites our complicity, as if only the quality of our attention protects it from reality’s brutal puncture. A manifesto is a public declaration of intent, a laying out of the writer’s views (shared, it’s implied, by at least some vanguard “we”) on how things are and how they should be altered. Once the province of institutional authority, decrees from church or state, the manifesto later flowered as a mode of presumption and dissent. You assume the writer stands outside the halls of power (or else, occasionally, chooses to pose and speak from there). Today the US government, for example, does not issue manifestos, lest it sound both hectoring and weak. The manifesto is inherently quixotic—spoiling for a fight it’s unlikely to win, insisting on an outcome it lacks the authority to ensure.

Somewhere a manifesto is always being scrawled, but the ones that survive have usually proliferated at times of ferment and rebellion, like the pamphlets of the Diggers in seventeenth-century England, or the burst of exhortations that surrounded the French Revolution, including, most memorably, Mary Wollstonecraft’s 1792 A Vindication of the Rights of Woman. The manifesto is a creature of the Enlightenment: its logic depends on ideals of sovereign reason, social progress, a universal subject on whom equal rights should (must) be bestowed. Still unsurpassed as a model (for style, force, economy, ambition) is Marx and Engels’s 1848 Communist Manifesto, crammed with killer lines, which Marshall Berman called “the first great modernist work of art.” In its wake came the Futurists—“We wish to destroy museums, libraries, academies of any sort, and fight against moralism, feminism, and every kind of materialistic, self-serving cowardice”—and the great flood of manifestos by artists, activists, and other renegades in the decades after 1910, followed by another peak in the 1960s and ’70s.

After that point, fewer broke through the general noise, though those that have lasted cast a weird light back on what came before: Donna J. Haraway’s postmodern 1985 “A Cyborg Manifesto,” for instance, in refusing fantasies of wholeness, purity, full communication—“The feminist dream of a common language, like all dreams . . . of perfectly faithful naming of experience, is a totalizing and imperialist one”—presents the manifesto as a form that can speak from the corner of its mouth, that always says more and less than it appears to say, that teases and exaggerates, that usefully undermines itself. Haraway makes an explicit case for “serious play” and for irreconcilable contradictions, introducing her “effort to build an ironic political myth faithful to feminism, socialism, and materialism. . . . More faithful as blasphemy is faithful, than as reverent worship and identification.” By directly announcing its own tricksiness (an extra contradiction in itself), “A Cyborg Manifesto” seems both to critique its predecessors and to hint that even the most overweening of them were never quite designed to be read straight….(More)”.

Generationalism is bad science


Essay by Cort W Rudolph: “Millennials – the much-maligned generation of people who, according to the Pew Research Center, were born between 1981 and 1996 – started turning 40 this year. This by itself is not very remarkable, but a couple of related facts bear consideration. In the United States, legislation that protects ‘older workers’ from discrimination applies to those aged 40 and over. There is a noteworthy irony here: a group of people who have long been branded negatively by their elders and accused of ‘killing’ cultural institutions ranging from marriage to baseball to marmalade are now considered ‘older’ in the eyes of the government. Inevitably, the latest round of youngsters grows up, complicating the stereotypes attached to them in youth. More importantly, though, the concept of a discrete generation of ‘millennials’ – like that of the ‘Generation X’ that preceded these people, the ‘Generation Z’ that will soon follow them into middle adulthood, and indeed the entire notion of ‘generations’ – is completely made up….

The lack of evidence stems primarily from the fact that there is no research methodology that would allow us to unambiguously identify generations, let alone study whether there are differences between them. We must fall back on theory and logic to parse whether what we see is due to generations or some other phenomenon related to age or the passing of time. In our research, my colleagues and I have suggested that, owing to these limitations, there has never actually been a genuine study of generations.

Generations create a lens through which we interact with others, shaping various forms of social behaviour

Generally, when researchers seek to identify generations, they consider the year in which people were born (their ‘cohort’) as a proxy for their generation. This practice has become well established and is usually not questioned. To form generations using this approach, people are rather arbitrarily grouped together into a band of birth years (for example, members of one generation are born between 19XX and 20YY, whereas members of the next generation are born between 20YY and 20XX, etc). The problem with doing this, especially when people are studied only at a single point in time, is that it is impossible to separate the apparent influence of one’s birth year (being part of a certain ‘generation’) from how old one is at the time of the study. This means that studies that purport to offer evidence for generational differences could just as easily be showing the effects of being a particular age – a 25-year-old is likely to think and act differently than a 45-year-old does, regardless of the ‘generation’ they belong to.

Alternatively, some studies adopt a ‘cross-temporal’ approach to studying generations and attempt to hold the effect of age constant (for example, comparing 18-year-olds surveyed in 1980, 1990, 2000, 2010, etc). The issue with this approach is that any effects of living at a particular time (eg, 2010) – on political attitudes, for example – are now easily misconstrued as effects of having been born in a certain year. As such, we again cannot unambiguously attribute the findings to generational membership. This is a well-known issue. Indeed, nearly every study that has ever tried to investigate generations falls into some form of this trap.
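The confound in both designs reduces to a single arithmetic identity: birth year = survey year - age. Fix any one of the three terms and the other two move in lockstep, so their effects cannot be separated. A minimal sketch, with all numbers invented:

```python
# The age-period-cohort identity behind both research designs described
# above: birth year = survey year - age. Survey years and ages invented.

def birth_year(survey_year, age):
    return survey_year - age

# Cross-sectional design: one survey year, varying ages. Birth year
# ("generation") is fully determined by age, so the two are inseparable.
cross_sectional = [(2021, age) for age in (25, 45, 65)]
print([birth_year(y, a) for y, a in cross_sectional])  # [1996, 1976, 1956]

# Cross-temporal design: age held at 18, varying survey years. Birth
# year is now fully determined by the year of measurement instead.
cross_temporal = [(year, 18) for year in (1980, 2000, 2020)]
print([birth_year(y, a) for y, a in cross_temporal])  # [1962, 1982, 2002]
```

In either design, a regression can estimate at most two of the three effects; the third is a linear combination of the others, which is why no study of this kind can unambiguously attribute a difference to "generation."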

Recently, the National Academies of Sciences, Engineering, and Medicine in the US published the results of a consensus study on the idea of generations and generational differences at work. The conclusions of this study were clear and direct: there is little credible scientific evidence to back up the idea of generations and generational differences, and the (mis)application of these ideas has the potential to detrimentally affect people regardless of their age.

Where does this leave us? Absent evidence, or a valid way of disentangling the complexities of generations through research, what do we do with the concept of generations? Recognising these challenges, we can shift the focus from understanding the supposed natures of generations to understanding the existence and persistence of generational concepts and beliefs. My colleagues and I have advanced the argument that generations exist because they are willed into being. In other words, generations are socially constructed through discourse on ageing in society; they exist because we establish them, label them, ascribe traits to them, and then promote and legitimise them through various media channels (eg, books, magazines, and even film and television), general discourse and through more formalised policy guidance….(More)”

Remove obstacles to sharing health data with researchers outside of the European Union


Heidi Beate Bentzen et al in Nature: “International sharing of pseudonymized personal data among researchers is key to the advancement of health research and is an essential prerequisite for studies of rare diseases or subgroups of common diseases to obtain adequate statistical power.

Pseudonymized personal data are data on which identifiers such as names are replaced by codes. Research institutions keep the ‘code key’ that can link an individual person to the data securely and separately from the research data and thereby protect privacy while preserving the usefulness of data for research. Pseudonymized data are still considered personal data under the General Data Protection Regulation (GDPR) 2016/679 of the European Union (EU) and, therefore, international transfers of such data need to comply with GDPR requirements. Although the GDPR does not apply to transfers of anonymized data, the threshold for anonymity under the GDPR is very high; hence, rendering data anonymous to the level required for exemption from the GDPR can diminish the usefulness of the data for research and is often not even possible.
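The scheme described here, replacing identifiers with codes and holding the code key apart from the research data, can be sketched in a few lines. The names, record fields, and code format below are invented for illustration; the point is that the research data and the code key are separable artifacts:

```python
# Minimal sketch of pseudonymization as described above: direct
# identifiers are replaced by random codes, and the code key that links
# codes back to individuals is stored apart from the research data.
# Names, fields, and the code format are invented for illustration.
import secrets

def pseudonymize(records):
    """Split identified records into (research_data, code_key)."""
    code_key = {}       # to be held securely, separate from research data
    research_data = []
    for record in records:
        code = secrets.token_hex(8)
        code_key[code] = record["name"]
        research_data.append({"id": code, "diagnosis": record["diagnosis"]})
    return research_data, code_key

records = [{"name": "Alice", "diagnosis": "T2D"},
           {"name": "Bob", "diagnosis": "RA"}]
data, key = pseudonymize(records)
# 'data' contains no names; re-identifying anyone requires 'key'.
```

Because the key still exists, the research data here remain personal data under the GDPR; only destroying the key (and every other route to re-identification) would move the data toward anonymity, at the cost to research usefulness the authors describe.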

The GDPR requires that transfers of personal data to international organizations or countries outside the European Economic Area (EEA)—which comprises the EU Member States plus Iceland, Liechtenstein and Norway—be adequately protected. Over the past two years, it has become apparent that challenges emerge for the sharing of data with public-sector researchers in a majority of countries outside of the EEA, as only a few decisions stating that a country offers an adequate level of data protection have so far been issued by the European Commission. This is a problem, for example, with researchers at federal research institutions in the United States. Transfers to international organizations such as the World Health Organization are similarly affected. Because these obstacles ultimately affect patients as beneficiaries of research, solutions are urgently needed. The European scientific academies have recently published a report explaining the consequences of stalled data transfers and pushing for responsible solutions…(More)”.

Philanthropy Can Help Communities Weed Out Inequity in Automated Decision Making Tools


Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants using housing vouchers.

The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.

Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.

How these tools are built, used, and governed will help shape the opportunities of everyday citizens, for good or ill.

Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn more about what needs to be done to weed out bias and inequities in automated decision-making tools — and recently produced a report about how to harness their potential to promote equity and social good.

Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:

Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess their fairness….(More)”

The people’s panopticon: Open-source intelligence comes of age


The Economist: “The great hope of the 1990s and 2000s was that the internet would be a force for openness and freedom. As Stewart Brand, a pioneer of online communities, put it: “Information wants to be free, because the cost of getting it out is getting lower and lower all the time.” It was not to be. Bad information often drove out good. Authoritarian states co-opted the technologies that were supposed to loosen their grip. Information was wielded as a weapon of war. Amid this disappointment one development offers cause for fresh hope: the emerging era of open-source intelligence (OSINT).

New sensors, from humdrum dashboard cameras to satellites that can see across the electromagnetic spectrum, are examining the planet and its people as never before. The information they collect is becoming cheaper. Satellite images cost several thousand dollars 20 years ago; today they are often provided free and are of incomparably higher quality. A photograph of any spot on Earth, of a stricken tanker, or of the routes taken by joggers in a city is available with a few clicks. And online communities and collaborative tools, like Slack, enable hobbyists and experts to use this cornucopia of information to solve riddles and unearth misdeeds with astonishing speed.

Human Rights Watch has analysed satellite imagery to document ethnic cleansing in Myanmar. Nanosatellites tag the automatic identification system of vessels that are fishing illegally. Amateur sleuths have helped Europol, the European Union’s policing agency, investigate child sexual exploitation by identifying geographical clues in the background of photographs. Even hedge funds routinely track the movements of company executives in private jets, monitored by a web of amateurs around the world, to predict mergers and acquisitions.

OSINT thus bolsters civil society, strengthens law enforcement and makes markets more efficient. It can also humble some of the world’s most powerful countries.

In the face of vehement denials from the Kremlin, Bellingcat, an investigative group, meticulously demonstrated Russia’s role in the downing of Malaysian Airlines Flight MH17 over Ukraine in 2014, using little more than a handful of photographs, satellite images and elementary geometry. It went on to identify the Russian agents who attempted to assassinate Sergei Skripal, a former Russian spy, in England in 2018. Amateur analysts and journalists used OSINT to piece together the full extent of Uyghur internment camps in Xinjiang. In recent weeks researchers poring over satellite imagery have spotted China constructing hundreds of nuclear-missile silos in the desert.

Such an emancipation of information promises to have profound effects. The decentralised and egalitarian nature of OSINT erodes the power of traditional arbiters of truth and falsehood, in particular governments and their spies and soldiers. For those like this newspaper who believe that secrecy can too easily be abused by people in power, OSINT is welcome….(More)”.

Off-Label: How tech platforms decide what counts as journalism


Essay by Emily Bell: “…But putting a stop to militarized fascist movements—and preventing another attack on a government building—will ultimately require more than content removal. Technology companies need to fundamentally recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility.

The extraordinary power of tech platforms to decide what material is worth seeing—under the loosest possible definition of who counts as a “journalist”—has always been a source of tension with news publishers. These companies have now been put in the position of being held accountable for developing an information ecosystem based in fact. It’s unclear how much they are prepared to do, if they will ever really invest in pro-truth mechanisms on a global scale. But it is clear that, after the Capitol riot, there’s no going back to the way things used to be.

Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet the result of all these changes has been hard to examine, since the data is both scarce and incomplete. Gordon Crovitz—a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility—has been frustrated by the lack of transparency: “In Google, YouTube, Facebook, and Twitter we have institutions that we know all give quality ratings to news sources in different ways,” he told me. “But if you are a news organization and you want to know how you are rated, you can ask them how these systems are constructed, and they won’t tell you.” Consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. “The issue comes down to a fundamental failure to understand the core concepts of journalism,” Crovitz said.

Still, researchers have managed to put together a general picture of how technology companies handle various news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “we know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets and make certain designations, which are then used to develop algorithms for personalized news recommendations and news products….(More)”

It’s hard to be a moral person. Technology is making it harder.


Article by Sigal Samuel: “The idea of moral attention goes back at least as far as ancient Greece, where the Stoics wrote about the practice of attention (prosoché) as the cornerstone of a good spiritual life. In modern Western thought, though, ethicists didn’t focus too much on attention until a band of female philosophers came along, starting with Simone Weil.

Weil, an early 20th-century French philosopher and Christian mystic, wrote that “attention is the rarest and purest form of generosity.” She believed that to be able to properly pay attention to someone else — to become fully receptive to their situation in all its complexity — you need to first get your own self out of the way. She called this process “decreation,” and explained: “Attention consists of suspending our thought, leaving it detached, empty … ready to receive in its naked truth the object that is to penetrate it.”

Weil argued that plain old attention — the kind you use when reading novels, say, or birdwatching — is a precondition for moral attention, which is a precondition for empathy, which is a precondition for ethical action.

Later philosophers, like Iris Murdoch and Martha Nussbaum, picked up and developed Weil’s ideas. They garbed them in the language of Western philosophy; Murdoch, for example, appeals to Plato as she writes about the need for “unselfing.” But this central idea of “unselfing” or “decreation” is perhaps most reminiscent of Eastern traditions like Buddhism, which has long emphasized the importance of relinquishing our ego and training our attention so we can perceive and respond to others’ needs. It offers tools like mindfulness meditation for doing just that…(More)”