A Really Bad Blockchain Idea: Digital Identity Cards for Rohingya Refugees


Wayan Vota at ICTworks: “The Rohingya Project claims to be a grassroots initiative that will empower Rohingya refugees with a blockchain-leveraged financial ecosystem tied to digital identity cards….

What Could Possibly Go Wrong?

Concerns about Rohingya data collection are not new, so Linda Raftree’s Facebook post about blockchain for biometrics started a spirited discussion on this escalation of techno-utopianism. Several people put forth great points about the Rohingya Project’s potential failings. For me, four key questions originating in the discussion stood out as ones we should all be debating:

1. Who Determines Ethnicity?

Ethnicity isn’t a scientific way to categorize humans. Ethnic groups are based on human constructs such as common ancestry, language, society, culture, or nationality. Who is the Rohingya Project to determine who is or is not Rohingya? And what is this rigorous assessment they have that will do what science cannot?

Might it be better not to perpetuate the very divisions that cause these issues? Or at the very least, let people self-determine their own ethnicity.

2. Why Digitally Identify Refugees?

Let’s say that we could group a people based on objective metrics. Should we? Especially if that group is persecuted where it currently lives and in many of its surrounding countries? Wouldn’t making a list of who is persecuted be a handy reference for those who seek to persecute more?

Instead, shouldn’t we focus on changing the mindset of the persecutors and stop the persecution?

3. Why Blockchain for Biometrics?

How could linking a highly persecuted people’s biometric information, such as fingerprints, iris scans, and photographs, to a public, universal, and immutable distributed ledger be a good thing?
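The danger of "immutable" here is concrete: an append-only ledger cannot quietly forget an entry once it is written. The toy sketch below (purely illustrative; it is not any real blockchain and the record strings are invented) shows why: each block commits to the hash of the one before it, so removing or altering an early biometric record breaks every later link.

```python
import hashlib

# Toy append-only ledger: each block commits to the previous block's hash.
# Illustrative sketch only -- not a real blockchain; record strings invented.

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

GENESIS = "0" * 64

ledger = []
prev = GENESIS
for record in ["fingerprint:ab12", "iris:cd34", "photo:ef56"]:
    h = block_hash(prev, record)
    ledger.append((record, h))
    prev = h

def is_valid(chain) -> bool:
    """Re-derive every hash; any edit or deletion upstream breaks the chain."""
    prev = GENESIS
    for record, h in chain:
        if block_hash(prev, record) != h:
            return False
        prev = h
    return True

# Trying to scrub the first biometric record invalidates the whole ledger:
tampered = [("fingerprint:REDACTED", ledger[0][1])] + ledger[1:]

print(is_valid(ledger))    # True
print(is_valid(tampered))  # False
```

A paper card can be destroyed by its owner; a record woven into a hash chain like this can only be repudiated by invalidating everything written after it, which is exactly the property the excerpt is worried about.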

Might it be highly irresponsible to digitize all that information? Couldn’t that data be used by nefarious actors to perpetuate new and worse exploitation of the Rohingya? India has already lost Aadhaar data, and Equifax lost Americans’ data. How will the small, lightly funded Rohingya Project do better?

Could it be possible that old-fashioned paper forms are a better solution than digital identity cards? Maybe laminate them for greater durability, but paper identity cards can be hidden, even destroyed if needed, to conceal information that could be used against the owner.

4. Why Experiment on the Powerless?

Rohingya refugees already suffer from massive power imbalances, and now they’ll be asked to give up their digital privacy and use experimental technology, as part of an NGO’s experiment, in order to get needed services.

It’s not like they’ll have the agency to say no. They are homeless, often penniless refugees who will probably have no realistic way to opt out of digital identity cards, even if they don’t want to be experimented on while they flee persecution….(More)”

Is your software racist?


Li Zhou at Politico: “Late last year, a St. Louis tech executive named Emre Şarbak noticed something strange about Google Translate. He was translating phrases from Turkish — a language that uses a single gender-neutral pronoun “o” instead of “he” or “she.” But when he asked Google’s tool to turn the sentences into English, they seemed to read like a children’s book out of the 1950s. The ungendered Turkish sentence “o is a nurse” would become “she is a nurse,” while “o is a doctor” would become “he is a doctor.”

The website Quartz went on to compose a sort-of poem highlighting some of these phrases; Google’s translation program decided that soldiers, doctors and entrepreneurs were men, while teachers and nurses were women. Overwhelmingly, the professions were male. Finnish and Chinese translations had similar problems of their own, Quartz noted.

What was going on? Google’s Translate tool “learns” language from an existing corpus of writing, and the writing often includes cultural patterns regarding how men and women are described. Because the model is trained on data that already has biases of its own, the results that it spits out serve only to further replicate and even amplify them.
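The amplification mechanism can be seen in miniature. The sketch below is a toy stand-in for a translator (it is not Google's actual system, and the six-sentence "corpus" is invented): it picks an English pronoun for the gender-neutral Turkish "o" by majority vote over its training data. Note how a 2-to-1 skew in the corpus becomes a 100% skew in the output: the minority pattern is erased entirely, which is amplification, not mere reflection.

```python
from collections import Counter

# Invented, deliberately skewed "training corpus" for the toy translator.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

def pronoun_for(profession: str) -> str:
    """Resolve the gender-neutral 'o' by majority vote over the corpus."""
    counts = Counter(
        sentence.split()[0]
        for sentence in corpus
        if sentence.endswith(profession)
    )
    # Majority vote discards the minority reading completely: a 2:1 skew
    # in the data becomes a 100% skew in every translation emitted.
    return counts.most_common(1)[0][0]

print(pronoun_for("doctor"))  # "he"
print(pronoun_for("nurse"))   # "she"
```

Real neural translation models are vastly more sophisticated, but the core failure mode is the same: when the model must commit to one output, statistical tendencies in the training text harden into categorical rules.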

It might seem strange that a seemingly objective piece of software would yield gender-biased results, but the problem is an increasing concern in the technology world. The term is “algorithmic bias” — the idea that artificially intelligent software, the stuff we count on to do everything from power our Netflix recommendations to determine our qualifications for a loan, often turns out to perpetuate social bias.

Voice-based assistants, like Amazon’s Alexa, have struggled to recognize different accents. A Microsoft chatbot on Twitter started spewing racist posts after learning from other users on the platform. In a particularly embarrassing example in 2015, a black computer programmer found that Google’s photo-recognition tool labeled him and a friend as “gorillas.”

Sometimes the results of hidden computer bias are insulting, other times merely annoying. And sometimes the effects are potentially life-changing….(More)”.

Our Hackable Political Future


Henry J. Farrell and Rick Perlstein at the New York Times: “….A program called Face2Face, developed at Stanford, films one person speaking, then manipulates that person’s image to resemble someone else’s. Throw in voice manipulation technology, and you can literally make anyone say anything — or at least seem to….

Another harrowing potential is the ability to trick the algorithms behind self-driving cars to not recognize traffic signs. Computer scientists have shown that nearly invisible changes to a stop sign can fool algorithms into thinking it says yield instead. Imagine if one of these cars contained a dissident challenging a dictator.
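The stop-sign attacks the excerpt alludes to work by nudging the input in exactly the direction the classifier is most sensitive to. The sketch below shows the idea on a hand-built linear classifier (illustrative only: the weights and features are invented, and real attacks target deep vision models rather than a dot product), in the spirit of fast-gradient-sign methods: each feature is shifted by a small fixed amount against the sign of its weight, and a confident "stop" becomes something else.

```python
# Toy gradient-sign-style attack on an invented linear classifier.
# score > 0 is read as "stop"; weights and features are made up.

w = [1.2, -0.8, 0.5, -1.5]   # classifier weights
x = [0.9, 0.1, 0.8, 0.2]     # feature vector of a genuine stop sign

def score(weights, features):
    return sum(wi * xi for wi, xi in zip(weights, features))

def sign(v):
    return 1.0 if v > 0 else -1.0

eps = 0.4  # small per-feature perturbation budget

# Push every feature slightly against its weight's sign to drive score down.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(score(w, x) > 0)      # True: the clean sign is classified "stop"
print(score(w, x_adv) > 0)  # False: a lightly perturbed sign is not
```

No single feature moved by more than 0.4, yet the decision flipped; in image space the analogous perturbation can be a scattering of small stickers that a human driver barely registers.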

In 2007, Barack Obama’s political opponents insisted that footage existed of Michelle Obama ranting against “whitey.” In the future, they may not have to worry about whether it actually existed. If someone called their bluff, they may simply be able to invent it, using data from stock photos and pre-existing footage.

The next step would be one we are already familiar with: the exploitation of the algorithms used by social media sites like Twitter and Facebook to spread stories virally to those most inclined to show interest in them, even if those stories are fake.

It might be impossible to stop the advance of this kind of technology. But the relevant algorithms here aren’t only the ones that run on computer hardware. They are also the ones that undergird our too easily hacked media system, where garbage acquires the perfumed scent of legitimacy with all too much ease. Editors, journalists and news producers can play a role here — for good or for bad.

Outlets like Fox News spread stories about the murder of Democratic staff members and F.B.I. conspiracies to frame the president. Traditional news organizations, fearing that they might be left behind in the new attention economy, struggle to maximize “engagement with content.”

This gives them a built-in incentive to spread informational viruses that enfeeble the very democratic institutions that allow a free media to thrive. Cable news shows consider it their professional duty to provide “balance” by giving partisan talking heads free rein to spout nonsense — or amplify the nonsense of our current president.

It already feels as though we are living in an alternative science-fiction universe where no one agrees on what is true. Just think how much worse it will be when fake news becomes fake video. Democracy assumes that its citizens share the same reality. We’re about to find out whether democracy can be preserved when this assumption no longer holds….(More)”.

Artificial intelligence and privacy


Report by the Norwegian Data Protection Authority (DPA): “…If people cannot trust that information about them is being handled properly, it may limit their willingness to share information – for example with their doctor, or on social media. If we find ourselves in a situation in which sections of the population refuse to share information because they feel that their personal integrity is being violated, we will be faced with major challenges to our freedom of speech and to people’s trust in the authorities.

A refusal to share personal information will also represent a considerable challenge with regard to the commercial use of such data in sectors such as the media, retail trade and finance services.

About the report

This report elaborates on the legal opinions and the technologies described in the 2014 report «Big Data – privacy principles under pressure». In this report we will provide greater technical detail in describing artificial intelligence (AI), while also taking a closer look at four relevant AI challenges associated with the data protection principles embodied in the GDPR:

  • Fairness and discrimination
  • Purpose limitation
  • Data minimisation
  • Transparency and the right to information

This represents a selection of the data protection concerns that, in our opinion, are most relevant to the use of AI today.

The target group for this report consists of people who work with, or who for other reasons are interested in, artificial intelligence. We hope that engineers, social scientists, lawyers and other specialists will find this report useful….(More) (Download Report)”.

The Qualified Self: Social Media and the Accounting of Everyday Life


Book by Lee Humphreys: “Social critiques argue that social media have made us narcissistic, that Facebook, Twitter, Instagram, and YouTube are all vehicles for me-promotion. In The Qualified Self, Lee Humphreys offers a different view. She shows that sharing the mundane details of our lives—what we ate for lunch, where we went on vacation, who dropped in for a visit—didn’t begin with mobile devices and social media. People have used media to catalog and share their lives for several centuries. Pocket diaries, photo albums, and baby books are the predigital precursors of today’s digital and mobile platforms for posting text and images. The ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.

Humphreys refers to diaries in which eighteenth-century daily life is documented with the brevity and precision of a tweet, and cites a nineteenth-century travel diary in which a young woman complains that her breakfast didn’t agree with her. Diaries, Humphreys explains, were often written to be shared with family and friends. Pocket diaries were as mobile as smartphones, allowing the diarist to record life in real time. Humphreys calls this chronicling, in both digital and nondigital forms, media accounting. The sense of self that emerges from media accounting is not the purely statistics-driven “quantified self,” but the more well-rounded qualified self. We come to understand ourselves in a new way through the representations of ourselves that we create to be consumed….(More)”.

Social activism: Engaging millennials in social causes


Michelle I. Seelig at First Monday: “Given that young adults consume and interact with digital technologies not only on a daily basis, but extensively throughout the day, it stands to reason they are more actively involved in advocating social change, particularly through social media. However, national surveys of civic engagement indicate civic and community engagement drops off after high school and while millennials attend college. While past research has compiled evidence about young adults’ social media use and some social media behaviors, limited literature has investigated the audience’s perspective of social activism campaigns through social media.

Research also has focused on the adoption of new technologies based on causal linkages between perceived ease of use and perceived usefulness, yet few studies have considered how these dynamics relate to millennials’ engagement with others using social media for social good. This project builds on past research to investigate the relationship between millennials’ online exposure to information about social causes and motives to take part in virtual and face-to-face engagement.

Findings suggest that while digital media environments immerse participants in mediated experiences that merge the off-line and online worlds, and strongly influence a person’s motivation to act, it remains unclear to what extent social media and social interactions shape millennials’ willingness to engage both online and in person. Even so, the results of this study indicate millennials are open to using social media for social causes, and perhaps to increasing engagement off-line too….(More)”.

Reclaiming Civic Spaces


Special edition of Sur International Journal on Human Rights on crackdowns on civil society around the world: “As shown by both the geographic reach of the contributions (authors from 16 countries) and the infographics to this edition, the issue is clearly of global concern. The first section of the journal seeks to address why this crackdown is happening, who is driving it and whether there is cross-fertilisation of ideas between actors.

The edition then focuses on the strategies that activists are implementing to combat the crackdown. A summary of these strategies can be seen in a video which captures a number of the author activists’ perspectives, shared when they gathered in São Paulo in October 2017 for a writers’ retreat….

The role of new media and online spaces in combatting the crackdown is prevalent in the contributions. The ease and speed with which information can be passed on platforms such as Facebook, Twitter, WhatsApp and Telegram were cited as being important in mobilising support rapidly as well as helping reach previously untapped constituents (Sara Alsherif, Zoya Rehman, Raull Santiago, Victoria Ohaeri, Valerie Msoka, and Denise Dora, Ravindran Daniel and Barbara Klugman). Despite the opportunities, Bondita Acharya, Helen Kezie-Nwoha, Sondos Shabayek, Shalini Eddens and Susan Jessop, Sara Alsherif and Zoya Rehman all note the challenges that digital tools present. Harassment of activists online is becoming increasingly common, particularly towards women. In addition, authorities are constantly developing new ways of monitoring these platforms. To combat this, Sara Alsherif describes how developing relationships with tech companies can help activists stay one step ahead of the curve.

The use of video is explored by Hagai El-Ad and Raull Santiago, both of whom describe how the medium is an important tool in capturing the restrictions being inflicted on civil society in their respective contexts. Moreover, Raull Santiago describes how his collective is trying to use these video images, captured by members of his community, in legal processes against the police force….(More)”.

The Follower Factory


Nicholas Confessore, Gabriel J.X. Dance, Richard Harris and Mark Hansen in The New York Times: “…Fake accounts, deployed by governments, criminals and entrepreneurs, now infest social media networks. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15 percent — are automated accounts designed to simulate real people, though the company claims that number is far lower.

In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform. These fake accounts, known as bots, can help sway advertising audiences and reshape political debates. They can defraud businesses and ruin reputations. Yet their creation and sale fall into a legal gray zone.

“The continued viability of fraudulent accounts and interactions on social media platforms — and the professionalization of these fraudulent services — is an indication that there’s still much work to do,” said Senator Mark Warner, the Virginia Democrat and ranking member of the Senate Intelligence Committee, which has been investigating the spread of fake accounts on Facebook, Twitter and other platforms.

Despite rising criticism of social media companies and growing scrutiny by elected officials, the trade in fake followers has remained largely opaque. While Twitter and other platforms prohibit buying followers, Devumi and dozens of other sites openly sell them. And social media companies, whose market value is closely tied to the number of people using their services, make their own rules about detecting and eliminating fake accounts.

Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users. “The allegations are false, and we do not have knowledge of any such activity,” Mr. Calas said in an email exchange in November.

The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models. In most cases, the records show, they purchased their own followers. In others, their employees, agents, public relations companies, family members or friends did the buying. For just pennies each — sometimes even less — Devumi offers Twitter followers, views on YouTube, plays on SoundCloud, the music-hosting site, and endorsements on LinkedIn, the professional-networking site….(More)”.

Studying Migrant Assimilation Through Facebook Interests


Antoine Dubois, Emilio Zagheni, Kiran Garimella, and Ingmar Weber at arXiv: “Migrants’ assimilation is a major challenge for European societies, in part because of the sudden surge of refugees in recent years and in part because of long-term demographic trends. In this paper, we use Facebook’s data for advertisers to study the levels of assimilation of Arabic-speaking migrants in Germany, as seen through the interests they express online. Our results indicate a gradient of assimilation along demographic lines, language spoken and country of origin. Given the difficulty to collect timely migration data, in particular for traits related to cultural assimilation, the methods that we develop and the results that we provide open new lines of research that computational social scientists are well-positioned to address….(More)”.

Is Social Media Good or Bad for Democracy?


Essay by Cass R. Sunstein, as part of a series by Facebook on social media and democracy: “On balance, the question of whether social media platforms are good for democracy is easy. On balance, they are not merely good; they are terrific. For people to govern themselves, they need to have information. They also need to be able to convey it to others. Social media platforms make that tons easier.

There is a subtler point as well. When democracies are functioning properly, people’s sufferings and challenges are not entirely private matters. Social media platforms help us alert one another to a million and one different problems. In the process, the existence of social media can prod citizens to seek solutions.

Consider the remarkable finding, by the economist Amartya Sen, that in the history of the world, there has never been a famine in a system with a democratic press and free elections. A central reason is that famines are a product not only of a scarcity of food, but also a nation’s failure to provide solutions. When the press is free, and when leaders are elected, leaders have a strong incentive to help.

Mental illness, chronic pain, loss of employment, vulnerability to crime, drugs in the family – information about all these spread via social media, and they can be reduced with sensible policies. When people can talk to each other, and disclose what they know to public officials, the whole world might change in a hurry.

But celebrations can be awfully boring, so let’s hold the applause. Are automobiles good for transportation? Absolutely, but in the United States alone, over 35,000 people died in crashes in 2016.

Social media platforms are terrific for democracy in many ways, but pretty bad in others. And they remain a work-in-progress, not only because of new entrants, but also because the not-so-new ones (including Facebook) continue to evolve. What John Dewey said about my beloved country is true for social media as well: “The United States are not yet made; they are not a finished fact to be categorically assessed.”

For social media and democracy, the equivalents of car crashes include false reports (“fake news”) and the proliferation of information cocoons — and as a result, an increase in fragmentation, polarization and extremism. If you live in an information cocoon, you will believe many things that are false, and you will fail to learn countless things that are true. That’s awful for democracy. And as we have seen, those with specific interests — including politicians and nations, such as Russia, seeking to disrupt democratic processes — can use social media to promote those interests.

This problem is linked to the phenomenon of group polarization — which takes hold when like-minded people talk to one another and end up thinking a more extreme version of what they thought before they started to talk. In fact that’s a common outcome. At best, it’s a problem. At worst, it’s dangerous….(More)”.