How Blockchain can benefit migration programmes and migrants


Solon Ardittis at the Migration Data Portal: “According to a recent report published by CB Insights, there are today at least 36 major industries that are likely to benefit from the use of Blockchain technology, ranging from voting procedures, critical infrastructure security, education and healthcare, to car leasing, forecasting, real estate, energy management, government and public records, wills and inheritance, corporate governance and crowdfunding.

In the international aid sector, a number of experiments are currently being conducted to distribute aid funding through the use of Blockchain and thus to improve the tracing of the ways in which aid is disbursed. Among several other examples, the Start Network, which consists of 42 aid agencies across five continents, ranging from large international organizations to national NGOs, has launched a Blockchain-based project that enables the organization both to speed up the distribution of aid funding and to facilitate the tracing of every single payment, from the original donor to each individual assisted.

As Katherine Purvis of The Guardian noted, “Blockchain enthusiasts are hopeful it could be the next big development disruptor. In providing a transparent, instantaneous and indisputable record of transactions, its potential to remove corruption and provide transparency and accountability is one area of intrigue.”

In the field of international migration and refugee affairs, however, Blockchain technology is still in its infancy.

One of the few notable examples is a project launched by the United Nations (UN) World Food Programme (WFP) in May 2017 in the Azraq Refugee Camp in Jordan. Using Blockchain technology, the project creates virtual accounts for refugees and uploads monthly entitlements that can be spent in the camp’s supermarket by means of an authorization code. Reportedly, the programme has cut by 98% the bank fees previously entailed by relying on a financial service provider.

This is a noteworthy achievement considering that organizations working in international relief can lose up to 3.5% of each aid transaction to various fees and costs and that an estimated 30% of all development funds do not reach their intended recipients because of third-party theft or mismanagement.
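The mechanism described above (virtual accounts, monthly entitlements, redemption against an authorization code) can be illustrated with a deliberately simplified, hypothetical sketch. The code below is not the WFP’s actual system; every name and field is an assumption made for illustration only.

```python
# Hypothetical, simplified sketch of a blockchain-style entitlement ledger.
# It is NOT the WFP's actual implementation; it only illustrates the idea of
# append-only, hash-chained records of credits and authorized redemptions.
import hashlib
import json
import time


class EntitlementLedger:
    def __init__(self):
        self.chain = []  # hash-chained blocks, oldest first

    def _append(self, record: dict) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(body)
        return body

    def credit(self, beneficiary_id: str, amount: float) -> dict:
        """Upload a monthly entitlement to a beneficiary's virtual account."""
        account = hashlib.sha256(beneficiary_id.encode()).hexdigest()
        return self._append({"type": "credit", "account": account, "amount": amount})

    def redeem(self, beneficiary_id: str, amount: float, auth_code: str) -> dict:
        """Record a supermarket purchase authorized by a one-time code."""
        account = hashlib.sha256(beneficiary_id.encode()).hexdigest()
        if self.balance(account) < amount:
            raise ValueError("insufficient entitlement balance")
        return self._append({"type": "redeem", "account": account,
                             "amount": amount, "auth_code": auth_code})

    def balance(self, account: str) -> float:
        """Sum credits minus redemptions for one (hashed) account."""
        total = 0.0
        for block in self.chain:
            r = block["record"]
            if r["account"] == account:
                total += r["amount"] if r["type"] == "credit" else -r["amount"]
        return total
```

In this toy model the hash chain is what makes every credit and purchase traceable after the fact, which is precisely the auditability property aid agencies are seeking.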

At least six other UN agencies, including the UN Office for Project Services (UNOPS), the UN Development Programme (UNDP), the UN Children’s Fund (UNICEF), UN Women, the UN High Commissioner for Refugees (UNHCR) and the UN Development Group (UNDG), are now considering Blockchain applications that could help support international assistance, particularly supply chain management tools, self-auditing of payments, identity management and data storage.

The potential of Blockchain technology in the field of migration and asylum affairs should therefore be fully explored.

At the European Union (EU) level, while a Blockchain task force has been established by the European Parliament to assess the ways in which the technology could be used to provide digital identities to refugees, and while the European Commission has recently launched a call for project proposals to examine the potential of Blockchain in a range of sectors, little focus has been placed so far on EU assistance in the field of migration and asylum, both within the EU and in third countries with which the EU has negotiated migration partnership agreements.

This is despite the fact that the use of Blockchain in a number of major programme interventions in the field of migration and asylum could help improve not only their cost-efficiency but also, at least as importantly, their degree of transparency and accountability. This comes at a time when media and civil society organizations are exercising increased scrutiny over the quality and ethical standards of such interventions.

In Europe, for example, Blockchain could help administer the EU Asylum, Migration and Integration Fund (AMIF), both in terms of transferring funds from the European Commission to the eligible NGOs in the Member States and in terms of project managers then reporting on spending. This would help alleviate many of the recurrent challenges faced by NGOs in managing funds in line with stringent EU regulations.

Just as crucially, Blockchain has the potential to increase transparency and accountability in the channeling and spending of EU funds in third countries, particularly under the Partnership Framework and other recent schemes to prevent irregular migration to Europe.

A case in point is the administration of EU aid in response to the refugee emergency in Greece where, reportedly, there continues to be insufficient oversight of the full range of commitments and outcomes of large EU-funded investments, particularly in the housing sector. Another example is the set of recent programme interventions in Libya, where a growing number of incidents of human rights abuses and financial mismanagement are being brought to light….(More)”.

Data journalism and the ethics of publishing Twitter data


Matthew L. Williams at Data Driven Journalism: “Collecting and publishing data from social media sites such as Twitter are everyday practices for the data journalist. Recent findings from Cardiff University’s Social Data Science Lab question the practice of publishing Twitter content without seeking some form of informed consent from users beforehand. Researchers found that tweets collected around certain topics, such as those related to terrorism, political votes, changes in the law and health problems, create datasets that might contain sensitive content, such as extreme political opinion, grossly offensive comments, overly personal revelations and threats to life (both to oneself and to others). Handling these data in the process of analysis (such as classifying content as hateful and potentially illegal) and reporting has brought the ethics of using social media in social research and journalism into sharp focus.

Ethics is an issue that is becoming increasingly salient in research and journalism using social media data. The digital revolution has outpaced parallel developments in research governance and agreed good practice. Codes of ethical conduct that were written in the mid twentieth century are being relied upon to guide the collection, analysis and representation of digital data in the twenty-first century. Social media is particularly ethically challenging because of the open availability of the data (particularly from Twitter). Many platforms’ terms of service specifically state that users’ public data will be made available to third parties, and by accepting these terms users legally consent to this. However, researchers and data journalists must interpret and engage with these commercially motivated terms of service through a more reflexive lens, which implies a context-sensitive approach, rather than focusing only on the legally permissible uses of these data.

Social media researchers and data journalists have experimented with data from a range of sources, including Facebook, YouTube, Flickr, Tumblr and Twitter to name a few. Twitter is by far the most studied of all these networks. This is because Twitter differs from other networks, such as Facebook, that are organised around groups of ‘friends’, in that it is more ‘open’ and the data (in part) are freely available to researchers. This makes Twitter a more public digital space that promotes the free exchange of opinions and ideas. Twitter has become the primary space for online citizens to publicly express their reaction to events of national significance, and also the primary source of data for social science research into digital publics.

The Twitter streaming API provides three levels of data access: the free random 1% sample, which yields roughly 5 million tweets daily, and the random 10% and 100% streams (chargeable, or free to academic researchers upon request). Datasets on social interactions of this scale, speed and ease of access were hitherto unrealisable in the social sciences and journalism, and they have led to a flood of journal articles and news pieces, many of which include tweets with full text content and author identity without informed consent. This is presumably because of Twitter’s ‘open’ nature, which invites the assumption that ‘these are public data’ and that using them does not require the rigour and scrutiny of ethical oversight. Even when these data are scrutinised, the ‘public data’ argument often goes unchallenged, due to the lack of a framework for evaluating the potential harms to users. The Social Data Science Lab takes a more ethically reflexive approach to the use of social media data in social research, and carefully considers users’ perceptions, online context and the role of algorithms in estimating potentially sensitive user characteristics.
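For illustration, a minimal collection script against the free ~1% sample stream might look like the sketch below, assuming the tweepy 3.x interface; the credentials are placeholders, and the streaming endpoints and access tiers have changed since this was written (tweepy 4.x removed StreamListener), so treat this as indicative only.

```python
# Minimal sketch of collecting the free ~1% random sample from the Twitter
# streaming API using tweepy's 3.x-style interface. Credentials are
# placeholders; access tiers and endpoints have changed over time.
import tweepy


class SampleListener(tweepy.StreamListener):
    def on_status(self, status):
        # Each status object is one tweet from the random sample stream.
        print(status.id_str, status.user.screen_name, status.text[:80])

    def on_error(self, status_code):
        # Returning False disconnects the stream (e.g. on rate limiting).
        return status_code != 420


auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

stream = tweepy.Stream(auth=auth, listener=SampleListener())
stream.sample()  # connects to the ~1% random sample endpoint
```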

A recent Lab survey into users’ perceptions of the use of their social media posts found the following:

  • 94% were aware that social media companies had Terms of Service
  • 65% had read the Terms of Service in whole or in part
  • 76% knew that when accepting Terms of Service they were giving permission for some of their information to be accessed by third parties
  • 80% agreed that if their social media information is used in a publication they would expect to be asked for consent
  • 90% agreed that if their tweets were used without their consent they should be anonymized…(More)”.

The Social Media Threat to Society and Security


George Soros at Project Syndicate: “It takes significant effort to assert and defend what John Stuart Mill called the freedom of mind. And there is a real chance that, once lost, those who grow up in the digital age – in which the power to command and shape people’s attention is increasingly concentrated in the hands of a few companies – will have difficulty regaining it.

The current moment in world history is a painful one. Open societies are in crisis, and various forms of dictatorships and mafia states, exemplified by Vladimir Putin’s Russia, are on the rise. In the United States, President Donald Trump would like to establish his own mafia-style state but cannot, because the Constitution, other institutions, and a vibrant civil society won’t allow it….

The rise and monopolistic behavior of the giant American Internet platform companies is contributing mightily to the US government’s impotence. These companies have often played an innovative and liberating role. But as Facebook and Google have grown ever more powerful, they have become obstacles to innovation, and have caused a variety of problems of which we are only now beginning to become aware…

Social media companies’ true customers are their advertisers. But a new business model is gradually emerging, based not only on advertising but also on selling products and services directly to users. They exploit the data they control, bundle the services they offer, and use discriminatory pricing to keep more of the benefits that they would otherwise have to share with consumers. This enhances their profitability even further, but the bundling of services and discriminatory pricing undermine the efficiency of the market economy.

Social media companies deceive their users by manipulating their attention, directing it toward their own commercial purposes, and deliberately engineering addiction to the services they provide. This can be very harmful, particularly for adolescents.

There is a similarity between Internet platforms and gambling companies. Casinos have developed techniques to hook customers to the point that they gamble away all of their money, even money they don’t have.

Something similar – and potentially irreversible – is happening to human attention in our digital age. This is not a matter of mere distraction or addiction; social media companies are actually inducing people to surrender their autonomy. And this power to shape people’s attention is increasingly concentrated in the hands of a few companies.

It takes significant effort to assert and defend what John Stuart Mill called the freedom of mind. Once lost, those who grow up in the digital age may have difficulty regaining it.

This would have far-reaching political consequences. People without the freedom of mind can be easily manipulated. This danger does not loom only in the future; it already played an important role in the 2016 US presidential election.

There is an even more alarming prospect on the horizon: an alliance between authoritarian states and large, data-rich IT monopolies, bringing together nascent systems of corporate surveillance with already-developed systems of state-sponsored surveillance. This may well result in a web of totalitarian control the likes of which not even George Orwell could have imagined….(More)”.

Free Speech in the Filter Age


Alexandra Borchardt at Project Syndicate: “In a democracy, the rights of the many cannot come at the expense of the rights of the few. In the age of algorithms, government must, more than ever, ensure the protection of vulnerable voices, even erring on the side of victims at times.

Germany’s Network Enforcement Act – under which social-media platforms like Facebook and YouTube can be fined up to €50 million ($63 million) if they fail to remove “obviously illegal” posts within 24 hours of receiving a notification – has been controversial from the start. After it entered fully into effect in January, there was a tremendous outcry, with critics from all over the political map arguing that it was an enticement to censorship. Government was relinquishing its powers to private interests, they protested.

So, is this the beginning of the end of free speech in Germany?

Of course not. To be sure, Germany’s Netzwerkdurchsetzungsgesetz (or NetzDG) is the strictest regulation of its kind in a Europe that is growing increasingly annoyed with America’s powerful social-media companies. And critics do have some valid points about the law’s weaknesses. But the possibilities for free expression will remain abundant, even if some posts are deleted mistakenly.

The truth is that the law sends an important message: democracies won’t stay silent while their citizens are exposed to hateful and violent speech and images – content that, as we know, can spur real-life hate and violence. Refusing to protect the public, especially the most vulnerable, from dangerous content in the name of “free speech” actually serves the interests of those who are already privileged, beginning with the powerful companies that drive the dissemination of information.

Speech has always been filtered. In democratic societies, everyone has the right to express themselves within the boundaries of the law, but no one has ever been guaranteed an audience. To have an impact, citizens have always needed to appeal to – or bypass – the “gatekeepers” who decide which causes and ideas are relevant and worth amplifying, whether through the media, political institutions, or protest.

The same is true today, except that the gatekeepers are the algorithms that automatically filter and rank all contributions. Of course, algorithms can be programmed any way companies like, meaning that they may place a premium on qualities shared by professional journalists: credibility, intelligence, and coherence.

But today’s social-media platforms are far more likely to prioritize potential for advertising revenue above all else. So the noisiest are often rewarded with a megaphone, while less polarizing, less privileged voices are drowned out, even if they are providing the smart and nuanced perspectives that can truly enrich public discussions….(More)”.
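A purely illustrative toy example of this point: the same two posts rank in opposite order depending on whether the scoring function rewards credibility or engagement. All fields and weights below are invented for the sketch and do not describe any real platform’s ranking.

```python
# Purely illustrative toy ranking: the same feed is ordered very differently
# depending on which qualities the scoring function rewards.
# All fields and weights are invented for this sketch.
posts = [
    {"text": "nuanced policy analysis", "credibility": 0.9, "engagement": 0.2},
    {"text": "outrage-bait hot take",   "credibility": 0.2, "engagement": 0.9},
]

def score(post, weights):
    # Weighted sum over whichever features the platform chooses to reward.
    return sum(weight * post[feature] for feature, weight in weights.items())

editorial_weights = {"credibility": 0.8, "engagement": 0.2}  # premium on quality
ad_driven_weights = {"credibility": 0.1, "engagement": 0.9}  # premium on attention

for name, weights in [("editorial", editorial_weights), ("ad-driven", ad_driven_weights)]:
    ranked = sorted(posts, key=lambda p: score(p, weights), reverse=True)
    print(name, "->", [p["text"] for p in ranked])
```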

Invisible Algorithms, Invisible Politics


Laura Forlano at Public Books: “Over the past several decades, politicians and business leaders, technology pundits and the mainstream media, engineers and computer scientists—as well as science fiction and Hollywood films—have repeated a troubling refrain, championing the shift away from the material and toward the virtual, the networked, the digital, the online. It is as if all of life could be reduced to 1s and 0s, rendering it computable….

Today, it is in design criteria and engineering specifications—such as “invisibility” and “seamlessness,” which aim to improve the human experience with technology—that ethical decisions are negotiated….

Take this example. In late July 2017, the City of Chicago agreed to settle a $38.75 million class-action lawsuit related to its red-light-camera program. Under the settlement, the city will repay drivers who were unfairly ticketed a portion of the cost of their ticket. Over the past five years, the program, ostensibly implemented to make Chicago’s intersections safer, has been mired in corruption, bribery, mismanagement, malfunction, and moral wrongdoing. This confluence of factors has resulted in a great deal of negative press about the project.

The red-light-camera program is just one of many examples of such technologies being adopted by cities in their quest to become “smart” and, at the same time, increase revenue. Others include ticketless parking, intelligent traffic management, ride-sharing platforms, wireless networks, sensor-embedded devices, surveillance cameras, predictive policing software, driverless car testbeds, and digital-fabrication facilities.

The company that produced the red-light cameras, Redflex, claims on its website that its technology can “reliably and consistently address negative driving behaviors and effectively enforce traffic laws on roadways and intersections with a history of crashes and incidents.” Nothing could be further from the truth. Instead, the cameras were unnecessarily installed at some intersections without a history of problems; they malfunctioned; they issued illegal tickets due to short yellow-lights that were not within federal limits; and they issued tickets after enforcement hours. And, due to existing structural inequalities, these difficulties were more likely to negatively impact poorer and less advantaged city residents.

The controversies surrounding red-light cameras in Chicago make visible the ways in which design criteria and engineering specifications—concepts including safety and efficiency, seamlessness and stickiness, convenience and security—are themselves ways of defining the ethics, values, and politics of our cities and citizens. To be sure, these qualities seem clean, comforting, and cuddly at first glance. They are difficult to argue against.

But, like wolves in sheep’s clothing, they gnash their political-economic teeth, and show their insatiable desire to further the goals of neoliberal capitalism. Rather than merely slick marketing, these mundane infrastructures (hardware, software, data, and services) negotiate ethical questions around what kinds of societies we aspire to, what kind of cities we want to live in, what kinds of citizens we can become, who will benefit from these tradeoffs, and who will be left out….(More)”.

Republics of Makers: From the Digital Commons to a Flat Marginal Cost Society


Mario Carpo at eFlux: “…as the costs of electronic computation have been steadily decreasing for the last forty years at least, many have recently come to the conclusion that, for most practical purposes, the cost of computation is asymptotically tending to zero. Indeed, the current notion of Big Data is based on the assumption that an almost unlimited amount of digital data will soon be available at almost no cost, and similar premises have further fueled the expectation of a forthcoming “zero marginal costs society”: a society where, except for some upfront and overhead costs (the costs of building and maintaining some facilities), many goods and services will be free for all. And indeed, against all odds, an almost zero marginal cost society is already a reality in the case of many services based on the production and delivery of electricity: from the recording, transmission, and processing of electrically encoded digital information (bits) to the production and consumption of electrical power itself. Using renewable energies (solar, wind, hydro), the generation of electrical power is free, except for the cost of building and maintaining installations and infrastructure. And given the recent progress in the micro-management of intelligent electrical grids, it is easy to imagine that in the near future the cost of servicing a network of very small, local hydro-electric generators, for example, could easily be devolved to local communities of prosumers who would take care of those installations as they tend to their living environment, on an almost voluntary, communal basis. This was already often the case during the early stages of electrification, before the rise of AC (alternating current, which, unlike DC, or direct current, could be carried over long distances): AC became the industry’s choice only after Galileo Ferraris’s and Nikola Tesla’s developments in AC technologies in the 1880s.

Likewise, at the micro-scale of the electronic production and processing of bits and bytes of information, the Open Source movement and the phenomenal surge of some crowdsourced digital media (including some so-called social media) in the first decade of the twenty-first century have already proven that a collaborative, zero-cost business model can effectively compete with products priced for profit on a traditional marketplace. As the success of Wikipedia, Linux, or Firefox proves, many are happy to volunteer their time and labor for free when all can profit from the collective work of an entire community without having to pay for it. This is now technically possible precisely because the fixed costs of building, maintaining, and delivering these services are very small; hence, from the point of view of the end-user, negligible.

Yet, regardless of the fixed costs of the infrastructure, content—even user-generated content—has costs, albeit for the time being these are mostly hidden, voluntarily borne, or inadvertently absorbed by the prosumers themselves. For example, the wisdom of Wikipedia is not really a wisdom of crowds: most Wikipedia entries are de facto curated by fairly traditional scholarly communities, and these communities can contribute their expertise for free only because their work has already been paid for by others—often by universities. In this sense, Wikipedia is only piggybacking on someone else’s research investments (but multiplying their outreach, which is one reason for its success). Ditto for most Open Source software, as training a software engineer, coder, or hacker takes time and money—an investment for future returns that in many countries around the world is still borne, at least in part, by public institutions….(More)”.

Crowdsourcing Judgments of News Source Quality


Paper by Gordon Pennycook and David G. Rand: “The spread of misinformation and disinformation, especially on social media, is a major societal challenge. Here, we assess whether crowdsourced ratings of trust in news sources can effectively differentiate between more and less reliable sources. To do so, we ran a preregistered experiment (N = 1,010 from Amazon Mechanical Turk) in which individuals rated familiarity with, and trust in, 60 news sources from three categories: 1) Mainstream media outlets, 2) Websites that produce hyper-partisan coverage of actual facts, and 3) Websites that produce blatantly false content (“fake news”).

Our results indicate that, despite substantial partisan bias, laypeople across the political spectrum rate mainstream media outlets as far more trustworthy than either hyper-partisan or fake news sources (all but 1 mainstream source, Salon, was rated as more trustworthy than every hyper-partisan or fake news source when equally weighting ratings of Democrats and Republicans).

Critically, however, excluding ratings from participants who are not familiar with a given news source dramatically reduces the difference between mainstream media sources and hyper-partisan or fake news sites. For example, 30% of the mainstream media websites (Salon, the Guardian, Fox News, Politico, Huffington Post, and Newsweek) received lower trust scores than the most trusted fake news site (news4ktla.com) when excluding unfamiliar ratings.

This suggests that rather than being initially agnostic about unfamiliar sources, people are initially skeptical – and thus a lack of familiarity is an important cue for untrustworthiness. Overall, our findings indicate that crowdsourcing media trustworthiness judgments is a promising approach for fighting misinformation and disinformation online, but that trustworthiness ratings from participants who are unfamiliar with a given source should not be ignored….(More)”.
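A hypothetical sketch of the two aggregation choices discussed in the abstract, politically balanced averaging and the optional exclusion of unfamiliar raters, is shown below on synthetic ratings; every figure is invented and the variable names are assumptions, not the paper’s code.

```python
# Hypothetical sketch: (1) weight Democrats and Republicans equally, and
# (2) optionally drop ratings from people unfamiliar with a source.
# All ratings below are synthetic and purely illustrative.
import statistics

# Each rating: (party, familiar_with_source, trust score on a 1-5 scale)
ratings = {
    "mainstream_outlet": [("D", True, 5), ("D", False, 4), ("R", True, 4), ("R", False, 3)],
    "fake_news_site":    [("D", True, 3), ("D", False, 1), ("R", True, 4), ("R", False, 1)],
}

def balanced_trust(rows, familiar_only=False):
    if familiar_only:
        rows = [r for r in rows if r[1]]  # keep only raters familiar with the source
    by_party = {}
    for party, _, score in rows:
        by_party.setdefault(party, []).append(score)
    # Equal weight per party, regardless of how many raters each party contributes.
    return statistics.mean(statistics.mean(scores) for scores in by_party.values())

for source, rows in ratings.items():
    print(source,
          "| all raters:", round(balanced_trust(rows), 2),
          "| familiar only:", round(balanced_trust(rows, familiar_only=True), 2))
```

On these made-up numbers the gap between the mainstream and the fake source narrows once unfamiliar (and therefore skeptical) raters are excluded, which mirrors the pattern the authors report.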

Citizens Coproduction, Service Self-Provision and the State 2.0


Chapter by Walter Castelnovo in Network, Smart and Open: “Citizens’ engagement and citizens’ participation are rapidly becoming catch-all concepts, buzzwords continuously recurring in public policy discourses, also due to the widespread diffusion and use of social media that are claimed to have the potential to increase citizens’ participation in public sector processes, including policy development and policy implementation.

By assuming the concept of co-production as the lens through which to look at citizens’ participation in civic life, the paper shows how, when supported by a real redistribution of power between government and citizens, citizens’ participation can have a transformational impact on the very nature of government, up to the so-called ‘Do It Yourself government’ and ‘user-generated state’. Based on a conceptual research approach and with reference to the relevant literature, the paper discusses what such a transformation could amount to and what role ICTs (social media) can play in government transformation processes….(More)”.

Feasibility Study of Using Crowdsourcing to Identify Critical Affected Areas for Rapid Damage Assessment: Hurricane Matthew Case Study


Paper by Faxi Yuan and Rui Liu at the International Journal of Disaster Risk Reduction: “…rapid damage assessment plays a critical role in crisis management. Collection of timely information for rapid damage assessment is particularly challenging during natural disasters. Remote sensing technologies have been used for data collection during disasters. However, due to the large areas affected by major disasters such as Hurricane Matthew, specific data, such as location information, cannot be collected in time.

Social media can serve as a crowdsourcing platform for citizens’ communication and information sharing during natural disasters and can provide timely data for identifying affected areas to support rapid damage assessment. Nevertheless, there is very limited existing research on the utility of social media data in damage assessment. Even though some investigation of the relationship between social media activity and damage has been conducted, the use of damage-related social media data to explore that relationship remains unexplored.

This paper, for the first time, establishes an index dictionary through semantic analysis for the identification of damage-related tweets posted during Hurricane Matthew in Florida. Meanwhile, insurance claim data published by the Florida Office of Insurance Regulation are used to represent real hurricane damage data in Florida. This study performs a correlation analysis and a comparative analysis of the geographic distribution of social media data and damage data at the county level in Florida. We find that employing social media data to identify critical affected areas at the county level during disasters is viable. Damage data have a closer relationship with damage-related tweets than with disaster-related tweets….(More)”.
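As a hypothetical sketch of the county-level correlation step, one might compute a Pearson coefficient between damage-related tweet counts and insurance claim counts per county; the county names and figures below are invented placeholders, not the study’s data.

```python
# Hypothetical sketch of a county-level correlation between damage-related
# tweet counts and insurance claim counts. All figures are invented
# placeholders, not the study's actual data.
from scipy.stats import pearsonr

counties              = ["Duval", "St. Johns", "Volusia", "Brevard", "Flagler"]
damage_related_tweets = [420, 130, 260, 310, 90]         # tweets matched by the index dictionary
insurance_claims      = [5100, 1600, 3300, 3900, 1200]   # claims reported per county

for county, tweets, claims in zip(counties, damage_related_tweets, insurance_claims):
    print(f"{county:10s} tweets={tweets:4d} claims={claims:5d}")

r, p_value = pearsonr(damage_related_tweets, insurance_claims)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```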

Algorithms of Oppression: How Search Engines Reinforce Racism


Book by Safiya Umoja Noble: “Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and unmoderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society.

In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.

Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.

An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century….(More)”.