Federal Agencies Use Cellphone Location Data for Immigration Enforcement


Byron Tau and Michelle Hackman at the Wall Street Journal: “The Trump administration has bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

The location data is drawn from ordinary cellphone apps, including those for games, weather and e-commerce, for which the user has granted permission to log the phone’s location.

The Department of Homeland Security has used the information to detect undocumented immigrants and others who may be entering the U.S. unlawfully, according to these people and documents.

U.S. Immigration and Customs Enforcement, a division of DHS, has used the data to help identify immigrants who were later arrested, these people said. U.S. Customs and Border Protection, another agency under DHS, uses the information to look for cellphone activity in unusual places, such as remote stretches of desert that straddle the Mexican border, the people said.

The federal government’s use of such data for law enforcement purposes hasn’t previously been reported.

Experts say the information amounts to one of the largest known troves of bulk data being deployed by law enforcement in the U.S.—and that the use appears to be on firm legal footing because the government buys access to it from a commercial vendor, just as a private company could, though its use hasn’t been tested in court.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” said Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws.

According to federal spending contracts, a division of DHS that creates experimental products began buying location data in 2017 from Venntel Inc. of Herndon, Va., a small company that shares several executives and patents with Gravy Analytics, a major player in the mobile-advertising world.

In 2018, ICE bought $190,000 worth of Venntel licenses. Last September, CBP bought $1.1 million in licenses for three kinds of software, including Venntel subscriptions for location data. 

The Department of Homeland Security and its components acknowledged buying access to the data, but wouldn’t discuss details about how they are using it in law-enforcement operations. People familiar with some of the efforts say it is used to generate investigative leads about possible illegal border crossings and for detecting or tracking migrant groups.

CBP has said it has privacy protections and limits on how it uses the location information. The agency says that it accesses only a small amount of the location data and that the data it does use is anonymized to protect the privacy of Americans….(More)”
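The article does not describe CBP's actual tooling, but the core idea it reports — flagging cellphone activity inside a normally quiet area, such as a remote stretch of desert — can be sketched in a few lines. Everything below (the function name, the watch zone, the ping data) is invented for illustration.

```python
# Illustrative sketch only: the article does not describe the actual software.
# Flags anonymized location pings that fall inside a watch zone where little
# or no cellphone activity would normally be expected.

def pings_in_zone(pings, zone):
    """Return pings inside a lat/lon bounding box.

    pings: iterable of (device_id, lat, lon) tuples
    zone:  (min_lat, max_lat, min_lon, max_lon)
    """
    min_lat, max_lat, min_lon, max_lon = zone
    return [
        p for p in pings
        if min_lat <= p[1] <= max_lat and min_lon <= p[2] <= max_lon
    ]

# Hypothetical data: a remote desert watch zone and three pings.
desert_zone = (31.90, 32.10, -112.60, -112.40)
pings = [
    ("a1", 32.00, -112.50),   # inside the zone -> flagged
    ("b2", 33.45, -112.07),   # urban area, outside the zone
    ("c3", 31.95, -112.45),   # inside the zone -> flagged
]
flagged = pings_in_zone(pings, desert_zone)
```

A real system would of course work at far larger scale and with richer signals (dwell time, movement direction, repeat visits); the sketch only captures the basic geofencing step.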

Enchanted Determinism: Power without Responsibility in Artificial Intelligence


Paper by Alexander Campolo and Kate Crawford: “Deep learning techniques are growing in popularity within the field of artificial intelligence (AI). These approaches identify patterns in large scale datasets, and make classifications and predictions, which have been celebrated as more accurate than those of humans. But for a number of reasons, including the nonlinear path from inputs to outputs, there is a dearth of theory that can explain why deep learning techniques work so well at pattern detection and prediction. Claims about “superhuman” accuracy and insight, paired with the inability to fully explain how these results are produced, form a discourse about AI that we call enchanted determinism. To analyze enchanted determinism, we situate it within a broader epistemological diagnosis of modernity: Max Weber’s theory of disenchantment. Deep learning occupies an ambiguous position in this framework. On one hand, it represents a complex form of technological calculation and prediction, phenomena Weber associated with disenchantment.

On the other hand, both deep learning experts and observers deploy enchanted, magical discourses to describe these systems’ uninterpretable mechanisms and counter-intuitive behavior. The combination of predictive accuracy and mysterious or unexplainable properties results in myth-making about deep learning’s transcendent, superhuman capacities, especially when it is applied in social settings. We analyze how discourses of magical deep learning produce techno-optimism, drawing on case studies from game-playing, adversarial examples, and attempts to infer sexual orientation from facial images. Enchantment shields the creators of these systems from accountability while its deterministic, calculative power intensifies social processes of classification and control….(More)”.

Housing Search in the Age of Big Data: Smarter Cities or the Same Old Blind Spots?


Paper by Geoff Boeing et al.: “Housing scholars stress the importance of the information environment in shaping housing search behavior and outcomes. Rental listings have increasingly moved online over the past two decades and, in turn, online platforms like Craigslist are now central to the search process. Do these technology platforms serve as information equalizers or do they reflect traditional information inequalities that correlate with neighborhood sociodemographics? We synthesize and extend analyses of millions of US Craigslist rental listings and find they supply significantly different volumes, quality, and types of information in different communities.

Technology platforms have the potential to broaden, diversify, and equalize housing search information, but they rely on landlord behavior and, in turn, likely will not reach this potential without a significant redesign or policy intervention. Smart cities advocates hoping to build better cities through technology must critically interrogate technology platforms and big data for systematic biases….(More)”.

Whose Side are Ethics Codes On?


Paper by Anne L. Washington and Rachel S. Kuo: “The moral authority of ethics codes stems from an assumption that they serve a unified society, yet this ignores the political aspects of any shared resource. The sociologist Howard S. Becker challenged researchers to clarify their power and responsibility in the classic essay “Whose Side Are We On?” Building on Becker’s hierarchy of credibility, we report on a critical discourse analysis of data ethics codes and emerging conceptualizations of beneficence, or the “social good,” of data technology. The analysis revealed that ethics codes from corporations and professional associations conflated consumers with society and were largely silent on agency. Interviews with community organizers about social change in the digital era supplement the analysis, surfacing the limits of technical solutions to concerns of marginalized communities. Given evidence that highlights the gulf between the documents and lived experiences, we argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations. Understanding contested digital resources is central to the emerging field of public interest technology. We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology and suggest recommendations for future ethics codes….(More)”.

International Humanitarian and Development Aid and Big Data Governance


Chapter by Andrej Zwitter: “Modern technology and innovations constantly transform the world. This also applies to humanitarian action and development aid, for example: humanitarian drones, crowdsourcing of information, or the utility of Big Data in crisis analytics and humanitarian intelligence. The acceleration of modernization in these adjacent fields can in part be attributed to new partnerships between aid agencies and private stakeholders that are increasingly active in them, such as individual crisis mappers, mobile telecommunication companies, or technological SMEs.

These partnerships, however, must be described as simultaneously beneficial and problematic. Many private actors do not subscribe to the humanitarian principles (humanity, impartiality, independence, and neutrality), which govern UN and NGO operations, or are not even aware of them. Their interests are not solely humanitarian, but may include entrepreneurial agendas. The unregulated use of data in humanitarian intelligence has already caused negative consequences such as the exposure of sensitive data about aid agencies and victims of disasters.

This chapter investigates the emergent governance trends around data innovation in the humanitarian and development field. It takes a look at the ways in which the field tries to regulate itself and the utility of the humanitarian principles for Big Data analytics and data-driven innovation. It will argue that it is crucial to formulate principles for data governance in the humanitarian context in order to ensure the safeguarding of beneficiaries who are particularly vulnerable. In order to do that, the chapter proposes to reinterpret the humanitarian principles to accommodate the new reality of datafication of different aspects of society…(More)”.

Assessing the Returns on Investment in Data Openness and Transparency


Paper by Megumi Kubota and Albert Zeufack: “This paper investigates the potential benefits for a country from investing in data transparency. The paper shows that increased data transparency can bring substantial returns in the form of lower external borrowing costs.

This result is obtained by estimating the impact of public data transparency on sovereign spreads conditional on the country’s level of institutional quality and public and external debt. While improving data transparency alone reduces the external borrowing costs for a country, the return is much higher when combined with stronger institutional quality and lower public and external debt. Similarly, the returns on investing in data transparency are higher when a country’s integration into the global economy deepens, as captured by trade and financial openness.

Estimation of an instrumental variable regression shows that Sub-Saharan African countries could have saved up to 14.5 basis points in sovereign bond spreads and decreased their external debt burden by US$405.4 million (0.02 percent of gross domestic product) in 2018, if their average level of data transparency had been that of a country in the top quartile of the upper-middle-income country category. At the country level, Angola could have reduced its external debt burden by around US$73.6 million….(More)”.
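The key feature of the paper's specification — that the payoff to transparency is conditional on institutional quality — is the kind of interaction-term regression that can be sketched in a few lines. The data, coefficients, and variable names below are entirely synthetic; this is a minimal illustration of the modeling idea, not the paper's actual model (which also uses instrumental variables and debt controls).

```python
# Illustrative sketch, not the paper's model or data: sovereign spreads
# regressed on data transparency, institutional quality, and their
# interaction, so the "return" on transparency depends on institutions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
transparency = rng.uniform(0, 1, n)
institutions = rng.uniform(0, 1, n)
# Synthetic spreads (basis points): transparency lowers spreads, more so
# when institutional quality is high; noise stands in for everything else.
spread = (500 - 80 * transparency - 60 * institutions
          - 100 * transparency * institutions + rng.normal(0, 10, n))

# Ordinary least squares with an interaction term.
X = np.column_stack([
    np.ones(n), transparency, institutions, transparency * institutions,
])
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)

def transparency_effect(inst_quality):
    """Marginal effect of transparency on spreads at a given level
    of institutional quality (more negative = larger savings)."""
    return beta[1] + beta[3] * inst_quality
```

In this toy setup, `transparency_effect` is more negative at high institutional quality, mirroring the paper's finding that transparency and strong institutions are complements.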

The New City Regulators: Platform and Public Values in Smart and Sharing Cities


Paper by Sofia Ranchordás and Catalina Goanta: “Cities are increasingly influenced by novel and cosmopolitan values advanced by transnational technology providers and digital platforms. These values, which are often visible in the advancement of the sharing economy and smart cities, may differ from the traditional public values protected by national and local laws and policies. This article contrasts the public values created by digital platforms in cities with the democratic and social national values that the platform society is leaving behind.

It innovates by showing how co-regulation can balance public values with platform values. In this article, we argue that despite the value-creation benefits produced by the digital platforms under analysis, public authorities should be aware of the risks of technocratic discourses and potential conflicts between platform and local values. In this context, we suggest a normative framework which emphasizes the need for a new kind of knowledge-service creation in the form of local public-interest technology. Moreover, our framework proposes a negotiated contractual system that seeks to balance platform values with public values in an attempt to address the digital enforcement problem driven by the functional sovereignty role of platforms….(More)”.

News as Surveillance


Paper by Erin Carroll: “As inhabitants of the Information Age, we are increasingly aware of the amount and kind of data that technology platforms collect on us. Far less publicized, however, is how much data news organizations collect on us as we read the news online and how they allow third parties to collect that personal data as well. A handful of studies by computer scientists reveal that, as a group, news websites are among the Internet’s worst offenders when it comes to tracking their visitors.

On the one hand, this surveillance is unsurprising. It is capitalism at work. The press’s business model has long been advertising-based. Yet, today this business model raises particular First Amendment concerns. The press, a named beneficiary of the First Amendment and a First Amendment institution, is gathering user reading history. This is a violation of what legal scholars call “intellectual privacy”—a right foundational to our First Amendment free speech rights.

And because of the perpetrator, this surveillance has the potential to cause far-reaching harms. Not only does it injure the individual reader or citizen, it injures society. News consumption helps each of us engage in the democratic process. It is, in fact, practically a prerequisite to our participation. Moreover, for an institution whose success is dependent on its readers’ trust, one that checks abuses of power, this surveillance seems like a special brand of betrayal.

Rather than an attack on journalists or journalism, this Essay is an attack on a particular press business model. It is also a call to grapple with it before the press faces greater public backlash. Originally given as the keynote for the Washburn Law Journal’s symposium, The Future of Cyber Speech, Media, and Privacy, this Essay argues for transforming and diversifying press business models and offers up other suggestions for minimizing the use of news as surveillance…(More)”.

Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI


Paper by Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar: “The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these “AI principles,” there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.

To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.

Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus…(More)”.
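The side-by-side comparison the white paper describes — which themes each principles document covers, and which themes approach consensus — can be sketched with simple set operations. The document names and coverage assignments below are invented for illustration; only the eight theme labels come from the excerpt above.

```python
# Toy sketch of the side-by-side comparison: which of the eight themes each
# AI principles document covers, and which themes every document shares.
# Document names and their theme coverage are invented for illustration.
THEMES = {
    "privacy", "accountability", "safety and security",
    "transparency and explainability", "fairness and non-discrimination",
    "human control of technology", "professional responsibility",
    "promotion of human values",
}

documents = {
    "Doc A": {"privacy", "accountability", "transparency and explainability"},
    "Doc B": {"privacy", "accountability", "fairness and non-discrimination"},
    "Doc C": {"privacy", "accountability", "safety and security"},
}

# Count how many documents mention each theme.
coverage = {t: sum(t in d for d in documents.values()) for t in THEMES}

# Themes present in every document form the (toy) consensus core.
consensus = set.intersection(*documents.values())
```

The actual study works from thirty-six documents and forty-seven individual principles, with human coding of each document's text; the sketch only shows the tallying step once coding is done.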

Why It’s So Hard for Users to Control Their Data


Bhaskar Chakravorti at the Harvard Business Review: “A recent IBM study found that 81% of consumers say they have become more concerned about how their data is used online. But most users continue to hand over their data online and tick consent boxes impatiently, giving rise to a “privacy paradox,” where users’ concerns aren’t reflected in their behaviors. It’s a daunting challenge for regulators and companies alike to navigate the future of data governance.

In my view, we’re missing a system that defines and grants users “digital agency” — the ability to own the rights to their personal data, manage access to this data and, potentially, be compensated fairly for such access. This would make data similar to other forms of personal property: a home, a bank account or even a mobile phone number. But before we can imagine such a state, we need to examine three central questions: Why don’t users care enough to take actions that match their concerns? What are the possible solutions? Why is this so difficult?

Why don’t users’ actions match their concerns?

To start, data is intangible. We don’t actively hand it over. As a byproduct of our online activity, it is easy to ignore or forget about. A lot of data harvesting is invisible to consumers — they see the results in marketing offers, free services, customized feeds, tailored ads, and beyond.

Second, even if users wanted to negotiate more data agency, they have little leverage. Normally, in well-functioning markets, customers can choose from a range of competing providers. But this is not the case if the service is a widely used digital platform. For many, leaving a platform like Facebook feels like it would come at a high cost in terms of time and effort and that they have no other option for an equivalent service with connections to the same people. Plus, many people use their Facebook logins on numerous apps and services. On top of that, Facebook has bought up many of its natural alternatives, like Instagram. It’s equally hard to switch away from other major platforms, like Google or Amazon, without a lot of personal effort.

Third, while a majority of American users believe more regulation is needed, they are not as enthusiastic about broad regulatory solutions being imposed. Instead, they would prefer to have better data management tools at their disposal. However, managing one’s own data would be complex – and that would deter users from embracing such an option….(More)”.