Federal Agencies Use Cellphone Location Data for Immigration Enforcement


Byron Tau and Michelle Hackman at the Wall Street Journal: “The Trump administration has bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

The location data is drawn from ordinary cellphone apps, including those for games, weather and e-commerce, for which the user has granted permission to log the phone’s location.

The Department of Homeland Security has used the information to detect undocumented immigrants and others who may be entering the U.S. unlawfully, according to these people and documents.

U.S. Immigration and Customs Enforcement, a division of DHS, has used the data to help identify immigrants who were later arrested, these people said. U.S. Customs and Border Protection, another agency under DHS, uses the information to look for cellphone activity in unusual places, such as remote stretches of desert that straddle the Mexican border, the people said.

The federal government’s use of such data for law enforcement purposes hasn’t previously been reported.

Experts say the information amounts to one of the largest known troves of bulk data being deployed by law enforcement in the U.S.—and that the use appears to be on firm legal footing because the government buys access to it from a commercial vendor, just as a private company could, though the practice hasn’t been tested in court.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” said Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws.

According to federal spending contracts, a division of DHS that creates experimental products began buying location data in 2017 from Venntel Inc. of Herndon, Va., a small company that shares several executives and patents with Gravy Analytics, a major player in the mobile-advertising world.

In 2018, ICE bought $190,000 worth of Venntel licenses. Last September, CBP bought $1.1 million in licenses for three kinds of software, including Venntel subscriptions for location data. 

The Department of Homeland Security and its components acknowledged buying access to the data, but wouldn’t discuss details about how they are using it in law-enforcement operations. People familiar with some of the efforts say it is used to generate investigative leads about possible illegal border crossings and for detecting or tracking migrant groups.

CBP has said it has privacy protections and limits on how it uses the location information. The agency says that it accesses only a small amount of the location data and that the data it does use is anonymized to protect the privacy of Americans….(More)”.

Housing Search in the Age of Big Data: Smarter Cities or the Same Old Blind Spots?


Paper by Geoff Boeing et al: “Housing scholars stress the importance of the information environment in shaping housing search behavior and outcomes. Rental listings have increasingly moved online over the past two decades and, in turn, online platforms like Craigslist are now central to the search process. Do these technology platforms serve as information equalizers or do they reflect traditional information inequalities that correlate with neighborhood sociodemographics? We synthesize and extend analyses of millions of US Craigslist rental listings and find they supply significantly different volumes, quality, and types of information in different communities.

Technology platforms have the potential to broaden, diversify, and equalize housing search information, but they rely on landlord behavior and, in turn, likely will not reach this potential without a significant redesign or policy intervention. Smart cities advocates hoping to build better cities through technology must critically interrogate technology platforms and big data for systematic biases….(More)”.

Whose Side are Ethics Codes On?


Paper by Anne L. Washington and Rachel S. Kuo: “The moral authority of ethics codes stems from an assumption that they serve a unified society, yet this ignores the political aspects of any shared resource. The sociologist Howard S. Becker challenged researchers to clarify their power and responsibility in his classic essay, “Whose Side Are We On?” Building on Becker’s hierarchy of credibility, we report on a critical discourse analysis of data ethics codes and emerging conceptualizations of beneficence, or the “social good”, of data technology. The analysis revealed that ethics codes from corporations and professional associations conflated consumers with society and were largely silent on agency. Interviews with community organizers about social change in the digital era supplement the analysis, surfacing the limits of technical solutions to concerns of marginalized communities. Given evidence that highlights the gulf between the documents and lived experiences, we argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations. Understanding contested digital resources is central to the emerging field of public interest technology. We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology and suggest recommendations for future ethics codes….(More)”.

Astroturfing Is Bad But It's Not the Whole Problem


Beth Noveck at NextGov: “In November 2019, Securities and Exchange Commission Chairman Jay Clayton boasted that draft regulations requiring proxy advisors to run their recommendations past the companies they are evaluating before giving that advice to their clients received dozens of letters of support from ordinary Americans. But the letters he cited turned out to be fakes, sent by corporate advocacy groups and signed with the names of people who never saw the comments or who do not exist at all.

When interest groups manufacture the appearance that comments come from the “ordinary public,” it’s known as astroturfing. The practice is the subject of today’s House Committee on Financial Services Subcommittee on Oversight and Investigations hearing, entitled “Fake It till They Make It: How Bad Actors Use Astroturfing to Manipulate Regulators, Disenfranchise Consumers, and Subvert the Rulemaking Process.” 

Of course, commissioners who cherry-pick public comments for information that proves them right should be called out, and it is tempting to use the occasion to embarrass those who do, especially when they are from the other party. But focusing on astroturfing distracts attention from the more salient and urgent problem: the failure to obtain the best possible evidence by creating effective public participation opportunities in federal rulemaking. 

Thousands of federal regulations are enacted every year that touch every aspect of our lives, and under the 1946 Administrative Procedure Act, the public has a right to participate.

Participation in rulemaking advances both the legitimacy and the quality of regulations by enabling agencies—and the congressional committees that oversee them—to obtain information from a wider audience of stakeholders, interest groups, businesses, nonprofits, academics and interested individuals. Participation also provides a check on the rulemaking process, helping to ensure public scrutiny.

But the shift over the last two decades to a digital process, where people submit comments via regulations.gov, has made commenting easier yet also inadvertently opened the floodgates to voluminous, duplicative and, yes, even “fake” comments, making it harder for agencies to extract the information needed to inform the rulemaking process.

Although many agencies receive only a handful of comments, some receive voluminous responses, thanks to this ease of digital commenting. In 2017, when the Federal Communications Commission sought to repeal an earlier Obama-era rule requiring internet service providers to observe net neutrality, the agency received 22 million comments in response. 

There is a remedy. Tools have evolved to make quick work of large data stores….(More)”. See also https://congress.crowd.law/
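The tooling Noveck alludes to can start very simply: much of a mass-comment docket is duplicated form-letter text, so normalizing and bucketing comments collapses the volume before human review. The sketch below is a hedged illustration, not any agency's actual pipeline; the sample comments are invented.

```python
from collections import defaultdict
import re

def normalize(comment: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivially edited copies of a form letter map to the same key."""
    no_punct = re.sub(r"[^\w\s]", "", comment.lower())
    return re.sub(r"\s+", " ", no_punct).strip()

def group_duplicates(comments):
    """Return groups of comment indices whose normalized text is identical."""
    groups = defaultdict(list)
    for i, comment in enumerate(comments):
        groups[normalize(comment)].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

comments = [
    "I OPPOSE the repeal of net neutrality!",
    "i oppose  the repeal of net neutrality",
    "Please keep the existing rules in place.",
]
print(group_duplicates(comments))  # → [[0, 1]]
```

Real dockets would need fuzzier matching (near-duplicate hashing or similarity scoring) to catch form letters with swapped-in sentences, but even this exact-match pass shrinks a 22-million-comment pile dramatically.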

An Internet for the People: The Politics and Promise of craigslist


Book by Jessa Lingel: “Begun by Craig Newmark as an e-mail to some friends about cool events happening around San Francisco, craigslist is now the leading classifieds service on the planet. It is also a throwback to the early internet. The website has barely seen an upgrade since it launched in 1996. There are no banner ads. The company doesn’t profit off your data. An Internet for the People explores how people use craigslist to buy and sell, find work, and find love—and reveals why craigslist is becoming a lonely outpost in an increasingly corporatized web.

Drawing on interviews with craigslist insiders and ordinary users, Jessa Lingel looks at the site’s history and values, showing how it has mostly stayed the same while the web around it has become more commercial and far less open. She examines craigslist’s legal history, describing the company’s courtroom battles over issues of freedom of expression and data privacy, and explains the importance of locality in the social relationships fostered by the site. More than an online garage sale, job board, or dating site, craigslist holds vital lessons for the rest of the web. It is a website that values user privacy over profits, ease of use over slick design, and an ethos of the early web that might just hold the key to a more open, transparent, and democratic internet….(More)”.

News as Surveillance


Paper by Erin Carroll: “As inhabitants of the Information Age, we are increasingly aware of the amount and kind of data that technology platforms collect on us. Far less publicized, however, is how much data news organizations collect on us as we read the news online and how they allow third parties to collect that personal data as well. A handful of studies by computer scientists reveal that, as a group, news websites are among the Internet’s worst offenders when it comes to tracking their visitors.

On the one hand, this surveillance is unsurprising. It is capitalism at work. The press’s business model has long been advertising-based. Yet, today this business model raises particular First Amendment concerns. The press, a named beneficiary of the First Amendment and a First Amendment institution, is gathering user reading history. This is a violation of what legal scholars call “intellectual privacy”—a right foundational to our First Amendment free speech rights.

And because of the perpetrator, this surveillance has the potential to cause far-reaching harms. Not only does it injure the individual reader or citizen, it injures society. News consumption helps each of us engage in the democratic process. It is, in fact, practically a prerequisite to our participation. Moreover, for an institution whose success is dependent on its readers’ trust, one that checks abuses of power, this surveillance seems like a special brand of betrayal.

Rather than an attack on journalists or journalism, this Essay is an attack on a particular press business model. It is also a call to grapple with it before the press faces greater public backlash. Originally given as the keynote for the Washburn Law Journal’s symposium, The Future of Cyber Speech, Media, and Privacy, this Essay argues for transforming and diversifying press business models and offers up other suggestions for minimizing the use of news as surveillance…(More)”.

10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade


Future of Privacy Forum: “Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent….(More)”.

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, which is for companies to recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but has some significant limitations….

Another benefit to crowdsourcing is the way it takes advantage of the exponential growth in human population over the past few centuries. Between 1650 and 1800, the global population doubled, reaching about 1 billion. It took another 127 years (until 1927) before it doubled again to reach 2 billion.

However, it took only 47 years for the population to double again and reach 4 billion (1974), and just 25 more for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.
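The shrinking doubling intervals imply accelerating growth. Assuming roughly exponential growth between the milestone dates quoted above, the implied annual rate is ln(2) divided by the doubling time; a quick arithmetic check:

```python
import math

def annual_growth_rate(doubling_years: float) -> float:
    """Annual growth rate implied by a doubling time, assuming
    exponential growth: rate = ln(2) / doubling time."""
    return math.log(2) / doubling_years

# Doubling intervals between the milestone dates in the passage
for start, end in [(1650, 1800), (1800, 1927), (1927, 1974)]:
    rate = annual_growth_rate(end - start)
    print(f"{start}-{end}: ~{rate:.2%} per year")
```

The implied rate roughly triples across the three intervals (from about 0.5% to about 1.5% per year), which is the acceleration the passage is describing.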

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista.

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

This is particularly troubling given that more people now use WhatsApp as a primary source for information. In Brazil, one in four internet users consult WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.

You Are Now Remotely Controlled


Essay by Shoshana Zuboff in The New York Times: “…Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking.

All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo. For example, when Delta Air Lines piloted a biometric data system at the Atlanta airport, the company reported that of nearly 25,000 customers who traveled there each week, 98 percent opted into the process, noting that “the facial recognition option is saving an average of two seconds for each customer at boarding, or nine minutes when boarding a wide body aircraft.”

In fact the rapid development of facial recognition systems reveals the public consequences of this supposedly private choice. Surveillance capitalists have demanded the right to take our faces wherever they appear — on a city street or a Facebook page. The Financial Times reported that a Microsoft facial recognition training database of 10 million images plucked from the internet without anyone’s knowledge and supposedly limited to academic research was employed by companies like IBM and state agencies that included the United States and Chinese military. Among these were two Chinese suppliers of equipment to officials in Xinjiang, where members of the Uighur community live in open-air prisons under perpetual surveillance by facial recognition systems.

Privacy is not private, because the effectiveness of these and other private or public surveillance and control systems depends upon the pieces of ourselves that we give up — or that are secretly stolen from us.

Our digital century was to have been democracy’s Golden Age. Instead, we enter its third decade marked by a stark new form of social inequality best understood as “epistemic inequality.” It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. The delusion of “privacy as private” was crafted to breed and feed this unanticipated social divide. Surveillance capitalists exploit the widening inequity of knowledge for the sake of profits. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice this bloodless coup from above….(More)”.