How local governments are scaring tech companies


Ben Brody at Protocol: “Congress has failed to regulate tech, so states and cities are stepping in with their own approaches to food delivery apps, AI regulation and, yes, privacy. Tech doesn’t like what it sees….

Andrew Rigie said it isn’t worth waiting around for tech regulation in Washington.

“New York City is a restaurant capital of the world,” Rigie told Protocol. “We need to lead on these issues.”

Rigie, executive director of the New York City Hospitality Alliance, has pushed for New York City’s new laws on food delivery apps such as Uber Eats. His group supported measures to make permanent a cap on the service fees the apps charge to restaurants, to ban the apps from listing eateries without permission, and to require them to share customer information with restaurants that ask for it.

While Rigie’s official purview is dining in the Big Apple, his belief that local governments should lead on regulating tech companies where Washington hasn’t has become increasingly common.

“It wouldn’t be a surprise if lawmakers elsewhere seek to implement similar policies,” Rigie said. “Some of it could potentially come from the federal government, but New York City can’t wait for the federal government to maybe act.”

New York is not the only city to take action. While the Federal Trade Commission has faced calls to regulate third-party food delivery apps at a national level, San Francisco was the first to pass a permanent fee cap for them, in June.

Food apps are just a microcosm highlighting the patchworks of local-level regulation that are developing, or are already a fact of life, for tech. These regulatory patchworks occur when state and local governments move ahead of Congress to pass their own, often divergent, laws and rules. So far, states and municipalities are racing ahead of the feds on issues such as cybersecurity, municipal broadband, content moderation, gig work, the use of facial recognition, digital taxes, mobile app store fees and consumer rights to repair their own devices, among others.

Many in tech became familiar with the idea when the California Consumer Privacy Act passed in 2018, making it clear more states would follow suit. But the possibility has popped up throughout modern tech policy history, on issues such as privacy requirements on ISPs, net neutrality and even cybersecurity breach notification.

Many patchworks reflect the stance of advocates, consumers and legislators that Washington has simply failed to do its job on tech. The resulting uncompromising or inconsistent approaches by local governments also have tech companies worried enough to push Congress to overrule states and establish one uniform U.S. standard.

“With a bit of a vacuum at the federal level, states are looking to step in, whether that’s on content moderation, whether that’s on speech on platforms, antitrust and anticompetitive conduct regulation, data privacy,” said April Doss, executive director of Georgetown University’s Institute for Technology Law and Policy. “It is the whole bundle of issues.”…(More)

The Future of Digital Surveillance


Book by Yong Jin Park: “Are humans hard-wired to make good decisions about managing their privacy in an increasingly public world? Or are we helpless victims of surveillance through our use of invasive digital media? Exploring the chasm between the tyranny of surveillance and the ideal of privacy, this book traces the origins of personal data collection in digital technologies including artificial intelligence (AI) embedded in social network sites, search engines, mobile apps, the web, and email. The Future of Digital Surveillance argues against a technologically deterministic view—digital technologies by nature do not cause surveillance. Instead, the shaping of surveillance technologies is embedded in a complex set of individual psychology, institutional behaviors, and policy principles….(More)”

Mathematicians are deploying algorithms to stop gerrymandering


Article by Siobhan Roberts: “The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
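Mattingly’s point is easy to verify with a toy calculation. The sketch below is purely illustrative (hypothetical precincts and invented vote counts, not data from the article), but it shows the same votes producing different seat totals under two different maps:

```python
# Toy illustration: identical precinct votes, two different district maps,
# two different outcomes. All precincts and vote counts are invented.

# Six hypothetical precincts of 100 voters each: (votes for A, votes for B).
PRECINCT_VOTES = {
    "p1": (15, 85), "p2": (20, 80),   # heavily party B
    "p3": (55, 45), "p4": (55, 45),   # narrowly party A
    "p5": (55, 45), "p6": (55, 45),
}

# Two ways of grouping the same six precincts into three districts.
MAP_1 = {"d1": ["p1", "p2"], "d2": ["p3", "p4"], "d3": ["p5", "p6"]}
MAP_2 = {"d1": ["p1", "p3"], "d2": ["p2", "p4"], "d3": ["p5", "p6"]}

def seats_for_a(district_map):
    """Count districts in which party A outpolls party B."""
    seats = 0
    for precincts in district_map.values():
        a = sum(PRECINCT_VOTES[p][0] for p in precincts)
        b = sum(PRECINCT_VOTES[p][1] for p in precincts)
        if a > b:
            seats += 1
    return seats

# Party A draws 42.5% of the overall vote under either map, yet:
print(seats_for_a(MAP_1))  # 2 of 3 seats for A (B voters packed into d1)
print(seats_for_a(MAP_2))  # 1 of 3 seats for A (A's narrow wins diluted)
```

Redistricting algorithms of the kind Mattingly’s group builds scale this logic up: they generate large ensembles of alternative maps and ask whether an enacted plan’s outcome is an extreme outlier among them.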

Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022….(More)”.

Privacy Tradeoffs: Who Should Make Them, and How?


Paper by Jane R. Bambauer: “Privacy debates are contentious in part because we have not reached a broadly recognized cultural consensus about whether interests in privacy are like most other interests that can be traded off in utilitarian, cost-benefit terms, or if instead privacy is different—fundamental to conceptions of dignity and personal liberty. Thus, at the heart of privacy debates is an unresolved question: is privacy just another interest that can and should be bartered, mined, and used in the economy, or is it different?

This question identifies and isolates a wedge between those who hold essentially utilitarian views of ethics (and who would see many data practices as acceptable) and those who hold views of natural and fundamental rights (for whom common data mining practices are either never acceptable or, at the very least, never acceptable without significant participation and consent of the subject).

This essay provides an intervention of a purely descriptive sort. First, I lay out several candidates for ethical guidelines that might legitimately undergird privacy law and policy. Only one of the ethical models (the natural right to sanctuary) can track the full scope and implications of fundamental rights-based privacy laws like the GDPR.

Second, the project contributes to the field of descriptive ethics by using a vignette experiment to discover which of the various ethical models people actually do seem to hold and abide by. The vignette study uses a factorial design to help isolate the roles of various factors that may contribute to the respondents’ gauge of what an ethical firm should or should not do in the context of personal data use as well as two other non-privacy-related contexts. The results can shed light on whether privacy-related ethics are different and distinct from business ethics more generally. They also illuminate which version(s) of “good” and “bad” share broad support and deserve to be reflected in privacy law or business practice.
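For readers unfamiliar with the method, a factorial vignette design crosses every level of every factor so that each factor’s effect can be isolated. A minimal sketch, with hypothetical factors standing in for the paper’s actual instrument:

```python
import itertools
import random

# Hypothetical vignette factors (invented for illustration; the paper's
# actual factors are not reproduced here). A factorial design crosses
# every level of every factor so each factor's effect can be isolated.
FACTORS = {
    "context": ["personal data use", "product safety", "wage practices"],
    "benefit": ["helps the customer", "helps the firm only"],
    "consent": ["with consent", "without consent"],
}

# Full crossing: 3 x 2 x 2 = 12 distinct vignette conditions.
CONDITIONS = list(itertools.product(*FACTORS.values()))

def assign_vignette():
    """Randomly assign a respondent to one cell of the design."""
    return dict(zip(FACTORS, random.choice(CONDITIONS)))

print(len(CONDITIONS))     # 12
print(assign_vignette())   # e.g. {'context': 'product safety', ...}
```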

The results of the vignette experiment show that on balance, Americans subscribe to some form of utilitarianism, although a substantial minority subscribe to a natural right to sanctuary approach. Thus, consent and prohibitions of data practices are appropriate where the likely risks to some groups (most importantly, data subjects, but also firms and third parties) outweigh the benefits….(More)”

The Myth of the Laboratories of Democracy


Paper by Charles Tyler and Heather Gerken: “A classic constitutional parable teaches that our federal system of government allows the American states to function as “laboratories of democracy.” This tale has been passed down from generation to generation, often to justify constitutional protections for state autonomy from the federal government. But scholars have failed to explain how state governments manage to overcome numerous impediments to experimentation, including resource scarcity, free-rider problems, and misaligned incentives.

This Article maintains that the laboratories account is missing a proper appreciation for the coordinated networks of third-party organizations (such as interest groups, activists, and funders) that often fuel policy innovation. These groups are the real laboratories of democracy today, as they perform the lion’s share of tasks necessary to enact new policies; they create incentives that motivate elected officials to support their preferred policies; and they mobilize the power of the federal government to change the landscape against which state experimentation occurs. If our federal system of government seeks to encourage policy experimentation, this insight has several implications for legal doctrine. At a high level of generality, courts should endeavor to create ground rules for regulating competition between political networks, rather than continuing futile efforts to protect state autonomy. The Article concludes by sketching the outlines of this approach in several areas of legal doctrine, including federal preemption of state law, conditional spending, and the anti-commandeering principle….(More)”

Philanthropy Can Help Communities Weed Out Inequity in Automated Decision Making Tools


Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants using housing vouchers.

The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.

Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.

How these tools are built, used, and governed will help shape the opportunities of everyday citizens, for good or ill.

Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn more about what needs to be done to weed out bias and inequities in automated decision-making tools — and recently produced a report about how to harness their potential to promote equity and social good.
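What “weeding out bias” looks like in practice can be as simple as checking whether a tool’s favorable outcomes are distributed evenly across groups. The sketch below is purely illustrative, using invented data and the four-fifths rule of thumb from U.S. disparate-impact review; it is not a method taken from the Casey Foundation report:

```python
# Minimal sketch of one bias check an independent audit might run:
# compare a tool's favorable-outcome rates across groups using the
# "four-fifths" rule of thumb. Decisions below are invented.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:  # below four-fifths: flag for human review
    print("flag: potential disparate impact in this tool's decisions")
```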

Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:

Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess their fairness….(More)”

The controversy over the term ‘citizen science’


CBC News: “The term citizen science has been around for decades. Its original definition, coined in the 1990s, refers to institution-guided projects that invite the public to contribute to scientific knowledge in all kinds of ways, from the cataloguing of plants, animals and insects in people’s backyards to watching space.

Anyone is invited to participate in citizen science, regardless of whether they have an academic background in the sciences, and every year these projects number in the thousands. 

Recently, however, some large institutions, scientists and community members have proposed replacing the term citizen science with “community science.” 

Those in favour of the terminology change — such as eBird, one of the world’s largest biodiversity databases — say they want to avoid using the word citizen. They do so because they want to be “welcoming to any birder or person who wants to learn more about bird watching, regardless of their citizen status,” said Lynn Fuller, an eBird spokesperson, in a news release earlier this year. 

Some argue that while the intention is valid, the term community science already holds another definition — namely, projects that gather different groups of people around environmental justice work focused on social action.

To add to the confusion, renaming citizen science could impact policies and legislation that have been established in countries such as the U.S. and Canada to support projects and efforts in favour of citizen science. 

For example, if we suddenly decided to call all species of birds “waterbirds,” then the specific meaning of this category of bird species that lives on or around water would eventually be lost. This would, in turn, make communication between people and the various fields of science incredibly difficult. 

A paper published in Science magazine last month pointed out some of the reasons why rebranding citizen science in the name of inclusion could backfire. 

Caren Cooper, a professor of forestry and environmental resources at North Carolina State University and one of the authors of the paper, said that the term citizen science didn’t originally mean to imply that people should have a certain citizenship status to participate in such projects. 

Rather, citizen science is meant to convey the idea of responsibilities and rights to access science. 

She said there are other terms being used to describe this meaning, including “public science, participatory science [and] civic science.”

Chris Hawn, a professor of geography and environmental systems at the University of Maryland Baltimore County and one of Cooper’s co-authors, said that being aware of the need for change is a good first step, but any decision to rename should be made carefully….(More)”.

Whose Streets? Our Streets!


Report by Rebecca Williams: “The extent to which “smart city” technology is altering our sense of freedom in public spaces deserves more attention if we want a democratic future. Democracy—the rule of the people—constitutes our collective self-determination and protects us against domination and abuse. Democracy requires safe spaces, or commons, for people to organically and spontaneously convene regardless of their background or position to campaign for their causes, discuss politics, and protest. These commons, where anyone can take a stand and be noticed, are where a notion of the collective good can be developed and communicated. Public spaces, like our streets, parks, and squares, have historically played a significant role in the development of democracy. We should fight to preserve the freedoms intrinsic to our public spaces because they make democracy possible.

Last summer, approximately 15 to 26 million people participated in Black Lives Matter protests after the murder of George Floyd, making it the largest mass movement in U.S. history. In June, the San Diego Police Department obtained footage of Black Lives Matter protesters from “smart streetlight” cameras, sparking shock and outrage from San Diego community members. These “smart streetlights” were promoted as part of citywide efforts to become a “smart city” to help with traffic control and air quality monitoring. Despite discoverable documentation about the streetlights’ capabilities and data policies on their website, including a data-sharing agreement covering how data would be shared with the police, the community had no expectation that the streetlights would be surveilling protesters. After media coverage and ongoing advocacy from the Transparent and Responsible Use of Surveillance Technology San Diego (TRUSTSD) coalition, the City Council set aside the funding for the streetlights until a surveillance technology ordinance was considered, and the Mayor ordered the 3,000+ streetlight cameras off. Due to the way power was supplied to the cameras, they remained on, but the city reported it no longer had access to the data they collected. In November, the City Council voted unanimously in favor of a surveillance ordinance and to establish a Privacy Advisory Board. In May, it was revealed that the San Diego Police Department had previously (in 2017) withheld materials from Congress’ House Committee on Oversight and Reform about its use of facial recognition technology. This story, with its mission creep and mishaps, is representative of a broader set of “smart city” cautionary trends that took place in the last year. These cautionary trends call us to question: if our public spaces become places where one fears punishment, how will that affect collective action and political movements?

This report is an urgent warning of where we are headed if we maintain our current trajectory of augmenting our public space with trackers of all kinds. In this report, I outline how current “smart city” technologies can watch you. I argue that all “smart city” technology trends toward corporate and state surveillance, and that if we don’t stop and blunt these trends now, totalitarianism, panopticonism, discrimination, privatization, and solutionism will challenge our democratic possibilities. This report examines these harms through cautionary trends supported by examples from this last year and provides 10 calls to action for advocates, legislatures, and technology companies to prevent these harms. If we act now, we can ensure that the technology in our public spaces protects and promotes democracy and that we do not continue down this path of an elite few tracking the many….(More)”

Off-Label: How tech platforms decide what counts as journalism


Essay by Emily Bell: “…But putting a stop to militarized fascist movements—and preventing another attack on a government building—will ultimately require more than content removal. Technology companies need to fundamentally recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility.

The extraordinary power of tech platforms to decide what material is worth seeing—under the loosest possible definition of who counts as a “journalist”—has always been a source of tension with news publishers. These companies have now been put in the position of being held accountable for developing an information ecosystem based in fact. It’s unclear how much they are prepared to do, if they will ever really invest in pro-truth mechanisms on a global scale. But it is clear that, after the Capitol riot, there’s no going back to the way things used to be.

Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet the result of all these changes has been hard to examine, since the data is both scarce and incomplete. Gordon Crovitz—a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility—has been frustrated by the lack of transparency: “In Google, YouTube, Facebook, and Twitter we have institutions that we know all give quality ratings to news sources in different ways,” he told me. “But if you are a news organization and you want to know how you are rated, you can ask them how these systems are constructed, and they won’t tell you.” Consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. “The issue comes down to a fundamental failure to understand the core concepts of journalism,” Crovitz said.

Still, researchers have managed to put together a general picture of how technology companies handle various news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “we know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets and make certain designations, which are then used to develop algorithms for personalized news recommendations and news products….(More)”
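Platforms do not publish these taxonomies, but the “dial up and dial down” behavior Grygiel describes is consistent with a quality multiplier applied to a relevance score. The sketch below is a guess at the general shape, with invented tiers and weights rather than any platform’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Story:
    outlet: str
    relevance: float   # personalized engagement prediction, 0..1
    tier: str          # internal source designation (invented labels)

# Invented tier weights standing in for an internal source taxonomy.
TIER_WEIGHT = {"authoritative": 1.5, "standard": 1.0, "borderline": 0.3}

def ranking_score(story: Story) -> float:
    """Quality-weighted score: turning a tier weight up or down changes
    an outlet's exposure without touching the relevance model."""
    return story.relevance * TIER_WEIGHT[story.tier]

feed = [
    Story("Daily Fact", 0.60, "authoritative"),
    Story("Viral Rumors", 0.90, "borderline"),
    Story("Metro Times", 0.70, "standard"),
]
for s in sorted(feed, key=ranking_score, reverse=True):
    print(f"{s.outlet}: {ranking_score(s):.2f}")
# Daily Fact: 0.90, Metro Times: 0.70, Viral Rumors: 0.27
```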

Power to the Public: The Promise of Public Interest Technology


Book by Tara Dawson McGuinness and Hana Schank: “As the speed and complexity of the world increases, governments and nonprofit organizations need new ways to effectively tackle the critical challenges of our time—from pandemics and global warming to social media warfare. In Power to the Public, Tara Dawson McGuinness and Hana Schank describe a revolutionary new approach—public interest technology—that has the potential to transform the way governments and nonprofits around the world solve problems. Through inspiring stories about successful projects ranging from a texting service for teenagers in crisis to a streamlined foster care system, the authors show how public interest technology can make the delivery of services to the public more effective and efficient.

At its heart, public interest technology means putting users at the center of the policymaking process, using data and metrics in a smart way, and running small experiments and pilot programs before scaling up. And while this approach may well involve the innovative use of digital technology, technology alone is no panacea—and some of the best solutions may even be decidedly low-tech.

Clear-eyed yet profoundly optimistic, Power to the Public presents a powerful blueprint for how government and nonprofits can help solve society’s most serious problems….(More)”