Control Creep: When the Data Always Travels, So Do the Harms


Essay by Sun-ha Hong: “In 2014, a Canadian firm made history. Calgary-based McLeod Law brought the first known case in which Fitbit data would be used to support a legal claim. The device’s loyalty was clear: the young woman’s personal injury claim would be supported by her own Fitbit data, which would help prove that her activity levels had dipped post-injury. Yet the case had opened up a wider horizon for data use, both for and against the owners of such devices. Leading artificial intelligence (AI) researcher Kate Crawford noted at the time that the machines we use for “self-tracking” may be opening up a “new age of quantified self incrimination.”

Subsequent cases have demonstrated some of those possibilities. In 2015, a Connecticut man reported that his wife had been murdered by a masked intruder. Based partly on the victim’s Fitbit data, along with data from other devices such as the family’s house alarm, detectives charged the man — not a masked intruder — with the crime. In 2016, a Pennsylvania woman claimed she was sexually assaulted, but police argued that the woman’s own Fitbit data suggested otherwise, and charged her with false reporting. In the courts and elsewhere, data initially gathered for self-tracking is increasingly being used to contradict or overrule the self — despite academic research and even a class action lawsuit alleging high rates of error in Fitbit data.

The data always travels, creating new possibilities for judging and predicting human lives. We might call it control creep: data-driven technologies tend to be pitched for a particular context and purpose, but quickly expand into new forms of control. Although we often think about data use in terms of trade-offs or bargains, such frameworks can be deeply misleading. What does it mean to “trade” personal data for the convenience of, say, an Amazon Echo, when the other side of that trade is constantly arranging new ways to sell and use that data in ways we cannot anticipate? As technology scholars Jake Goldenfein, Ben Green and Salomé Viljoen argue, the familiar trade-off of “privacy vs. X” rarely results in full respect for both values but instead tends to normalize a further stripping of privacy….(More)”.

Data Brokers Are a Threat to Democracy


Justin Sherman at Wired: “Enter the data brokerage industry, the multibillion-dollar economy of selling consumers’ and citizens’ intimate details. Much of the privacy discourse has rightly pointed fingers at Facebook, Twitter, YouTube, and TikTok, which collect users’ information directly. But a far broader ecosystem of buying up, licensing, selling, and sharing data exists around those platforms. Data brokerage firms are middlemen of surveillance capitalism—purchasing, aggregating, and repackaging data from a variety of other companies, all with the aim of selling or further distributing it.

Data brokerage is a threat to democracy. Without robust national privacy safeguards, entire databases of citizen information are ready for purchase, whether to predatory loan companies, law enforcement agencies, or even malicious foreign actors. Federal privacy bills that don’t give sufficient attention to data brokerage will therefore fail to tackle an enormous portion of the data surveillance economy, and will leave civil rights, national security, and public-private boundaries vulnerable in the process.

Large data brokers—like Acxiom, CoreLogic, and Epsilon—tout the detail of their data on millions or even billions of people. CoreLogic, for instance, advertises its real estate and property information on 99.9 percent of the US population. Acxiom promotes 11,000-plus “data attributes,” from auto loan information to travel preferences, on 2.5 billion people (all to help brands connect with people “ethically,” it adds). This level of data collection and aggregation enables remarkably specific profiling.

Need to run ads targeting poor families in rural areas? Check out one data broker’s “Rural and Barely Making It” data set. Or how about racially profiling financial vulnerability? Buy another company’s “Ethnic Second-City Strugglers” data set. These are just some of the disturbing titles captured in a 2013 Senate report on the industry’s data products, which have only expanded since. Many other brokers advertise their ability to identify subgroups upon subgroups of individuals through criteria like race, gender, marital status, and income level, all sensitive characteristics that citizens likely didn’t know would end up in a database—let alone up for sale….(More)”.

A Resurgence of Democracy in 2040?


Blog by Steven Aftergood: “The world will be “increasingly out of balance and contested at every level” over the next twenty years due to the pressures of demographic, environmental, economic and technological change, said Global Trends 2040, a new forecast from the National Intelligence Council released last week.

But among the mostly grim possible futures that can be plausibly anticipated — international chaos, political paralysis, resource depletion, mounting poverty — one optimistic scenario stands out: “In 2040, the world is in the midst of a resurgence of open democracies led by the United States and its allies.”

How could such a global renaissance of democracy possibly come about?

The report posits that between now and 2040 technological innovation in open societies will lead to economic growth, which will enable solutions to domestic problems, build public confidence, reduce vulnerabilities and establish an attractive model for emulation by others. Transparency is both a precondition and a consequence of this process.

“Open, democratic systems proved better able to foster scientific research and technological innovation, catalyzing an economic boom. Strong economic growth, in turn, enabled democracies to meet many domestic needs, address global challenges, and counter rivals,” the report assessed in this potential scenario.

“With greater resources and improving services, these democracies launched initiatives to crack down on corruption, increase transparency, and improve accountability worldwide, boosting public trust. These efforts helped to reverse years of social fragmentation and to restore a sense of civic nationalism.”

“The combination of rapid innovation, a stronger economy, and greater societal cohesion enabled steady progress on climate and other challenges. Democratic societies became more resilient to disinformation because of greater public awareness and education initiatives and new technologies that quickly identify and debunk erroneous information. This environment restored a culture of vigorous but civil debate over values, goals, and policies.”

“Strong differences in public preferences and beliefs remained but these were worked out democratically.”

In this hopeful future, openness provided practical advantages that left closed authoritarian societies lagging behind.

“In contrast to the culture of collaboration prevailing in open societies, Russia and China failed to cultivate the high-tech talent, investment, and environment necessary to sustain continuous innovation.”

“By the mid-2030s, the United States and its allies in Europe and Asia were the established global leaders in several technologies, including AI, robotics, the Internet of Things, biotech, energy storage, and additive manufacturing.”

The success of open societies in problem solving, along with their economic and social improvements, inspired other countries to adopt the democratic model.

“Technological success fostered a widely perceived view among emerging and developing countries that democracies were more adaptable and resilient and better able to cope with growing global challenges.”…(More)”.

The Case for Local Data Sharing Ordinances


Paper by Beatriz Botero Arcila: “Cities in the US have started to enact data-sharing rules and programs to access some of the data that technology companies operating under their jurisdiction – like short-term rental or ride hailing companies – collect. This information allows cities to adapt to the challenges and benefits of the digital information economy. It allows them to understand their impact on congestion, the housing market, the local job market and even the use of public spaces. It also empowers them to act accordingly by, for example, setting vehicle caps or mandating a tailored minimum pay for gig-workers. These companies, however, sometimes argue that sharing this information violates their users’ privacy rights and their own privacy rights, because this information is theirs; it is part of their business records. The question is thus what those rights are, and whether it should and could be possible for local governments to access that information to advance equity and sustainability, without harming the legitimate privacy interests of both individuals and companies. This Article argues that within current Fourth Amendment doctrine and privacy law there is space for data-sharing programs. Privacy law, however, is being mobilized to alter the distribution of power and welfare between local governments, companies, and citizens within current digital information capitalism, to extend those rights beyond their fair share and preempt permissible data-sharing requests. The Article warns that if the companies succeed in their challenges, privacy law will have helped shield corporate power from regulatory oversight, while still leaving individuals largely unprotected and further subordinating local governments to corporate interests….(More)”.

How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals


Paper by Eric Wu et al: “Medical artificial-intelligence (AI) algorithms are being increasingly proposed for the assessment and care of patients. Although the academic community has started to develop reporting guidelines for AI clinical trials, there are no established best practices for evaluating commercially available algorithms to ensure their reliability and safety. The path to safe and robust clinical AI requires that important regulatory questions be addressed. Are medical devices able to demonstrate performance that can be generalized to the entire intended population? Are commonly faced shortcomings of AI (overfitting to training data, vulnerability to data shifts, and bias against underrepresented patient subgroups) adequately quantified and addressed?

In the USA, the US Food and Drug Administration (FDA) is responsible for approving commercially marketed medical AI devices. The FDA releases publicly available information on approved devices in the form of a summary document that generally contains information about the device description, indications for use, and performance data of the device’s evaluation study. The FDA has recently called for improvement of test-data quality, improvement of trust and transparency with users, monitoring of algorithmic performance and bias on the intended population, and testing with clinicians in the loop. To understand the extent to which these concerns are addressed in practice, we have created an annotated database of FDA-approved medical AI devices and systematically analyzed how these devices were evaluated before approval. Additionally, we have conducted a case study of pneumothorax-triage devices and found that evaluating deep-learning models at a single site alone, which is often done, can mask weaknesses in the models and lead to worse performance across sites.

[Fig. 1 omitted: Breakdown of 130 FDA-approved medical AI devices by body area. Devices are categorized by risk level (square, high risk; circle, low risk); blue indicates that a multi-site evaluation was reported, otherwise symbols are gray; a red outline indicates a prospective study. Numbers in the key indicate the number of devices with each characteristic.]…(More)”.
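As an editorial aside, the single-site pitfall the authors describe is easy to see in code. The sketch below is a hypothetical illustration, not the authors’ pipeline: the model interface, site names, and example numbers are assumptions made for demonstration.

```python
# Hypothetical sketch of per-site model evaluation (illustrative only; the
# model interface, site names, and numbers are assumptions, not the paper's code).
from sklearn.metrics import roc_auc_score

def evaluate_by_site(model, datasets):
    """Return AUROC separately for each evaluation site.

    `datasets` maps a site name to a (features, labels) pair of arrays.
    """
    results = {}
    for site, (features, labels) in datasets.items():
        scores = model.predict_proba(features)[:, 1]  # predicted P(pneumothorax)
        results[site] = roc_auc_score(labels, scores)
    return results

# A model tuned on one hospital's data can look strong there yet degrade
# elsewhere. A single-site (or pooled) metric would hide a spread such as
# {"site_A": 0.94, "site_B": 0.81, "site_C": 0.78}, which per-site
# evaluation makes visible.
```

Reporting the per-site spread, rather than a single pooled number, is what surfaces the data-shift and subgroup-bias concerns the paper raises.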

Our Tomorrows- A Community Sensemaking Approach


OPSI Case Study: “The Kansas vision for the early childhood system is: All children will have their basic needs met and have equitable access to quality early childhood care and educational opportunities, so they are prepared to succeed in kindergarten and beyond. In 2019, the State of Kansas received a large federal grant (the Preschool Development Grant) to conduct a needs assessment and craft a strategic plan for the early childhood system where all children can thrive. The grant leadership team of state agencies utilized this opportunity to harness the power of Our Tomorrows’ innovative Community Sensemaking Approach to map families’ lived experiences and create policies and programming adaptive to families’ needs.

In this context, Our Tomorrows set out to achieve three goals:

1. Gather stories about thriving and surviving from families across Kansas utilizing a complexity-informed narrative research approach called SenseMaker.

2. Make sense of patterns that emerged from the stories through Community Sensemaking Workshops with stakeholders at various levels of the system.

3. Take action and enable bottom-up change through Community Action Labs.

From a complexity perspective, these goals translate to developing a ‘human sensor network,’ embedding citizen feedback loops and sensemaking processes into governance, and complexity-informed intervention via portfolios of safe-to-fail probes….(More)”.

Combining Racial Groups in Data Analysis Can Mask Important Differences in Communities


Blog by Jonathan Schwabish and Alice Feng: “Surveys, datasets, and published research often lump together racial and ethnic groups, which can erase the experiences of certain communities. Combining groups with different experiences can mask how specific groups and communities are faring and, in turn, affect how government funds are distributed, how services are provided, and how groups are perceived.

Large surveys that collect data on race and ethnicity are used to disburse government funds and services in a number of ways. The US Department of Housing and Urban Development, for instance, distributes millions of dollars annually to Native American tribes through the Indian Housing Block Grant. And statistics on race and ethnicity are used as evidence in employment discrimination lawsuits and to help determine whether banks are discriminating against people and communities of color.

Despite the potentially large effects these data can have, researchers don’t always disaggregate their analysis to more racial groups. Many point to small sample sizes as a limitation for including more race and ethnicity categories in their analysis, but efforts to gather more specific data and disaggregate available survey results are critical to creating better policy for everyone.

To illustrate how aggregating racial groups can mask important variation, we looked at the 2019 poverty rate across 139 detailed race categories in the Census Bureau’s annual American Community Survey (ACS). The ACS provides information that helps determine how more than $675 billion in government funds is distributed each year.

The official poverty rate in the United States stood at 10.5 percent in 2019, with significant variation across racial and ethnic groups. The primary question in the ACS concerning race includes 15 separate checkboxes, with space to print additional names or races for some options (a separate question refers to Hispanic or Latino origin).

[Screenshot omitted: the American Community Survey’s race question.]

Although the survey offers ample latitude for interviewees to respond with their race, researchers have a tendency to aggregate racial categories. People who identify as Asian or Pacific Islander (API), for example, are often combined in economic analyses.

This aggregation can mask variation within racial or ethnic categories. As an example, one analysis that used the ACS showed that 11 percent of children in the API group are in poverty, relative to 18 percent of the overall population. But that estimate could understate the poverty rate among children who identify as Pacific Islanders and could overstate the poverty rate among children who identify as Asian, which itself is a broad grouping that encompasses many different communities with various experiences. Similar aggregation can be found across economic literature, including on education, immigration (PDF), and wealth….(More)”.
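The masking effect the authors describe is simple weighted arithmetic, and a minimal, hypothetical sketch makes it concrete. The numbers below are invented for demonstration and are not ACS estimates; only the pattern (a combined rate hiding a large within-group gap) reflects the blog’s point.

```python
# Hypothetical illustration of how aggregating race categories masks variation.
# Weights and poverty shares are invented, not ACS data.
import pandas as pd

children = pd.DataFrame({
    "race": ["Asian", "Asian", "Pacific Islander", "Pacific Islander"],
    "weight": [900, 850, 60, 40],              # survey weights (illustrative)
    "poverty_share": [0.09, 0.12, 0.22, 0.25]  # poverty rate within each cell
})

def poverty_rate(df):
    """Weighted poverty rate across the rows of df."""
    return (df["poverty_share"] * df["weight"]).sum() / df["weight"].sum()

print(f"API (combined): {poverty_rate(children):.1%}")  # ~11.1%
for race, group in children.groupby("race"):
    print(f"{race}: {poverty_rate(group):.1%}")         # Asian ~10.5%, PI ~23.2%
```

Because the Pacific Islander group is a small share of the combined sample, its much higher rate barely moves the aggregate, which is exactly how a combined statistic can understate one community’s poverty while overstating another’s.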

The Promise of Access: Technology, Inequality, and the Political Economy of Hope


Book by Daniel Greene: “Why simple technological solutions to complex social issues continue to appeal to politicians and professionals who should (and often do) know better.

Why do we keep trying to solve poverty with technology? What makes us feel that we need to learn to code—or else? In The Promise of Access, Daniel Greene argues that the problem of poverty became a problem of technology in order to manage the contradictions of a changing economy. Greene shows how the digital divide emerged as a policy problem and why simple technological solutions to complex social issues continue to appeal to politicians and professionals who should (and often do) know better.

Greene shows why it is so hard to get rid of the idea—which he terms the access doctrine—that the problem of poverty can be solved with the right tools and the right skills. This way of thinking is so ingrained that it is adopted by organizations that fight poverty—which often refashion themselves to resemble technology startups. Drawing on years of fieldwork, Greene explores how this plays out in the real world, examining organizational change in technology startups, public libraries, and a charter school in Washington, DC. He finds that as the libraries and school pursue technological solutions, they win praise and funding but also marginalize and alienate the populations they serve. Greene calls for new political alliances that can change the terms on which we understand technology and fight poverty….(More)”.

Our Brain Typically Overlooks This Brilliant Problem-Solving Strategy


Diana Kwon in Scientific American: “For generations, the standard way to learn how to ride a bicycle was with training wheels or a tricycle. But in recent years, many parents have opted to train their kids with balance bikes, pedalless two-wheelers that enable children to develop the coordination needed for bicycling—a skill that is not as easily acquired with an extra set of wheels.

Given the benefits of balance bikes, why did it take so long for them to replace training wheels? There are plenty of other examples in which overlooked solutions that involve subtraction turn out to be better alternatives. In some European cities, for example, urban planners have gotten rid of traffic lights and road signs to make streets safer—an idea that runs counter to conventional traffic design.

Leidy Klotz, an engineer at the University of Virginia, noticed that minimalist designs, in which elements are removed from an existing model, were uncommon. So he reached out to Gabrielle Adams, a social psychologist at the university, to try to figure out why this was the case. The two researchers hypothesized that there might be a psychological explanation: when faced with a problem, people tend to select solutions that involve adding new elements rather than taking existing components away….

These findings, which were published today in Nature, suggest that “additive solutions have sort of a privileged status—they tend to come to mind quickly and easily,” says Benjamin Converse, a social psychologist at the University of Virginia and a co-author of the study. “Subtractive solutions are not necessarily harder to consider, but they take more effort to find.”…(More)”.

Dark patterns, the tricks websites use to make you say yes, explained


Article by Sara Morrison: “If you’re an Instagram user, you may have recently seen a pop-up asking if you want the service to “use your app and website activity” to “provide a better ads experience.” At the bottom there are two boxes: In a slightly darker shade of black than the pop-up background, you can choose to “Make ads less personalized.” A bright blue box urges users to “Make ads more personalized.”

This is an example of a dark pattern: design that manipulates or heavily influences users to make certain choices. Instagram uses terms like “activity” and “personalized” instead of “tracking” and “targeting,” so the user may not realize what they’re actually giving the app permission to do. Most people don’t want Instagram and its parent company, Facebook, to know everything they do and everywhere they go. But a “better experience” sounds like a good thing, so Instagram makes the option it wants users to select more prominent and attractive than the one it hopes they’ll avoid.

There’s now a growing movement to ban dark patterns, and that may well lead to consumer protection laws and action as the Biden administration’s technology policies and initiatives take shape. California is currently tackling dark patterns in its evolving privacy laws, and Washington state’s latest privacy bill includes a provision about dark patterns.

“When you look at the way dark patterns are employed across digital engagement, generally, [the internet allows them to be] substantially exacerbated and made less visible to consumers,” Rebecca Kelly Slaughter, acting chair of the Federal Trade Commission (FTC), told Recode. “Understanding the effect of that is really important to us as we craft our strategy for the digital economy.”

Dark patterns have for years been tricking internet users into giving up their data, money, and time. But if some advocates and regulators get their way, they may not be able to do that for much longer…(More)”.