Collective data rights can stop big tech from obliterating privacy


Article by Martin Tisné: “…There are two parallel approaches that should be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill, one of the few modern laws focused on automated decision-making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch for what future laws could look like: the source code behind such systems must be made available to the public, and anyone can request that code.

Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”

The Coronavirus Pandemic Creative Responses Archive


National Academies of Sciences: “Creativity often flourishes in stressful times because innovation evolves out of need. During the coronavirus pandemic, we are witnessing a range of creative responses from individuals, communities, organizations, and industries. Some are intensely personal, others expansively global—mirroring the many ways the pandemic has affected us. What do these responses to the pandemic tell us about our society, our level of resilience, and how we might imagine the future? Explore the Coronavirus Pandemic Creative Responses Archive…

Building and Sustaining State Data Integration Efforts: Legislation, Funding, and Strategies


Policy Report by AISP: “The economic and social impacts of the COVID-19 pandemic have heightened demand for cross-agency data capacity, as policymakers are forced to reconcile the need for expanded services with extreme fiscal constraints. In this context, integrated data systems (IDS) – also commonly referred to as data hubs, data collaboratives, or state longitudinal data systems – are a valuable resource for data-informed decision making across agencies. IDS utilize standard governance processes and legal agreements to grant authority for routine, responsible use of linked data, and institutionalize roles across partners with shared priorities.

Despite these benefits, creating and sustaining IDS remains a challenge for many states. Legislation and executive action can be powerful mechanisms to overcome this challenge and promote the use of cross-agency data for public good. Legislative and/or executive actions on data sharing can:
– Require data sharing to address a specific state policy priority
– Mandate oversight and planning activities to promote a state data sharing strategy
– Grant authority to a particular office or agency to lead cross-agency data sharing

This brief is organized in three parts. First, we offer examples of these three approaches from states that have used legislation and/or executive orders to enable data integration, as well as key considerations related to each. Second, we discuss state and federal funding opportunities that can help in implementing legislative or executive actions on data sharing and enhancing long-term sustainability of data sharing efforts. Third, we offer five foundational strategies to ensure that legislative or executive action is both ethical and effective…(More)”.

We Need to Reimagine the Modern Think Tank


Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendations—need to change their approach as well.

Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people with those degrees, lower than the rates among white people (10.5 percent) or Asian/Pacific Islanders (17 percent). And in specific fields like economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.

Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.

Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve would make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it is now commonly understood that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”

What Robots Can — And Can’t — Do For the Old and Lonely


Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….

So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.

In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.

How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those at high risk of being transferred to an ICU, placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
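To make the idea of an automated triage score concrete, here is a minimal, purely illustrative sketch of how a deterioration-style score can map a handful of vitals and labs to a single number. The features, weights, and scaling are invented for this example; this is not Epic’s proprietary Deterioration Index or any validated clinical tool.

```python
# An invented "deterioration score" for illustration only: hypothetical
# features and weights, NOT Epic's model or a validated clinical tool.
import math

def deterioration_score(respiratory_rate: float,
                        potassium_mmol_l: float,
                        heart_rate: float) -> float:
    """Map a few vitals/labs to a 0-100 risk-of-decline score."""
    # Hypothetical logistic model: faster breathing, potassium far from a
    # normal 4.0 mmol/L, and a faster heart rate all push the score upward.
    z = (0.15 * (respiratory_rate - 16)
         + 0.9 * abs(potassium_mmol_l - 4.0)
         + 0.04 * (heart_rate - 75))
    return 100 / (1 + math.exp(-z))

# Example: a patient breathing fast, with mildly low potassium.
print(round(deterioration_score(respiratory_rate=28,
                                potassium_mmol_l=3.2,
                                heart_rate=102), 1))
```

Real tools of this kind learn their weights from large patient datasets rather than hand-tuning them, which is precisely why the training data and rollout practices described above matter so much.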

Deepfake Maps Could Really Mess With Your Sense of the World


Will Knight at Wired: “Satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, triggering international condemnation and sanctions.

Other aerial images—of nuclear installations in Iran and missile sites in North Korea, for example—have had a similar impact on world events. Now, image-manipulation tools made possible by artificial intelligence may make it harder to accept such images at face value.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. Another algorithm then helps refine the performance of the first by trying to detect when an image has been manipulated….(More)”.
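As a rough illustration of the adversarial training the article describes, the sketch below pits a generator against a discriminator on toy data. It is a plain GAN loop in PyTorch, a simplified stand-in for the CycleGAN used by Zhao and colleagues, which adds convolutional networks, a second generator-discriminator pair, and a cycle-consistency loss; all names and dimensions here are invented for the example.

```python
# Toy GAN training loop: one generator, one discriminator, random "tiles".
import torch
import torch.nn as nn

IMG_DIM = 32 * 32 * 3  # a flattened toy "satellite tile"

# Generator: maps a source-style tile to a target-style tile.
generator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" a target-style tile looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    source = torch.rand(16, IMG_DIM)       # stand-in for, e.g., Seattle tiles
    real_target = torch.rand(16, IMG_DIM)  # stand-in for, e.g., Beijing tiles
    fake_target = generator(source)

    # 1) Train the discriminator to separate real target imagery from fakes.
    d_loss = (bce(discriminator(real_target), torch.ones(16, 1))
              + bce(discriminator(fake_target.detach()), torch.zeros(16, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_loss = bce(discriminator(fake_target), torch.ones(16, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The same adversarial pressure that makes the generated imagery convincing is what makes detecting such forgeries hard: the generator is explicitly optimized against whatever the detector has learned.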

Quantitative Description of Digital Media


Introduction by Kevin Munger, Andrew M. Guess and Eszter Hargittai: “We introduce the rationale for a new peer-reviewed scholarly journal, the Journal of Quantitative Description: Digital Media. The journal is intended to create a new venue for research on digital media and address several deficiencies in the current social science publishing landscape. First, descriptive research is undersupplied and undervalued. Second, research questions too often only reflect dominant theories and received wisdom. Third, journals are constrained by unnecessary boundaries defined by discipline, geography, and length. Fourth, peer review is inefficient and unnecessarily burdensome for both referees and authors. We outline the journal’s scope and structure, which is open access, fee-free and relies on a Letter of Inquiry (LOI) model. Quantitative description can appeal to social scientists of all stripes and is a crucial methodology for understanding the continuing evolution of digital media and its relationship to important questions of interest to social scientists….(More)”.

Creating Public Value using the AI-Driven Internet of Things


Report by Gwanhoo Lee: “Government agencies seek to deliver quality services in increasingly dynamic and complex environments. However, outdated infrastructures—and a shortage of systems that collect and use massive real-time data—make it challenging for the agencies to fulfill their missions. Governments have a tremendous opportunity to transform public services using the “Internet of Things” (IoT) to provide situation-specific and real-time data, which can improve decision-making and optimize operational effectiveness.

In this report, Professor Lee describes IoT as a network of physical “things” equipped with sensors and devices that enable data transmission and operational control with little or no human intervention. Organizations have recently begun to embrace artificial intelligence (AI) and machine learning (ML) technologies to drive even greater value from IoT applications. AI/ML enhances the data analytics capabilities of IoT by enabling accurate predictions and optimal decisions in new ways. Professor Lee calls this AI/ML-powered IoT the “AI-Driven Internet of Things” (hereafter, AIoT). AIoT is a natural evolution of IoT as computing, networking, and AI/ML technologies are increasingly converging, enabling organizations to develop as “cognitive enterprises” that capitalize on the synergy across these emerging technologies.

Strategic application of IoT in government is in an early phase. Few U.S. federal agencies have explicitly incorporated IoT in their strategic plan, or connected the potential of AI to their evolving IoT activities. The diversity and scale of public services combined with various needs and demands from citizens provide an opportunity to deliver value from implementing AI-driven IoT applications.

Still, IoT is already making the delivery of some public services smarter and more efficient, including public parking, water management, public facility management, safety alerts for the elderly, traffic control, and air quality monitoring. For example, the City of Chicago has deployed a citywide network of air quality sensors mounted on lampposts. These sensors track the presence of several air pollutants, helping the city develop environmental responses that improve the quality of life at a community level. As the cost of sensors decreases while computing power and machine learning capabilities grow, IoT will become more feasible and pervasive across the public sector—with some estimates of a market approaching $5 trillion in the next few years.

Professor Lee’s research aims to develop a framework of alternative models for creating public value with AIoT, validating the framework with five use cases in the public domain. Specifically, this research identifies three essential building blocks to AIoT: sensing through IoT devices, controlling through the systems that support these devices, and analytics capabilities that leverage AI to understand and act on the information accessed across these applications. By combining the building blocks in different ways, the report identifies four models for creating public value (illustrated in the sketch after the list):

  • Model 1 utilizes only sensing capability.
  • Model 2 uses sensing capability and controlling capability.
  • Model 3 leverages sensing capability and analytics capability.
  • Model 4 combines all three capabilities.
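The sketch below, which is not from the report, shows one way the three building blocks might compose into the four models: sensing is always present, while controlling and analytics are optional layers. The class and function names are hypothetical illustrations of the framework, not an actual system or API.

```python
# Hypothetical composition of the three AIoT capabilities into four models.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Sensing:
    """IoT devices streaming raw observations (e.g., an air-quality reading)."""
    read: Callable[[], float]

@dataclass
class Controlling:
    """Actuation over the physical world (e.g., adjusting a traffic signal)."""
    actuate: Callable[[float], None]

@dataclass
class Analytics:
    """AI/ML layer turning observations into predictions or decisions."""
    predict: Callable[[float], float]

@dataclass
class AIoTService:
    sensing: Sensing
    controlling: Optional[Controlling] = None  # absent in Models 1 and 3
    analytics: Optional[Analytics] = None      # absent in Models 1 and 2

    def run_once(self) -> float:
        value = self.sensing.read()        # Model 1: sense only
        if self.analytics is not None:     # Models 3 and 4: analyze
            value = self.analytics.predict(value)
        if self.controlling is not None:   # Models 2 and 4: act
            self.controlling.actuate(value)
        return value

# Example: a "Model 4" service that senses, predicts, and acts.
service = AIoTService(
    sensing=Sensing(read=lambda: 42.0),
    analytics=Analytics(predict=lambda reading: reading * 1.1),
    controlling=Controlling(actuate=lambda sp: print(f"setpoint -> {sp:.1f}")),
)
service.run_once()
```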

The analysis of five AIoT use cases in the public transport sector from Germany, Singapore, the U.K., and the United States identifies 10 critical success factors, such as creating public value, using public-private partnerships, engaging with the global technology ecosystem, implementing incrementally, quantifying the outcome, and using strong cybersecurity measures….(More)”.

The Switch: How the Telegraph, Telephone, and Radio Created the Computer


Book by Chris McDonald: “Digital technology has transformed our world almost beyond recognition over the past four decades. We spend our lives surrounded by laptops, phones, tablets, and video game consoles — not to mention the digital processors that are jam-packed into our appliances and automobiles. We use computers to work, to play, to learn, and to socialize. The Switch tells the story of the humble components that made all of this possible — the transistor and its antecedents, the relay, and the vacuum tube.

All three of these devices were originally developed without any thought for their application to computers or computing. Instead, they were created for communication, in order to amplify or control signals sent over a wire or over the air. By repurposing these amplifiers as simple switches, flipped on and off by the presence or absence of an electric signal, later scientists and engineers constructed our digital universe. Yet none of it would have been possible without the telegraph, telephone, and radio. In these pages you’ll find a story of the interplay between science and technology, and the surprising ways in which inventions created for one purpose can be adapted to another. The tale is enlivened by the colorful cast of scientists and innovators, from Luigi Galvani to William Shockley, who, whether through brilliant insight or sheer obstinate determination, contributed to the evolution of the digital switch….(More)”.