Roe draft raises concerns data could be used to identify abortion seekers, providers


Article by Chris Mills Rodrigo: “Concerns that data gathered from people’s interactions with their digital devices could potentially be used to identify individuals seeking or performing abortions have come into the spotlight with the news that pregnancy termination services could soon be severely restricted or banned in much of the United States.

Following the leak of a draft majority opinion indicating that the Supreme Court is poised to overturn Roe v. Wade, the landmark 1973 decision that established the federal right to abortion, privacy advocates are raising alarms about the ways law enforcement officials or anti-abortion activists could make such identifications using data available on the open market, obtained from companies or extracted from devices.

“The dangers of unfettered access to Americans’ personal information have never been more obvious. Researching birth control online, updating a period-tracking app or bringing a phone to the doctor’s office could be used to track and prosecute women across the U.S.,” Sen. Ron Wyden (D-Ore.) said in a statement to The Hill. 

Data from web searches, smartphone location pings and online purchases can all be easily obtained with few or no safeguards.

“Almost everything that you do … data can be captured about it and can be fed into a larger model that can help somebody or some entity infer whether or not you may be pregnant and whether or not you may be someone who’s planning to have an abortion or has had one,” Nathalie Maréchal, senior policy manager at Ranking Digital Rights, explained. 

There are three primary ways that data could travel from individuals’ devices to law enforcement or other groups, according to experts who spoke with The Hill.

The first is via third-party data brokers, which make up a shadowy multibillion-dollar industry dedicated to collecting, aggregating and selling location data harvested from individuals’ mobile phones, data that has given advertisers, or virtually anyone willing to pay, unprecedented access to the daily movements of Americans…(More)”.
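
To make concrete how little technical sophistication such identification would require, here is a minimal, hypothetical sketch in Python of a geofence check over the kind of location pings data brokers resell. The device IDs, coordinates, and record layout are invented for illustration and are not drawn from the article or any real data feed.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical, simplified ping records of the kind data brokers resell:
# (device_id, latitude, longitude). Real feeds add timestamps, ad IDs, etc.
pings = [
    ("device-a", 40.7411, -73.9897),
    ("device-b", 34.0522, -118.2437),
    ("device-a", 40.7413, -73.9895),
]

CLINIC = (40.7412, -73.9896)   # illustrative coordinates, not a real facility
RADIUS_KM = 0.1                # flag pings within roughly 100 meters

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Any device with a ping inside the geofence is flagged: no warrant,
# no consent, just a spatial filter over purchased data.
flagged = {d for d, lat, lon in pings if haversine_km(lat, lon, *CLINIC) <= RADIUS_KM}
print(flagged)  # {'device-a'}
```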

Data scientists are using the most annoying feature on your phone to save lives in Ukraine


Article by Bernhard Warner: “In late March, five weeks into Russia’s war on Ukraine, an international team of researchers, aid agency specialists, public health experts, and data nerds gathered on a Zoom call to discuss one of the tragic by-products of the war: the refugee crisis.

The numbers discussed were grim. The United Nations had just declared Ukraine was facing the biggest humanitarian crisis to hit Europe since World War II as more than 4 million Ukrainians—roughly 10% of the population—had been forced to flee their homes to evade Russian President Vladimir Putin’s deadly and indiscriminate bombing campaign. That total has since swelled to 5.5 million, the UN estimates.

What the aid specialists on the call wanted to figure out was how many Ukrainian refugees still remained in the country (a population known as “internally displaced people”) and how many had crossed borders to seek asylum in the neighboring European Union countries of Poland, Slovakia, and Hungary, or south into Moldova. 

Key to an effective humanitarian response of this magnitude is getting accurate and timely data on the flow of displaced people traveling from a Point A danger zone to a Point B safe space. And nobody on the call, which was organized by CrisisReady, an A-team of policy experts and humanitarian emergency responders, had anything close to precise numbers.

But they did have a kind of secret weapon: mobility data.

“The importance of mobility data is often overstated,” Rohini Sampoornam Swaminathan, a crisis specialist at Unicef, told her colleagues on the call. Such anonymized data—pulled from social media feeds, geolocation apps like Google Maps, cell phone towers and the like—may not give the precise picture of what’s happening on the ground in a moment of extreme crisis, “but it’s valuable” as it can fill in points on a map. “It’s important,” she added, “to get a picture for where people are moving, especially in the first days.”

Ukraine, a nation of relatively tech-savvy social media devotees and mobile phone users, is rich in mobility data, and that’s profoundly shaped the way the world sees and interprets the deadly conflict. The CrisisReady group believes the data has an even higher calling—that it can save lives.

Since the first days of Putin’s bombing campaign, various international teams have been tapping publicly available mobility data to map the refugee crisis and coordinate an effective response. They believe the data can reveal where war-torn Ukrainians are now, and even where they’re heading. In the right hands, the data can provide local authorities the intel they need to get essential aid—medical care, food, and shelter—to the right place at the right time…(More)”
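
As a rough illustration of the kind of aggregation the teams described above perform, the Python sketch below counts origin-to-destination flows from anonymized device observations. The regions, identifiers, and record format are invented for illustration; real pipelines work from far noisier inputs and calibrate such counts against census baselines before treating them as population estimates.

```python
from collections import Counter

# Hypothetical movement records of the kind mobility-data teams aggregate:
# (anonymized_id, region_seen_first, region_seen_later).
observations = [
    ("u1", "Kharkiv", "Lviv"),
    ("u2", "Kharkiv", "Poland"),
    ("u3", "Kyiv", "Poland"),
    ("u4", "Kyiv", "Kyiv"),   # still observed in place, not counted as a flow
]

# Count origin -> destination flows for devices that moved between regions.
flows = Counter((origin, dest) for _, origin, dest in observations if origin != dest)

for (origin, dest), n in flows.most_common():
    print(f"{origin} -> {dest}: {n} device(s)")
```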

To make AI fair, here’s what we must learn to do


Article by Mona Sloane: “…From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But these won’t be enough to make AI equitable. There must be practical know-how on how to build AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.

Right now, developers who design AI work in different realms from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get to have a productive conversation with a technologist, or with my fellow social scientists, that moves beyond flagging problems. When I look through conference proceedings, I see the same: very few projects integrate social needs with engineering innovation.

To spur fruitful collaborations, mandates and approaches need to be designed more effectively. Here are three principles that technologists, social scientists and affected communities can apply together to yield AI applications that are less likely to warp society.

Include lived experience. Vague calls for broader participation in AI systems miss the point. Nearly everyone interacting online — using Zoom or clicking reCAPTCHA boxes — is feeding into AI training data. The goal should be to get input from the most relevant participants.

Otherwise, we risk participation-washing: superficial engagement that perpetuates inequality and exclusion. One example is the EU AI Alliance: an online forum, open to anyone, designed to provide democratic feedback to the European Commission’s appointed expert group on AI. When I joined in 2018, it was an unmoderated echo chamber of mostly men exchanging opinions, not representative of the population of the EU, the AI industry or relevant experts…(More)”

Unmet Desire


Essay: “I vividly remember March 2020, the month the United States shut down as COVID-19 spread uncontrollably and upended daily life. At the time, I worked at Cornell University in upstate New York. As we adjusted to a new normal, my Cornell colleague Elizabeth Day and I suspected that local leaders were facing unprecedented policy challenges that were not making the major headlines.

We decided to reach out to county policymakers throughout upstate New York, inviting them to share challenges they were facing. We offered to discuss research that might prove helpful. Responses soon poured in.

One county executive was trying to figure out how to provide childcare for first responders. Childcare centers were ordered closed, but first responders could not stay home to watch their kids. The executive needed systematic research on other options. A second local policymaker watched as her county’s offices shuttered and work moved online; she needed research on how other local leaders had used mobile vans to provide necessary services to rural residents without internet. Another county official sought to design a high-quality survey to elicit frank responses from municipal leaders about COVID-related challenges. In this case, she needed to discuss the fundamentals of survey design and implementation with an expert.

These responses led us to engage in an informal collaboration with each of these policymakers. By informal collaboration, I mean a collaborative exchange in which people with diverse forms of knowledge, expertise, and lived experience share what they know with the goal of developing an expanded understanding of a problem—yet still remain autonomous decisionmakers. In these cases, we as researchers brought knowledge about policy analysis and survey fundamentals, and the policymakers brought detailed knowledge about their present needs, local context, and historical challenges. All this diverse information was crucial to chart a way forward that was informed by evidence.

Yet it turns out our interactions were highly unusual. During our conversations, all the policymakers revealed that researchers from colleges and universities in their immediate area had never reached out in this way, and that they had no regular communication with local researchers.

This disconnect is a problem. Local policymakers are responsible for almost $2 trillion of spending annually, and they oversee many areas in which technical knowledge is essential, such as promoting economic development, building and maintaining roads, educating children, policing, fighting fires, determining acceptable land use, and providing public transportation…(More)”.

Opening Up to Open Science


Essay by Chelle Gentemann, Christopher Erdmann and Caitlin Kroeger: “The modern Hippocratic Oath outlines ethical standards that physicians worldwide swear to uphold. “I will respect the hard-won scientific gains of those physicians in whose steps I walk,” one of its tenets reads, “and gladly share such knowledge as is mine with those who are to follow.”

But what form, exactly, should knowledge-sharing take? In the practice of modern science, knowledge in most scientific disciplines is generally shared through peer-reviewed publications at the end of a project. Although publication is both expected and incentivized—it plays a key role in career advancement, for example—many scientists do not take the extra step of sharing data, detailed methods, or code, making it more difficult for others to replicate, verify, and build on their results. Even beyond that, professional science today is full of personal and institutional incentives to hold information closely to retain a competitive advantage.

This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason. Technology has not only improved the speed at which science is carried out, but many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS disease outbreak, the task of sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration.

Open science is a path toward a collaborative culture that, enabled by a range of technologies, empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. Yet despite its benefits, open science has not been widely embraced…(More)”

Beyond ‘X Number Served’


Essay by Mona Mourshed: “Metrics matter, but they should always be plural. Focus on the speedometer, ignore the gas gauge, and you’re sure to stop short of your destination. But while the plague of metric monomania can occasionally be an issue in business, it’s an even bigger problem within the social sector. After all, market discipline forces business leaders to weigh tradeoffs between costs and sales, or between product quality and service level speed. Multiple metrics help executives get the balance right, even as they scale.

By contrast, nonprofits too often receive (well-intended) guidance from stakeholders like funders and board members to disproportionately zero in on a single goal: serving the maximum number of beneficiaries. That’s a perfectly understandable impulse, of course. But it confuses scale with just one impact dimension, reach. “We have to recognize that a higher number does not necessarily indicate transformation,” says Lisha McCormick, CEO of Last Mile Health, which supports countries in building strong community health systems. “Higher reach alone does not equate to impact.”

This is a problem because excessively defining and valuing programs by the number of people they serve can give rise to unintended consequences. Nonprofit leaders can find themselves discussing how to serve more people through “lighter touch” models or debating ambiguous metrics like “reached” or “touched” to expand participant numbers (while fighting uneasiness about the potential adverse implications for program quality)…(More)”.

How Tech Despair Can Set You Free


Essay by Samuel Matlack: “One way to look at the twentieth century is to say that nations may rise and fall but technical progress remains forever. Its sun rises on the evil and on the good, and its rain falls on the just and on the unjust. Its sun can be brighter than a thousand suns, scorching our enemies, but, with some time and ingenuity, it can also power air conditioners and 5G. One needs to look on the bright side, living by faith and not by sight.

The century’s inquiring minds wished to know whether this faith in progress is meaningfully different from blindness. Ranking high among those minds was the French historian, sociologist, and lay theologian Jacques Ellul, and his answer was simple: No.

In America, Ellul became best known for his book The Technological Society. The book’s signature term was “technique,” an idea he developed throughout his vast body of writing. Technique is the social structure on which modern life is built. It is the consciousness that has come to govern all human affairs, suppressing questions of ultimate human purposes and meaning. Our society no longer asks why we should do anything. All that matters anymore, Ellul argued, is how to do it — to which the canned answer is always: More efficiently! Much as a modern machine can be said to run on its own, so does the technological society. Human control of it is an illusion, which means we are on a path to self-destruction — not because the social machine will necessarily kill us (although it might), but because we are fast becoming soulless creatures.

While tech pessimists celebrated Ellul’s book as an urgent warning of impending doom, tech optimists dismissed it as alarmist exaggeration. Beneath this mixed reception lies a more difficult truth, because what on the surface looks like plain old doomsaying is in fact a highly unusual project….

But looking back on that era, optimists might think they are justified in claiming that the doomsaying was overblown. The Soviet Union fell without the bomb getting dropped. No third world war has been looming, and while the world remains a dangerous place, the good guys are still winning, thanks in large part to massively efficient economies and technological supremacy. China may have more steel, but we have more guns. (Let’s not talk about the germs.) And the digital revolution, despite collateral damage, has brought a bounty of benefits we largely take for granted. So to the optimist, Ellul’s talk some seventy years ago about how we were facing a choice between suicide and freedom sounds antiquated. He was a man of his time.

So why bother? What use can we make of Ellul’s vision? Because even if we believe that our world’s most dehumanizing technological projects — from Beijing to Silicon Valley — demand a fierce defense of human dignity, why look to Ellul when we have our own productive cottage industry of critics, ethicists, theorists, and prophets? Why put up with Ellul’s abstract style and the bizarre structure of his gigantic output — the fact that one may find in any given text only half of what he actually thought about the subject, thanks to what he called his dialectical approach?…(More)”.

Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.
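
For readers unfamiliar with the mechanic in its original, narrow sense, here is a minimal Python sketch of the forum-era shadowban described above: the banned user’s posts are returned only to that user, so the poster notices nothing while everyone else sees nothing. The data structures and function names are illustrative and do not correspond to any platform’s actual system.

```python
# Accounts a moderator has shadowbanned (illustrative).
shadowbanned = {"troll42"}

posts = [
    {"author": "alice", "text": "Meeting notes posted."},
    {"author": "troll42", "text": "Spam spam spam."},
]

def visible_posts(viewer: str) -> list[dict]:
    """Return the posts a given viewer is allowed to see."""
    return [
        p for p in posts
        if p["author"] not in shadowbanned or p["author"] == viewer
    ]

print([p["text"] for p in visible_posts("alice")])    # ['Meeting notes posted.']
print([p["text"] for p in visible_posts("troll42")])  # both posts: the ban is invisible to its target
```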

Governance of the Inconceivable


Essay by Lisa Margonelli: “How do scientists and policymakers work together to design governance for technologies that come with evolving and unknown risks? In the Winter 1985 Issues, seven experts reflected on the possibility of a large nuclear conflict triggering a “nuclear winter.” These experts agreed that the consequences would be horrifying: even beyond radiation effects, for example, burning cities could put enough smoke in the atmosphere to block sunlight, lowering ground temperatures and threatening people, crops, and other living things. In the same issue, former astronaut and then senator John Glenn wrote about the prospects for several nuclear nonproliferation agreements he was involved in negotiating. This broad discussion of nuclear weapons governance in Issues—involving legislators Glenn and then senator Al Gore as well as scientists, Department of Defense officials, and weapons designers—reflected the discourse of the time. In the culture at large, fears of nuclear annihilation became ubiquitous, and today you can easily find danceable playlists containing “38 Essential ’80s Songs About Nuclear Anxiety.”

But with the end of the Cold War, the breakup of the Soviet Union, and the rapid growth of a globalized economy and culture, these conversations receded from public consciousness. Issues has not run an article on nuclear weapons since 2010, when an essay argued that exaggerated fear of nuclear weapons had led to poor policy decisions. “Albert Einstein memorably proclaimed that nuclear weapons ‘have changed everything except our way of thinking,’” wrote political scientist John Mueller. “But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.”

All these old conversations suddenly became relevant again as our editorial team worked on this issue. On February 27, when Vladimir Putin ordered Russia’s nuclear weapons put on “high alert” after invading Ukraine, United Nations Secretary-General António Guterres declared that “the mere idea of a nuclear conflict is simply inconceivable.” But, in the space of a day, what had long seemed inconceivable was suddenly being very actively conceived….(More)”.

The challenges of protecting data and rights in the metaverse


Article by Urvashi Aneja: “Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation leaves nearly 2 million unique recordings of body language.

Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost impossible. Research also shows that a user could be correctly identified from just five minutes of VR data, with all personally identifiable information stripped, by a machine learning algorithm with 95% accuracy. This type of data isn’t covered by most biometrics laws.

But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography,” referring to the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behavior that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or change in facial expressions….(More)”
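
To illustrate why stripping names from such telemetry offers little protection, here is a toy Python sketch of re-identification by matching motion signatures. The feature names, values, and simple nearest-neighbor matching are invented for illustration; the research cited above used far richer VR tracking data and a trained machine learning model.

```python
import math

# Motion-signature features captured in an earlier, labeled session
# (e.g., mean head height, arm span, head-sway variance). Values are made up.
enrolled = {
    "user_1": (1.62, 0.71, 0.045),
    "user_2": (1.78, 0.80, 0.031),
    "user_3": (1.70, 0.75, 0.060),
}

# "Anonymized" telemetry from a later session, with identifiers stripped.
anonymous_session = (1.77, 0.79, 0.033)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.dist(a, b)

# Match the unlabeled session to the closest enrolled signature.
best_match = min(enrolled, key=lambda user: distance(enrolled[user], anonymous_session))
print(best_match)  # user_2: the "anonymous" data points back to an individual
```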