Phones Know Who Went to an Abortion Clinic. Whom Will They Tell?


Patience Haggin at Wall Street Journal: “Concerns over the collection and storage of reproductive health data are the latest challenge for the location-data industry, which over the past few years has faced scrutiny from lawmakers and regulators. Data-privacy laws in California and other states in recent years have placed new restrictions on the companies, such as requiring them to give consumers the right to opt out of having their data sold.

The Federal Trade Commission last month said it would strictly enforce laws governing the collection, use and sharing of sensitive consumer data. “The misuse of mobile location and health information—including reproductive health data—exposes consumers to significant harm,” wrote Kristin Cohen, acting associate director for the commission’s division of privacy and identity protection.

Without clear regulations for the location-data industry’s data on abortion clinics, individual companies are determining how to respond to the implications of the Supreme Court ruling.

Alphabet Inc.’s Google recently said it would automatically delete visits to abortion clinics from its users’ location history.

Apple Inc. says it minimizes collection of personal data and that most location data is stored in ways the company can’t access. It has no way to access Health and Maps app data for people using updated operating systems, and can’t provide such data in response to government requests, the company says.

The vast location-data ecosystem includes many other lesser-known companies that are taking a different approach. A trade group for some of those firms, Network Advertising Initiative, announced a new set of voluntary standards for member companies in June, two days before the Dobbs ruling came out.

Participating companies, including Foursquare Labs Inc., Cuebiq Inc. and Precisely Inc.’s PlaceIQ, agreed not to use, sell or share precise location data about visits to sensitive locations—including abortion clinics—except to comply with a legal obligation…(More)”

Who Is Falling for Fake News?


Article by Angie Basiouny: “People who read fake news online aren’t doomed to fall into a deep echo chamber where the only sound they hear is their own ideology, according to a revealing new study from Wharton.

Surprisingly, readers who regularly browse fake news stories served up by social media algorithms are more likely to diversify their news diet by seeking out mainstream sources. These well-rounded news junkies make up more than 97% of online readers, compared with the scant 2.8% who consume online fake news exclusively.

“We find that these echo chambers that people worry about are very shallow. This idea that the internet is creating an echo chamber is just not holding out to be true,” said Senthil Veeraraghavan, a Wharton professor of operations, information and decisions.

Veeraraghavan is co-author of the paper, “Does Fake News Create Echo Chambers?” It was also written by Ken Moon, Wharton professor of operations, information and decisions, and Jiding Zhang, an assistant professor of operations management at New York University Shanghai who earned her doctorate at Wharton.

The study, which examined the browsing activity of nearly 31,000 households during 2017, offers empirical evidence that goes against popular beliefs about echo chambers. While echo chambers certainly are dark and dangerous places, they aren’t metaphorical black holes that suck in every person who reads an article about, say, Obama birtherism theory or conspiracies about COVID-19 vaccines. The study found that households exposed to fake news actually increase their exposure to mainstream news by 9.1%.

“We were surprised, although we were very aware going in that there was much that we did not know,” Moon said. “One thing we wanted to see is how much fake news is out there. How do we figure out what’s fake and what’s not, and who is producing the fake news and why? The economic structure of that matters from a business perspective.”…(More)”

Sustaining Open Data as a Digital Common — Design principles for Common Pool Resources applied to Open Data Ecosystems


Paper by Johan Linåker and Per Runeson: “Digital commons is an emerging phenomenon and of increasing importance, as we enter a digital society. Open data is one example that makes up a pivotal input and foundation for many of today’s digital services and applications. Ensuring sustainable provisioning and maintenance of the data, therefore, becomes even more important.

We aim to investigate how such provisioning and maintenance can be collaboratively performed in the community surrounding a common. Specifically, we look at Open Data Ecosystems (ODEs), a type of community of actors, openly sharing and evolving data on a technological platform.

We use Elinor Ostrom’s design principles for Common Pool Resources as a lens to systematically analyze the governance of earlier reported cases of ODEs using a theory-oriented software engineering framework.

We find that, while natural commons must regulate consumption, digital commons such as open data maintained by an ODE must stimulate both use and data provisioning. Governance needs to enable such stimulus while also ensuring that the collective action can still be coordinated and managed within the frame of available maintenance resources of a community. Subtractability is, in this sense, a concern regarding the resources required to maintain the quality and value of the data, rather than the availability of data. Further, we derive empirically-based recommended practices for ODEs based on the design principles by Ostrom for how to design a governance structure in a way that enables a sustainable and collaborative provisioning and maintenance of the data.

ODEs are expected to play a role in data provisioning that democratizes the digital society and enables innovation from smaller commercial actors. Our empirically based guidelines intend to support this development…(More)”.

The New Moral Mathematics


Book Review by Kieran Setiya: “Space is big,” wrote Douglas Adams in The Hitchhiker’s Guide to the Galaxy (1979). “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”


Time is big, too—even if we just think on the timescale of a species. We’ve been around for approximately 300,000 years. There are now about 8 billion of us, roughly 7 percent of all humans who have ever lived. You may think that’s a lot, but it’s just peanuts to the future. If we survive for another million years—the longevity of a typical mammalian species—at even a tenth of our current population, there will be 8 trillion more of us. We’ll be outnumbered by future people on the scale of a thousand to one.
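The review's back-of-envelope arithmetic can be checked directly. The one assumption not stated in the text is the rate of population turnover; a rough 100-year average lifespan makes the figures come out as quoted:

```python
# Reproducing the review's "8 trillion future people" estimate.
# Assumption (not stated in the text): an average lifespan of roughly
# 100 years, so the population turns over 10,000 times in a million years.
current_population = 8e9                      # people alive today
future_population = current_population / 10   # "a tenth of our current population"
years = 1_000_000                             # a typical mammalian species' longevity
lifespan = 100                                # assumed average years per life

future_people = future_population * (years / lifespan)
print(f"{future_people:.0e}")                                  # 8e+12, i.e. 8 trillion
print(f"{future_people / current_population:.0f} to one")      # 1000 to one
```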

What we do now affects those future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.

This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.

Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.
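The comparison in that last sentence is a simple expected-value calculation, which a few lines make explicit (the figures are the ones quoted in the passage):

```python
# The "moral mathematics" comparison described in the passage:
# expected lives saved by a tiny reduction in extinction risk.
lives_saved_now = 1_000_000          # option A: save a million lives today
future_lives_at_stake = 8e12         # at least 8 trillion future lives
risk_reduction = 0.0001 / 100        # option B: shave 0.0001 percent off extinction risk

expected_future_lives = future_lives_at_stake * risk_reduction
print(f"{expected_future_lives:,.0f}")           # 8,000,000 expected lives
print(expected_future_lives > lives_saved_now)   # True: the calculus favors option B
```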

Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.

But even if you think we are allowed to prioritize ourselves and those we love, and not allowed to violate the rights of some in order to help others, shouldn’t you still care about the fate of strangers, even those who do not yet exist? The moral mathematics of aggregate well-being may not be the whole of ethics, but isn’t it a vital part? It belongs to the domain of morality we call “altruism” or “charity.” When we ask what we should do to benefit others, we can’t ignore the disquieting fact that the others who occupy the future may vastly outnumber those who occupy the present, and that their very existence depends on us.

From this point of view, it’s an urgent question how what we do today will affect the further future—urgent especially when it comes to what Nick Bostrom, the philosopher who directs the Future of Humanity Institute, calls the “existential risk” of human extinction. This is the question MacAskill takes up in his new book, What We Owe the Future, a densely researched but surprisingly light read that ranges from omnicidal pandemics to our new AI overlords without ever becoming bleak…(More)”.

Localising AI for crisis response


Report by Aleks Berditchevskaia, Kathy Peach and Isabel Stewart: “Putting power back in the hands of frontline humanitarians and local communities.

This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.

The two collective crisis intelligence tool prototypes developed were:

  • NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis.
  • Report and Respond: a French language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and receive real-time guidance on appropriate responses.
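The report does not publish Report and Respond's implementation, but its core step, matching an incoming SMS against a curated rumour database, can be sketched with stdlib fuzzy matching. The rumour texts, responses, and threshold below are invented for illustration:

```python
# Illustrative sketch only (not the actual Report and Respond code):
# match a volunteer's SMS to the closest known rumour and return guidance.
import difflib

# Hypothetical rumour database: known rumour -> vetted response
RUMOUR_DB = {
    "drinking hot water cures covid": "False. Hot water does not cure COVID-19.",
    "vaccines change your dna": "False. COVID-19 vaccines do not alter DNA.",
}

def check_rumour(sms_text, threshold=0.6):
    """Return guidance for the closest known rumour, or None if no match."""
    matches = difflib.get_close_matches(
        sms_text.lower(), RUMOUR_DB.keys(), n=1, cutoff=threshold)
    return RUMOUR_DB[matches[0]] if matches else None

print(check_rumour("Drinking hot water cures COVID"))
```

A production system would add language handling (the real tool is French-language) and escalation for unmatched messages.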

Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.

The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, IFRC Solferino Academy, and Open Lab Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.

We found that collective crisis intelligence:

  • has the potential to make local humanitarian action more timely and appropriate to local needs.
  • can transform locally-generated data to drive new forms of (anticipatory) action.

We found that participatory AI:

  • can overcome several critiques and limitations of AI – as well as helping to improve model performance.
  • helps to surface tensions between the assumptions and standards set by AI gatekeepers versus the pragmatic reality of implementation.
  • creates opportunities for building and sharing new capabilities among frontline staff and data scientists.

We also validated that collective crisis intelligence and participatory AI can help increase trust in AI tools, but more research is needed to untangle the factors that were responsible…(More)”.

Using Wikipedia for conflict forecasting


Article by Christian Oswald and Daniel Ohrenhofer: “How can we improve our ability to predict conflicts? Scholars have struggled with this question for a long time. However, as a discipline, and especially over the last two decades, political science has made substantial progress. In general, what we need to improve predictions are advances in data and methodology. Data advances involve both improving the quality of existing data and developing new data sources. We propose a new data source for conflict forecasting efforts: Wikipedia.

The number of country page views indicates international salience of, or interest in, a country. Meanwhile, the number of changes to a country page indicates political controversy between opposing political views.

We took part in the Violence Early-Warning System’s friendly competition to predict changes in battle-related deaths. In our work, we evaluate our findings with out-of-sample predictions using held-out, previously unseen data, and true forecasts into the future. We find support for the predictive power of country page views, whereas we do not for page changes…

Globally available data, updated monthly, are ideal for (near) real-time forecasting. However, many commonly used data sources are available only annually. They are updated once a year, often with considerable delay.

Some of these variables, such as democracy or GDP, tend to be relatively static over time. Furthermore, many data sources face the problem of missing values. These occur when it is not possible to find reliable data for a variable for a given country.


More recent data sources such as Twitter, images or text as data, or mobile phone data, often do not provide global coverage. What’s more, collecting and manipulating data from such sources is typically computationally and/or financially costly. Wikipedia provides an alternative data source that, to some extent, overcomes many of these limitations…(More)”.
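To make the idea concrete, here is an illustrative sketch (not the authors' code) of turning monthly page views for a country's Wikipedia page into a simple salience feature. Real values would come from the Wikimedia Pageviews API; the numbers here are invented:

```python
# Illustrative sketch: month-over-month change in country-page views
# as a salience feature for conflict forecasting. Values are invented.
monthly_views = {          # month -> page views for one country page
    "2021-01": 1_200_000,
    "2021-02": 1_150_000,
    "2021-03": 2_900_000,  # a spike suggests rising international salience
}

months = sorted(monthly_views)
levels = [monthly_views[m] for m in months]
# Month-over-month relative change in views
changes = [(b - a) / a for a, b in zip(levels, levels[1:])]
for month, change in zip(months[1:], changes):
    print(f"{month}: {change:+.1%} change in page views")
```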

Does public opinion shape public policy? Effect of citizen dissent on legislative outcomes


Paper by Nara Park and Jihyun Ham: “In South Korea, the Advance Notice Legislation (ANL) system requires by law that a public announcement be issued on any proposed bill that is likely to affect the fundamental rights, duties, and/or daily life of the general public. By investigating the effects of public dissent submitted via the online ANL system in South Korea, this study attempts to address the critical issue of how to increase citizen participation in the political process and to offer a possible strategy that modern democratic governments can employ in this regard. The findings suggest that citizens will actively participate in the political process to make their voices heard when an appropriate participatory mechanism is available, but they will be more active if the administration encourages citizen participation with various policies and institutions. In other words, formal and informal institutions actively interact to affect the behavior of actors both within and outside the political arena…(More)”.

Designing Data Spaces: The Ecosystem Approach to Competitive Advantage


Open access book edited by Boris Otto, Michael ten Hompel, and Stefan Wrobel: “…provides a comprehensive view of data ecosystems and platform economics, from methodological and technological foundations up to reports from practical implementations and applications in various industries.

To this end, the book is structured in four parts: Part I “Foundations and Contexts” provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II “Data Space Technologies” subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the use of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several “Solutions and Applications”, including products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more.

Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook to future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty…(More)”.

Meet the new GDP prototype that tracks inequality


Article by Greg Rosalsky: “…Nearly a century after Kuznets pioneered the use of GDP, economists Thomas Blanchet, Emmanuel Saez, and Gabriel Zucman are trying to revolutionize it. In a new paper titled “Real-Time Inequality,” the economists imagine a new kind of GDP, one that isn’t merely a single number telling us about total economic growth, but a collection of numbers telling us where the gains from this growth are flowing. They already have a working prototype that they’ve published online, and it can provide some important insights about our economy right now…

Gabriel Zucman is an economist at UC Berkeley and the director of the James M. and Cathleen D. Stone Center on Wealth and Income Inequality. He has been working to transform government economic statistics — also called “national accounts” — for almost a decade. He says the national accounts offer the public valuable insights about economic growth. However, Zucman says, “The big problem is these data do not tell you who is benefiting from economic growth.”

America, of course, already has tons of data on inequality. The problem, Zucman says, is it usually takes a year or two for this data to be updated. “It’s not enough to come in two years after the policy battle, and say, ‘Look, this is what happened to inequality,'” Zucman says. “That’s too late.”

Their new project is an effort to fix this. Cobbling together data from a variety of official sources, Zucman and his colleagues have pioneered a method to compute in a more timely fashion how different income groups — like the working class and the middle class — are doing economically. They hope this prototype will inspire the federal government to follow suit and soon “produce numbers about how income is growing for each social group at the exact time when the Bureau of Economic Analysis releases its official GDP growth numbers.”

Zucman envisions a future where this data could inform and shape policy decisions. When considering policies like sending stimulus checks or providing tax relief, Zucman says, policymakers and voters need to know things like “which groups need more support, or whether the government may be actually overshooting, which might lead to inflation.”…(More)”.
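The core idea, reporting growth per income group rather than one aggregate number, is simple to illustrate. The income figures below are invented for demonstration; the actual prototype cobbles these together from official national-accounts sources:

```python
# Illustrative sketch of "Real-Time Inequality"-style reporting:
# growth per income group instead of a single aggregate GDP number.
# All income figures are invented for illustration.
national_income = {        # average real income per adult, by group and quarter
    "bottom 50%": {"2022Q1": 20_000, "2022Q2": 20_600},
    "middle 40%": {"2022Q1": 75_000, "2022Q2": 76_500},
    "top 10%":    {"2022Q1": 320_000, "2022Q2": 318_000},
}

def growth(series, start="2022Q1", end="2022Q2"):
    """Relative income growth for one group between two quarters."""
    return (series[end] - series[start]) / series[start]

for group, series in national_income.items():
    print(f"{group}: {growth(series):+.1%}")
```

In this toy example aggregate income grows, yet the per-group view shows the top 10% losing ground, exactly the kind of distributional detail a single GDP number hides.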

How crowdfunding is shaping the war in Ukraine


The Economist: “This month Aerorozvidka, a Ukrainian drone unit, celebrated the acquisition of four Chinese-made DJI Phantom 3 drones, provided by a German donor. The group, founded in 2014 after the Russian invasion of eastern Ukraine and annexation of Crimea, is led by civilians. The gift is just one example of crowdfunding in Russia’s latest war against Ukraine. Citizens from both sides are supplying much-needed equipment to the front lines. What is the impact of these donations, and how do the two countries differ in their approach?

Private citizens have chipped in to help in times of war for centuries. A writing tablet found near Hadrian’s Wall in northern England mentions a gift of sandals, socks and underwear for Roman soldiers. During the first world war America’s government asked civilians to knit warm clothing for troops. But besides such small morale-boosting efforts, some schemes to rally civilians have proved strikingly productive. During the second world war Britain introduced a “Spitfire Fund”, encouraging civilian groups to raise the £12,600 (£490,000, or $590,000, in today’s money) needed to build the top-of-the-range fighter. Individual contributors could buy wings, machine guns or even a rivet, for six old pence (two and a half modern ones) apiece. The scheme raised around £13m in total—enough for more than 1,000 aircraft (of a total of 20,000 built)…(More)”.