Stefaan Verhulst
Ulises Ali Mejias at Al Jazeera: “The recent coup in Bolivia reminds us that poor countries rich in resources continue to be plagued by the legacy of colonialism. Anything that stands in the way of a foreign corporation’s ability to extract cheap resources must be removed.
Today, apart from minerals and fossil fuels, corporations are after another precious resource: personal data. As with natural resources, data too has become the target of extractive corporate practices.
As sociologist Nick Couldry and I argue in our book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, there is a new form of colonialism emerging in the world: data colonialism. By this, we mean a new resource-grab whereby human life itself has become a direct input into economic production in the form of extracted data.
We acknowledge that this term is controversial, given the extreme physical violence and structures of racism that historical colonialism employed. However, our point is not to say that data colonialism is the same as historical colonialism, but rather to suggest that it shares the same core functions: extraction, exploitation, and dispossession.
Like classical colonialism, data colonialism violently reconfigures human relations to economic production. Things like land, water, and other natural resources were valued by native people in the precolonial era, but not in the same way that colonisers (and later, capitalists) came to value them: as private property. Likewise, we are experiencing a situation in which things that were once primarily outside the economic realm – things like our most intimate social interactions with friends and family, or our medical records – have now been commodified and made part of an economic cycle of data extraction that benefits a few corporations.
So what could countries in the Global South do to avoid the dangers of data colonialism?…(More)”.
Paper by Cecilia Güemes and Jorge Resina: “Trust is a key element in the co‐creation of solutions for public problems. Working together is a gradual learning exercise that helps to shape emotions and attitudes and to create the foundations of trust. However, little is known about how institutions can promote trust. With the intention of going deeper into the subject, this paper focuses on a local experience in Spain: Madrid Escucha, a City Council initiative aimed at stimulating dialogue between officials and citizens around projects to improve city life. We ask three questions: who participates in these spaces, what the interactions look like, and what advances are achieved. Based on qualitative research, empirical findings confirm a biased participation in these kinds of scenarios as well as the presence of prejudices on both sides, an interaction characterised by initial idealism followed by discouragement and a possible readjustment, and a final satisfaction with the process even when results are not successful….(More)”.
Press Release: “The Mobility Data Collaborative (the Collaborative), a multi-sector forum with the goal of creating a framework to improve mobility through data, launches today…
New mobility services, such as shared cars, bikes, and scooters, are emerging and integrating into the urban transportation landscape across the globe. Data generated by these new mobility services offers an exciting opportunity to inform local policies and infrastructure planning. The Collaborative brings together key members from the public and private sectors to develop best practices to harness the potential of this valuable data to support safe, equitable, and livable streets.
The Collaborative will leverage the knowledge of its current and future members to solve the complex challenges facing shared mobility operators and the public agencies who manage access to infrastructure that these new services require. A critical component of this collaboration is providing an open and impartial forum for sharing information and developing best practices.
Membership is open to public agencies, nonprofits, academic institutions and private companies….(More)”.
MIT Technology Review: “The city of New Orleans told its employees to shut down their computers as a precaution this weekend after an attempted cyberattack on Friday.
The news: New Orleans spotted suspicious activity in its networks at around 5 a.m. on Friday, with a spike in the attempted attacks at 8 a.m. It detected phishing attempts and ransomware, Kim LaGrue, the city’s head of IT, later told reporters. Once they were confident the city was under attack, the team shut down its servers and computers. City authorities then filed a declaration of a state of emergency with the Civil District Court, and pulled local, state, and federal authorities into a (still pending) investigation of the incident. The city is still working to recover data from the attack but will be open as usual from this morning, Mayor LaToya Cantrell said on Twitter.
Was it ransomware? The nature of the attack is still something of a mystery. Cantrell confirmed that ransomware had been detected, but the city hasn’t received any demands for ransom money.
The positives: New Orleans was at least fairly well prepared for this attack, thanks to training for this scenario and its ability to operate many of its services without internet access, officials told reporters.
A familiar story: New Orleans is just the latest government to face ransomware attacks, after nearly two dozen cities in Texas were targeted in August, plus Louisiana in November (causing the governor to declare a state of emergency). The phenomenon goes beyond the US, too: in October Johannesburg became the biggest city yet to face a ransomware attack…(More)”.
Daniel Arribas-Bel at Catapult: “When trying to understand something as complex as the city, every bit of data helps create a better picture. Researchers, practitioners and policymakers gather as much information as they can to represent every aspect of their city – from noise levels captured by open-source sensors and the study of social isolation using tweets to where the latest hipster coffee shop has opened – exploration and creativity seem to have no limits.
But what about imagery?
You might well ask, what type of images? How do you analyse them? What’s the point anyway?
Let’s start with the why. Images contain visual cues that encode a host of socio-economic information. Imagine a picture of a street with potholes outside a derelict house next to a burnt-out car. It may be easy to make some fairly sweeping assumptions about the average income of its resident population. Or the image of a street with a trendy barber shop next door to a coffee shop with bare concrete feature walls on one side, and an independent record shop on the other. Again, it may be possible to describe the character of this area.
These are just some of the many kinds of signals embedded in image data. In fact, there is an entire literature in geography and sociology that documents these associations (see, for example, Cityscapes by Daniel Aaron Silver and Terry Nichols Clark for a sociology approach and The Predictive Postcode by Richard Webber and Roger Burrows for a geography perspective). Imagine if we could figure out ways to condense such information into formal descriptors of cities that help us measure aspects that traditional datasets can’t, or to update them more frequently than standard sources currently allow…(More)”.
Damian J. Ruck, Luke J. Matthews, Thanos Kyritsis, Quentin D. Atkinson & R. Alexander Bentley at Nature Human Behaviour: “National democracy is a rare thing in human history and its stability has long been tied to the cultural values of citizens. Yet it has not been established whether changing cultural values made modern democracy possible or whether those values were a response to democratic institutions. Here we combine longitudinal data and cohort information of nearly 500,000 individuals from 109 nations to track the co-evolution of democratic values and institutions over the last century.
We find that cultural values of openness towards diversity predict a shift towards democracy and that nations with low institutional confidence are prone to political instability. In addition, the presence of democratic institutions did not predict any substantive changes in the measured cultural values. These results hold when accounting for other factors, including gross domestic product per capita and non-independence between nations due to shared cultural ancestry. Cultural values lead to, rather than follow, the emergence of democracy. This indicates that current stable democracies will be under threat, should cultural values of openness to diversity and institutional confidence substantially decline… (More).”
Paper by Howard M. Erichson: “In Ashcroft v. Iqbal, building on Bell Atlantic v. Twombly, the Supreme Court instructed district courts to treat a complaint’s conclusions differently from allegations of fact. Facts, but not conclusions, are assumed true for purposes of a motion to dismiss. The Court did little to help judges or lawyers understand the elusive distinction, and, indeed, obscured the distinction with its language. The Court said it was distinguishing “legal conclusions” from factual allegations. The application in Twombly and Iqbal, however, shows that the relevant distinction is not between law and fact, but rather between different types of factual assertions. This essay, written for a symposium on the tenth anniversary of Ashcroft v. Iqbal, explores the definitional problem with the conclusion-fact distinction and examines how district courts have applied the distinction in recent cases….(More)”.
Report by Caitlin Chin at Brookings: “When it comes to gender stereotypes in occupational roles, artificial intelligence (AI) has the potential to either mitigate historical bias or heighten it. In the case of the Word2vec model, AI appears to do both.
Word2vec is a publicly available algorithmic model built on millions of words scraped from online Google News articles, which computer scientists commonly use to analyze word associations. In 2016, Microsoft and Boston University researchers revealed that the model picked up gender stereotypes existing in online news sources—and furthermore, that these biased word associations were overwhelmingly job related. Upon discovering this problem, the researchers neutralized the biased word correlations in their specific algorithm, writing that “in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”
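The neutralizing step the researchers describe can be sketched in a few lines: identify a bias direction in the embedding space (for example, the vector from “she” to “he”) and subtract each word vector’s projection onto that direction, so the word becomes equidistant from the two gendered anchors. A minimal illustration with toy 3-dimensional vectors (hypothetical stand-ins, not real Word2vec embeddings, and a simplification of the full method in the study):

```python
import numpy as np

def neutralize(word_vec, bias_dir):
    """Remove the component of word_vec along bias_dir (projection subtraction)."""
    unit = bias_dir / np.linalg.norm(bias_dir)
    return word_vec - np.dot(word_vec, unit) * unit

# Toy embeddings, chosen for illustration only
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
programmer = np.array([0.4, 0.9, 0.3])  # carries a spurious lean toward "he"

gender_dir = he - she
debiased = neutralize(programmer, gender_dir)

# After neutralizing, "programmer" no longer projects onto the gender direction
print(np.dot(debiased, gender_dir))  # ≈ 0.0
```

In the actual study the bias direction is estimated from many gendered word pairs rather than a single one, and words that are legitimately gendered (such as “mother”) are exempted from neutralization.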
Their study draws attention to a broader issue with artificial intelligence: Because algorithms often emulate the training datasets that they are built upon, biased input datasets could generate flawed outputs. Because many contemporary employers utilize predictive algorithms to scan resumes, direct targeted advertising, or even conduct face- or voice-recognition-based interviews, it is crucial to consider whether popular hiring tools might be susceptible to the same cultural biases that the researchers discovered in Word2vec.
In this paper, I discuss how hiring is a multi-layered and opaque process and how it will become more difficult to assess employer intent as recruitment processes move online. Because intent is a critical aspect of employment discrimination law, I ultimately suggest four ways to include it in the discussion surrounding algorithmic bias….(More)”
This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.
Book by Denise Feldner: “This book offers readers a deeper understanding of cyberspace and of how institutions and industries are reinventing themselves, helping them excel in the transition to a fully digitally connected global economy. Though technology plays a key part in this regard, societal acceptance is the most important underlying condition, as it poses pressing challenges that cut across companies, developers, governments and workers.
The book explores the challenges and opportunities involved, current and potential future concepts, critical reflections and best practices. It addresses connected societies, new opportunities for governments, the role of trust in digital networks, and future education networks. In turn, a number of representative case studies demonstrate the current state of development in practice….(More)”.
Beth Noveck at The GovLab: “…The term, big health data, refers to the ability to gather and analyze vast quantities of online information about health, wellness and lifestyle. It includes not only our medical records but data from apps that track what we buy, how often we exercise and how well we sleep, among many other things. It provides an ocean of information about how healthy or ill we are, and unsurprisingly, doctors, medical researchers, healthcare organizations, insurance companies and governments are keen to get access to it. Should they be allowed to?
It’s a huge question, and AARP is partnering with GovLab to learn what older Americans think about it. AARP is a non-profit organization — the largest in the nation and the world — dedicated to empowering Americans to choose how they live as they age. In 2018 it had more than 38 million members. It is a key voice in policymaking in the United States, because it represents the views of people aged over 50 in this country.
From today, AARP and the GovLab are using the Internet to capture what AARP members feel are the most urgent issues confronting them to try to discover what worries people most: the use of big health data or the failure to use it.
The answers are not simple. On the one hand, increasing the use and sharing of data could enable doctors to make better diagnoses and interventions to prevent disease and make us healthier. It could lead medical researchers to find cures faster, while the creation of health data businesses could strengthen the economy.
On the other hand, the collection, sharing, and use of big health data could reveal sensitive personal information over which we have little control. This data could be sold without our consent, and be used by entities for surveillance or discrimination, rather than to promote well-being….(More)”.