Artificial Intelligence and Digital Repression: Global Challenges to Governance


Paper by Steven Feldstein: “Across the world, artificial intelligence (AI) is showing its potential for abetting repressive regimes and upending the relationship between citizen and state, thereby exacerbating a global resurgence of authoritarianism. AI is a component in a broader ecosystem of digital repression, but it is relevant to several different techniques, including surveillance, censorship, disinformation, and cyber attacks. AI offers three distinct advantages to autocratic leaders: it helps solve principal-agent loyalty problems, it offers substantial cost-efficiencies over traditional means of surveillance, and it is particularly effective against external regime challenges. China is a key proliferator of AI technology to authoritarian and illiberal regimes; such proliferation is an important component of Chinese geopolitical strategy. To counter the spread of high-tech repression abroad, as well as potential abuses at home, policy makers in democratic states must think seriously about how to mitigate harms and to shape better practices….(More)”

We’ll soon know the exact air pollution from every power plant in the world. That’s huge.


David Roberts at Vox: “A nonprofit artificial intelligence firm called WattTime is going to use satellite imagery to precisely track the air pollution (including carbon emissions) coming out of every single power plant in the world, in real time. And it’s going to make the data public.

This is a very big deal. Poor monitoring and gaming of emissions data have made it difficult to enforce pollution restrictions on power plants. This system promises to effectively eliminate poor monitoring and gaming of emissions data….

The plan is to use data from satellites whose data are publicly available (like the European Union’s Copernicus network and the US Landsat network), as well as data from a few private companies that charge for theirs (like DigitalGlobe). The data will come from a variety of sensors operating at different wavelengths, including thermal infrared that can detect heat.

The images will be processed by various algorithms to detect signs of emissions. It has already been demonstrated that a great deal of pollution can be tracked simply through identifying visible smoke. WattTime says it can also use infrared imaging to identify heat from smokestack plumes or cooling-water discharge. Sensors that can directly track NO2 emissions are in development, according to WattTime executive director Gavin McCormick.

Between visible smoke, heat, and NO2, WattTime will be able to derive exact, real-time emissions information, including information on carbon emissions, for every power plant in the world. (McCormick says the data may also be used to derive information about water pollutants like nitrates or mercury.)
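
The article does not spell out WattTime’s algorithms, but the overall shape of the pipeline (per-plant satellite signals for visible smoke, heat, and NO2 feeding a model that outputs an emissions estimate) can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the signal names, units, and coefficients are invented for the example, not taken from WattTime.

```python
# Illustrative sketch only: combine hypothetical per-plant satellite signals
# (visible smoke, thermal plume, NO2 column) into a rough CO2 estimate.
# The signal names, scales, and coefficients are invented for this example.
from dataclasses import dataclass


@dataclass
class PlantObservation:
    plant_id: str
    smoke_index: float    # visible-smoke signal from optical imagery, scaled 0-1
    thermal_index: float  # smokestack / cooling-water heat signal from infrared imagery, scaled 0-1
    no2_column: float     # NO2 column density over the plant, arbitrary units


def estimate_co2_tonnes_per_hour(obs: PlantObservation) -> float:
    """Toy linear proxy model: weight each observed signal and sum.

    A real system would calibrate these coefficients against plants whose
    emissions are directly metered; the values here are placeholders.
    """
    return 50.0 * obs.smoke_index + 120.0 * obs.thermal_index + 0.8 * obs.no2_column


if __name__ == "__main__":
    obs = PlantObservation(
        plant_id="example-coal-plant",
        smoke_index=0.6,
        thermal_index=0.8,
        no2_column=140.0,
    )
    print(f"{obs.plant_id}: ~{estimate_co2_tonnes_per_hour(obs):.0f} tCO2 per hour")
```

In practice, a model of this kind would be calibrated against plants whose emissions are directly metered and reported, and only then applied to plants for which nothing but the satellite signals is available.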

Google.org, Google’s philanthropic wing, is getting the project off the ground (pardon the pun) with a $1.7 million grant; it was selected through the Google AI Impact Challenge….(More)”.

The State of Open Data


Open Access Book edited by Tim Davies, Stephen B. Walker, Mor Rubinstein and Fernando Perini: “It’s been ten years since open data first broke onto the global stage. Over the past decade, thousands of programmes and projects around the world have worked to open data and use it to address a myriad of social and economic challenges. Meanwhile, issues related to data rights and privacy have moved to the centre of public and political discourse. As the open data movement enters a new phase in its evolution, shifting to target real-world problems and embed open data thinking into other existing or emerging communities of practice, big questions still remain. How will open data initiatives respond to new concerns about privacy, inclusion, and artificial intelligence? And what can we learn from the last decade in order to deliver impact where it is most needed? 

The State of Open Data brings together over 60 authors from around the world to address these questions and to take stock of the real progress made to date across sectors and around the world, uncovering the issues that will shape the future of open data in the years to come….(More)”.

Ethics of identity in the time of big data


Paper by James Brusseau in First Monday: “Compartmentalizing our distinct personal identities is increasingly difficult in big data reality. Pictures of the person we were on past vacations resurface in employers’ Google searches; LinkedIn, which exhibits our income level, is increasingly used as a dating web site. Whether on vacation, at work, or seeking romance, our digital selves stream together.

One result is that a perennial ethical question about personal identity has spilled out of philosophy departments and into the real world. Ought we possess one, unified identity that coherently integrates the various aspects of our lives, or incarnate deeply distinct selves suited to different occasions and contexts? At bottom, are we one, or many?

The question is not only palpable today, but also urgent because if a decision is not made by us, the forces of big data and surveillance capitalism will make it for us by compelling unity. Speaking in favor of the big data tendency, Facebook’s Mark Zuckerberg promotes the ethics of an integrated identity, a single version of selfhood maintained across diverse contexts and human relationships.

This essay goes in the other direction by sketching two ethical frameworks arranged to defend our compartmentalized identities, which amounts to promoting the dis-integration of our selves. One framework connects with natural law, the other with language, and both aim to create a sense of selfhood that breaks away from its own past, and from the unifying powers of big data technology….(More)”.

The future of work? Work of the future!


European Commission: “While historical evidence suggests that previous waves of automation have been overwhelmingly positive for the economy and society, AI is in a different league, with the potential to be much more disruptive. It builds upon other digital technologies but also brings about and amplifies major socioeconomic changes of its own.

What do recent technological developments in AI and robotisation mean for the economy, businesses and jobs? Should we be worried or excited? Which jobs will be destroyed and which new ones created? What should education systems, businesses, governments and social partners do to manage the coming transition successfully?

These are some of the questions considered by Michel Servoz, Senior Adviser on Artificial Intelligence, Robotics and the Future of Labour, in this in-depth study requested by European Commission President Jean-Claude Juncker….(More)”.

GIS and the 2020 Census


ESRI: “GIS and the 2020 Census: Modernizing Official Statistics provides statistical organizations with the most recent GIS methodologies and technological tools to support census workers’ needs at all the stages of a census. Learn how to plan and carry out census work with GIS using new technologies for field data collection and operations management. International case studies illustrate concepts in practice….(More)”.

Missing Numbers


Introduction by Anna Powell-Smith of a new “blog on the data the government should collect, but doesn’t”: “…Over time, I started to notice a pattern. Across lots of different policy areas, it was impossible for governments to make good decisions because of a basic lack of data. There was always critical data that the state either didn’t collect at all, or collected so badly that it made change impossible.

Eventually, I decided that the power to not collect data is one of the most important and little-understood sources of power that governments have. This is why I’m writing Missing Numbers: to encourage others to ask, “Is this lack of data a deliberate ploy to get away with something?”

By refusing to amass knowledge in the first place, decision-makers exert power over the rest of us. It’s time that this power was revealed, so we can have better conversations about what we need to know to run this country successfully.

A typical example

The government records and publishes data on how often each NHS hospital receives formal complaints. This is very helpful, because it means patients and the people who care for them can spot hospitals whose performance is worrying.

But the government simply doesn’t record data, even internally, on how often formal complaints are made about each Jobcentre. (That FOI response is from 2015, but I’ve confirmed it’s still true in 2019.) So it is impossible for it to know if some Jobcentres are being seriously mismanaged….(More)”.

Habeas Data: Privacy vs. The Rise of Surveillance Tech


Book by Cyrus Farivar: “Habeas Data shows how the explosive growth of surveillance technology has outpaced our understanding of the ethics, mores, and laws of privacy.

Award-winning tech reporter Cyrus Farivar makes the case by taking ten historic court decisions that defined our privacy rights and matching them against the capabilities of modern technology. It’s an approach that combines the charge of a legal thriller with the shock of the daily headlines.

Chapters include: the 1960s prosecution of a bookie that established the “reasonable expectation of privacy” in nonpublic places beyond your home (but how does that ruling apply now, when police can chart your every move and hear your every conversation within your own home — without even having to enter it?); the 1970s case where the police monitored a lewd caller — the decision of which is now the linchpin of the NSA’s controversial metadata tracking program revealed by Edward Snowden; and a 2010 low-level burglary trial that revealed police had tracked a defendant’s past 12,898 locations before arrest — an invasion of privacy grossly out of proportion to the alleged crime, which showed how authorities are all too willing to take advantage of the ludicrous gap between the slow pace of legal reform and the rapid transformation of technology.

A dazzling exposé that journeys from Oakland, California to the halls of the Supreme Court to the back of a squad car, Habeas Data combines deft reportage, deep research, and original interviews to offer an X-ray diagnostic of our current surveillance state….(More)”.

The EU Wants to Build One of the World’s Largest Biometric Databases. What Could Possibly Go Wrong?


Grace Dobush at Fortune: “China and India have built the world’s largest biometric databases, but the European Union is about to join the club.

The Common Identity Repository (CIR) will consolidate biometric data on almost all visitors and migrants to the bloc, as well as some EU citizens—connecting existing criminal, asylum, and migration databases and integrating new ones. It has the potential to affect hundreds of millions of people.

The plan for the database, first proposed in 2016 and approved by the EU Parliament on April 16, was sold as a way to better track and monitor terrorists, criminals, and unauthorized immigrants.

The system will initially target the fingerprints and identity data of visitors and immigrants, and it represents the first step towards building a truly EU-wide citizen database. At the same time, though, critics argue its mere existence will increase the potential for hacks, leaks, and law enforcement abuse of the information….

The European Parliament and the European Council have promised to address those concerns through “proper safeguards” to protect personal privacy and to regulate officers’ access to data. In 2016, they passed a law regarding law enforcement’s access to personal data, alongside the General Data Protection Regulation (GDPR).

But total security is a tall order. Germany is currently dealing with multiple instances of police officers allegedly leaking personal information to far-right groups. Meanwhile, a Swedish hacker went to prison for hacking into Denmark’s public records system in 2012 and dumping online the personal data of hundreds of thousands of citizens and migrants….(More)”.


Digital inequalities in the age of artificial intelligence and big data


Paper by Christoph Lutz: “In this literature review, I summarize key concepts and findings from the rich academic literature on digital inequalities. I propose that digital inequalities research should look more into labor- and big data-related questions such as inequalities in online labor markets and the negative effects of algorithmic decision-making for vulnerable population groups.

The article engages with the sociological literature on digital inequalities and explains the general approach to digital inequalities, based on the distinction of first-, second-, and third-level digital divides. First, inequalities in access to digital technologies are discussed. This discussion is extended to emerging technologies, including the Internet-of-things and artificial intelligence-powered systems such as smart speakers. Second, inequalities in digital skills and technology use are reviewed and connected to the discourse on new forms of work such as the sharing economy or gig economy. Third and finally, the discourse on the outcomes, in the form of benefits or harms, from digital technology use is taken up.

Here, I propose to integrate the digital inequalities literature more strongly with critical algorithm studies and recent discussions about datafication, digital footprints, and information privacy….(More)”.