The EU Wants to Build One of the World’s Largest Biometric Databases. What Could Possibly Go Wrong?


Grace Dobush at Fortune: “China and India have built the world’s largest biometric databases, but the European Union is about to join the club.

The Common Identity Repository (CIR) will consolidate biometric data on almost all visitors and migrants to the bloc, as well as some EU citizens—connecting existing criminal, asylum, and migration databases and integrating new ones. It has the potential to affect hundreds of millions of people.

The plan for the database, first proposed in 2016 and approved by the EU Parliament on April 16, was sold as a way to better track and monitor terrorists, criminals, and unauthorized immigrants.

The system will initially target fingerprints and identity data for visitors and immigrants, and it represents the first step toward building a truly EU-wide citizen database. At the same time, though, critics argue its mere existence will increase the potential for hacks, leaks, and law enforcement abuse of the information….

The European Parliament and the European Council have promised to address those concerns through “proper safeguards” to protect personal privacy and to regulate officers’ access to data. In 2016, they passed a law regarding law enforcement’s access to personal data, alongside the General Data Protection Regulation (GDPR).

But total security is a tall order. Germany is currently dealing with multiple instances of police officers allegedly leaking personal information to far-right groups. Meanwhile, a Swedish hacker went to prison for hacking into Denmark’s public records system in 2012 and dumping online the personal data of hundreds of thousands of citizens and migrants….(More)”.


Facebook will open its data up to academics to see how it impacts elections


MIT Technology Review: “More than 60 researchers from 30 institutions will get access to Facebook user data to study its impact on elections and democracy, and how it’s used by advertisers and publishers.

A vast trove: Facebook will let academics see which websites its users linked to from January 2017 to February 2019. Notably, that means they won’t be able to look at the platform’s impact on the US presidential election in 2016, or on the Brexit referendum in the UK in the same year.

Despite this slightly glaring omission, it’s still hard to wrap your head around the scale of the data that will be shared, given that Facebook is used by 1.6 billion people every day. That’s more people than live in all of China, the most populous country on Earth. It will be one of the largest data sets on human behavior online to ever be released.

The process: Facebook didn’t pick the researchers. They were chosen by the Social Science Research Council, a US nonprofit. Facebook has been working on this project for over a year, as it tries to balance research interests against user privacy and confidentiality.

Privacy: In a blog post, Facebook said it will use a number of statistical techniques to make sure the data set can’t be used to identify individuals. Researchers will be able to access it only via a secure portal that uses a VPN and two-factor authentication, and there will be limits on the number of queries they can each run….(More)”.
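Facebook’s post does not name the specific techniques, but the standard tool for this kind of aggregate release is differential privacy: adding calibrated random noise to query answers so that no single user’s presence can be inferred from the results. A minimal sketch of the idea for a simple counting query (the epsilon value and function are illustrative assumptions, not Facebook’s actual implementation):

```python
import numpy as np

def laplace_noisy_count(true_count: int, epsilon: float = 0.5) -> float:
    """Return a differentially private version of a count.

    One person changes a counting query by at most 1 (sensitivity = 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential
    privacy. Smaller epsilon means stronger privacy and noisier answers.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: how many users shared links to a given URL? The released
# figure is close to the truth but masks any single user's contribution.
true_shares = 12_408
print(round(laplace_noisy_count(true_shares), 1))
```

Query limits serve the same goal: each noisy answer consumes a little of the privacy budget, so capping how many queries each researcher can run bounds the total exposure.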

Nagging misconceptions about nudge theory


Cass Sunstein at The Hill: “Nudges are private or public initiatives that steer people in particular directions but that also allow them to go their own way.

A reminder is a nudge; so is a warning. A GPS device nudges; a default rule, automatically enrolling people in some program, is a nudge.

To qualify as a nudge, an initiative must not impose significant economic incentives. A subsidy is not a nudge; a tax is not a nudge; a fine or a jail sentence is not a nudge. To count as such, a nudge must fully preserve freedom of choice.

In 2008, University of Chicago economist Richard Thaler and I co-wrote a book that drew on research in psychology and behavioral economics to help people and institutions, both public and private, improve their decision-making.

In the 10 years since “Nudge” was published, there has been an extraordinary outpouring of new thought and action, with particular reference to public policy.

Behavioral insight teams, or “nudge units” of various sorts, can be found in many nations, including Australia, Canada, Denmark, the United Kingdom, the United States, the Netherlands, Germany, Singapore, Japan, and Qatar.

Those teams are delivering. By making government more efficient, and by improving safety and health, they are helping to save a lot of money and a lot of lives. And in many countries, including the U.S., they don’t raise partisan hackles; both Democrats and Republicans have enthusiastically embraced them.   

Still, there are a lot of mistakes and misconceptions out there, and they are diverting attention and hence stalling progress. Here are the three big ones:

1. Nudges do not respect freedom. …

2. Nudges are based on excessive trust in government...

3. Nudges cannot achieve a whole lot.…(More)”.

Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism


Book by Nathan Gardels and Nicolas Berggruen: “The rise of populism in the West and the rise of China in the East have stirred a rethinking of how democratic systems work—and how they fail. The impact of globalism and digital capitalism is forcing worldwide attention to the starker divide between the “haves” and the “have-nots,” challenging how we think about the social contract.

With fierce clarity and conviction, Renovating Democracy tears down our basic structures and challenges us to conceive of an alternative framework for governance. To truly renovate our global systems, the authors argue for empowering participation without populism by integrating social networks and direct democracy into the system with new mediating institutions that complement representative government. They outline steps to reconfigure the social contract to protect workers instead of jobs, shifting from “redistribution” after wealth is created to “pre-distribution” aimed at enhancing the skills and assets of those less well-off. Lastly, they argue for harnessing globalization through “positive nationalism” at home while advocating global cooperation, specifically a partnership with China, to create a viable rules-based world order.

Thought-provoking and persuasive, Renovating Democracy serves as a point of departure that deepens and expands the discourse for positive change in governance….(More)”.

Black Wave: How Networks and Governance Shaped Japan’s 3/11 Disasters


Book by Daniel Aldrich: “Despite the devastation caused by the magnitude 9.0 earthquake and 60-foot tsunami that struck Japan in 2011, some 96% of those living and working in the most disaster-stricken region of Tōhoku made it through. Smaller earthquakes and tsunamis have killed far more people in nearby China and India. What accounts for the exceptionally high survival rate? And why is it that some towns and cities in the Tōhoku region have built back more quickly than others?

Black Wave illuminates two critical factors that had a direct influence on why survival rates varied so much across the Tōhoku region following the 3/11 disasters and why the rebuilding process has not moved in lockstep across the region. Individuals and communities with stronger networks and better governance, Daniel P. Aldrich shows, had higher survival rates and accelerated recoveries. Less-connected communities with fewer such ties faced harder recovery processes and lower survival rates. Beyond the individual and neighborhood levels of survival and recovery, the rebuilding process has varied greatly: some towns and cities have sought to work independently on rebuilding plans, ignoring recommendations from the national government and moving quickly to institute their own visions, while others have followed the guidelines offered by Tokyo-based bureaucrats for economic development and rebuilding….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
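The paper’s pipeline is more elaborate, but the core mechanism (declare a detection when independent crowdsourced streams spike in the same time window) can be sketched in a few lines; the window sizes, thresholds, and stream names below are illustrative assumptions, not the paper’s parameters:

```python
from collections import deque

def make_spike_detector(window: int = 60, factor: float = 5.0):
    """Flag a spike when the latest per-bin count exceeds `factor`
    times the recent average (a crude baseline model; real systems
    use more careful statistics)."""
    history = deque(maxlen=window)

    def is_spike(count: int) -> bool:
        if len(history) < 10:          # not enough baseline learned yet
            history.append(count)
            return False
        baseline = sum(history) / len(history)
        history.append(count)
        return count > max(1.0, baseline * factor)

    return is_spike

# One detector per crowdsourced stream: LastQuake app launches,
# earthquake-keyword tweets, and visits to the EMSC website.
detectors = {name: make_spike_detector() for name in ("app", "tweets", "web")}

def crowd_detection(counts: dict) -> bool:
    """Declare a crowdsourced detection when at least two of the three
    independent streams spike in the same time bin."""
    spikes = sum(detectors[name](counts[name]) for name in detectors)
    return spikes >= 2

# Thirty quiet bins to learn baselines, then a burst in two streams.
quiet = {"app": 2, "tweets": 5, "web": 1}
for _ in range(30):
    crowd_detection(quiet)
print(crowd_detection({"app": 120, "tweets": 300, "web": 4}))  # True
```

In the study, detections like these were then merged with arrivals from seismic stations, which is what lifted accuracy from 85% to 97%.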

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help assess the level of damage for first responders….(More)”.

Data-driven models of governance across borders


Introduction to Special Issue of FirstMonday, edited by Payal Arora and Hallam Stevens: “This special issue looks closely at contemporary data systems in diverse global contexts and, through this set of papers, highlights the struggles we face as we negotiate efficiency and innovation with universal human rights and social inclusion. The studies presented in these essays are situated in diverse models of policy-making, governance, and/or activism across borders. Attention to big data governance in western contexts has tended to highlight how data increases state and corporate surveillance of citizens, affecting rights to privacy. By moving beyond Euro-American borders — to places such as Africa, India, China, and Singapore — we show here how data regimes are motivated and understood on very different terms….

To establish a kind of baseline, the special issue opens by considering attitudes toward big data in Europe. René König’s essay examines the role of “citizen conferences” in understanding the public’s view of big data in Germany. These “participatory technology assessments” demonstrated that citizens were concerned about the control of big data (should it be under the control of the government or individuals?), about the need for more education about big data technologies, and about the need for more government regulation. Participants expressed, in many ways, traditional liberal democratic views, and their concerns about these technologies centered on individual rights, individual responsibilities, and education. Their proposed solutions too — more education and more government regulation — fit squarely within western liberal democratic traditions.

In contrast to this, Payal Arora’s essay draws us immediately into the vastly different contexts of data governance in India and China. India’s Aadhaar biometric identification system, through tracking its citizens with iris scanning and other measures, promises to root out corruption and provide social services to those most in need. Likewise, China’s emerging “social credit system,” while having immense potential for increasing citizen surveillance, offers ways of increasing social trust and fostering more responsible social behavior online and offline. Although the potential for authoritarian abuses of both systems is high, Arora focuses on how these technologies are locally understood and lived on an everyday basis, in ways that range from empowering to oppressive. From this perspective, the technologies offer modes of “disrupt[ing] systems of inequality and oppression” that should open up new conversations about what democratic participation can and should look like in China and India.

If China and India offer contrasting non-democratic and democratic cases, we turn next to a context that is neither completely western nor completely non-western, neither completely democratic nor completely liberal. Hallam Stevens’ account of government data in Singapore suggests the very different role that data can play in this unique political and social context. Although the island state’s data.gov.sg participates in global discourses of sharing, “open data,” and transparency, much of the data made available by the government is oriented towards the solution of particular economic and social problems. Ultimately, the ways in which data are presented may contribute to entrenching — rather than undermining or transforming — existing forms of governance. The account of data and its meanings that is offered here once again challenges the notion that such data systems can or should be understood in the same ways that similar systems have been understood in the western world.

If systems such as Aadhaar, “social credit,” and data.gov.sg profess to make citizens and governments more visible and legible, Rolien Hoyng examines what may remain invisible even within highly pervasive data-driven systems. In the world of e-waste, data-driven modes of surveillance and logistics are critical for recycling. But many blind spots remain. Hoyng’s account reminds us that despite the often-supposed all-seeing-ness of big data, we should remain attentive to what escapes the data’s gaze. Here, in the midst of datafication, we find “invisibility, uncertainty, and, therewith, uncontrollability.” This points also to the gap between the fantasies of how data-driven systems are supposed to work, and their realization in the world. Such interstices allow individuals — those working with e-waste in Shenzhen or Africa, for example — to find and leverage hidden opportunities. From this perspective, the “blind spots of big data” take on a very different significance.

Big data systems provide opportunities for some, but reduce those for others. Mark Graham and Mohammad Amir Anwar examine what happens when online outsourcing platforms create a “planetary labor market.” Although providing opportunities for many people to make money via their Internet connection, Graham and Anwar’s interviews with workers across sub-Saharan Africa demonstrate how “platform work” alters the balance of power between labor and capital. For many low-wage workers across the globe, the platform- and data-driven planetary labor market means downward pressure on wages, fewer opportunities to collectively organize, less worker agency, and less transparency about the nature of the work itself. Moving beyond bold pronouncements that the “world is flat” and that big data is empowering, Graham and Anwar show how data-driven systems of employment can act to reduce opportunities for those residing in the poorest parts of the world. The affordances of data and platforms create a planetary labor market for global capital but tie workers ever-more tightly to their own localities. Once again, the valences of global data systems look very different from this “bottom-up” perspective.

Philippa Metcalfe and Lina Dencik shift this conversation from the global movement of labor to that of people, as they write about the implications of European datafication systems for the governance of refugees entering this region. This work highlights how intrinsic to datafication systems is the classification, coding, and collating of people to legitimize the extent of their belonging in the society they seek to live in. The authors argue that these datafied regimes of power have substantively increased their role in the regulation of human mobility in the guise of national security. These means of data surveillance can foster new forms of containment and entrapment of entire groups of people, creating further divides between “us” and “them.” Through vast interoperable databases, digital registration processes, biometric data collection, and social media identity verification, refugees have become some of the most monitored groups at a global level while at the same time, their struggles remain the most invisible in popular discourse….(More)”.

Know-how: Big Data, AI and the peculiar dignity of tacit knowledge


Essay by Tim Rogan: “Machine learning – a kind of sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Run a purpose-built algorithm by a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries. 
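The melanoma example is a classic supervised-learning setup. A toy version of the trial-and-error fitting the essay describes, using scikit-learn with synthetic features standing in for real mole images (the feature names and model choice are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: each "image" is reduced to three numeric features,
# e.g. lesion asymmetry, border irregularity, and colour variance.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.5, 1.0, 0.8]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fitting is the iterative trial and error: the optimiser repeatedly
# adjusts weights to reduce misclassification on the labelled examples.
clf = LogisticRegression().fit(X_train, y_train)

# The model answers with a stated degree of certainty, as the essay
# puts it: a probability per class, not just a hard label.
print(clf.predict_proba(X_test[:1]))
print("held-out accuracy:", clf.score(X_test, y_test))
```

Real diagnostic systems replace the toy features with deep convolutional networks trained on raw pixels, but the logic is the same: labelled examples in, calibrated predictions out.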

Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.

The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Which of state, firm or market is the best means of coordinating supply and demand? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. The notion that market coordination of supply and demand encompasses more information than any single intelligence can handle, they suggested, will soon be proved false by 21st-century AI.

How seriously should we take such speculations? Might machine learning bring us full-circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?

A great deal turns on the status of tacit knowledge….(More)”.

Data: The Lever to Promote Innovation in the EU


Blog Post by Juan Murillo Arias: “…But in order for data to truly become a lever that foments innovation in benefit of society as a whole, we must understand and address the following factors:

1. Disconnected, dispersed sources. As users of digital services (transportation, finance, telecommunications, news or entertainment) we leave a different digital footprint for each service that we use. These footprints, which are different facets of the same polyhedron, can even be contradictory on occasion. For this reason, they must be seen as complementary. Analysts should be aware that they must cross data sources from different origins in order to create a reliable picture of our preferences (a minimal sketch of such a cross-source join appears after this list); otherwise we will be basing decisions on partial or biased information. How many times do we receive advertising for items we have already purchased, or tourist destinations where we have already been? And this is just one example of digital marketing. When scoring financial solvency, or monitoring health, the more complete the digital picture is of the person, the more accurate the diagnosis will be.

Furthermore, from the user’s standpoint, proper management of their entire, dispersed digital footprint is a challenge. Perhaps centralized consent would be very beneficial. In the financial world, the PSD2 regulations have already forced banks to open this information to other banks if customers so desire. Fostering competition and facilitating portability is the purpose, but this opening up has also enabled the development of new information-aggregation services that are very useful to financial services users. It would be ideal if this step of breaking down barriers and moving toward a more transparent market took place simultaneously in all sectors in order to avoid possible distortions to competition and, by extension, consumer harm. Therefore, customer consent would open the door to building a more accurate picture of our preferences.

2. The public and private sectors’ asymmetric capacity to gather data. This is related to citizens using public services less frequently than private services in the new digital channels. However, governments could benefit from the information possessed by private companies. These anonymous, aggregated data can help to ensure more dynamic public management. Even personal data could open the door to customized education or healthcare on an individual level. In order to analyze all of this, the European Commission has created a working group including 23 experts. The purpose is to come up with a series of recommendations regarding the best legal, technical and economic framework to encourage this information transfer across sectors.

3. The lack of incentives for companies and citizens to encourage the reuse of their data. The reality today is that most companies use these data sources solely internally. Only a few have decided to explore data sharing through different models (for academic research or for the development of commercial services). As a result of this and other factors, the public sector largely continues using the survey method to gather information instead of reading the digital footprint citizens produce. Multiple studies have demonstrated that this digital footprint would be useful to describe socioeconomic dynamics and monitor the evolution of official statistical indicators. However, these studies have rarely gone on to become pilot projects due to the lack of incentives for a private company to open up to the public sector, or to society in general, in a way that makes this new activity sustainable.

4. Limited commitment to the diversification of services. Another barrier is the fact that information-based product development is somewhat removed from the type of services that the main data generators (telecommunications, banks, commerce, electricity, transportation, etc.) traditionally provide. Therefore, these data-based initiatives are not part of their main business and are more closely tied to companies’ innovation areas, where exploratory proofs of concept are often not consolidated as a new line of business.

5. Bidirectionality. Data should also flow from the public sector to the rest of society. The first regulatory framework was created for this purpose. Although it is still very recent (the PSI Directive on the re-use of public sector information was passed in 2013), it is currently being revised, in an attempt to foster the consolidation of an open data ecosystem that emanates from the public sector as well. On the one hand, it would enable greater transparency; on the other, the development of solutions to improve multiple fields in which public actors are key, such as the environment, transportation and mobility, health, education, justice and the planning and execution of public works. Special emphasis will be placed on high-value data sets, such as statistical or geospatial data — data with tremendous potential to accelerate the emergence of a wide variety of information-based products and services that add value. The Commission will begin working with the Member States to identify these data sets.

In its report Creating Value through Open Data, the European Data Portal estimates that government agencies making their data accessible will inject an extra €65 billion into the EU economy this year.

6. The commitment to analytical training and financial incentives for innovation. These are the key factors that have given rise to digital unicorns, which have emerged more in the U.S. and China than in Europe….(More)”
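As flagged under factor 1 above, “crossing” footprints from different services is, mechanically, a join on an identifier the user has consented to share. A minimal sketch with pandas; the services, column names, and identifier are hypothetical:

```python
import pandas as pd

# Toy footprints from two hypothetical services, keyed on an
# identifier the user has consented to link across providers.
transport = pd.DataFrame({"user_id": [1, 2], "monthly_trips": [42, 7]})
streaming = pd.DataFrame({"user_id": [1, 3], "hours_watched": [18.0, 55.0]})

# An outer join keeps users seen by only one service, so the gaps in
# each single-source picture become explicit (NaN) rather than hidden.
profile = transport.merge(streaming, on="user_id", how="outer")
print(profile)
```

Real-world linkage is rarely this clean, since identifiers seldom align across providers, which is part of why the post argues for centralized consent mechanisms.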

The Technology Trap: Capital, Labor, and Power in the Age of Automation


Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Britain and Europe. Today’s despairing middle class has not resorted to physical force, but their frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes to technology will persist.

The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that in the midst of another technological revolution, the lessons of the past can help us to more effectively face the present….(More)”.