

Paper by Brigham Daniels, Mark Buntaine and Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing the effects of transparency would prove particularly helpful in developing democracies, where transparency norms have not taken hold or have done so only slowly. In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.
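
The excerpt does not describe the estimation details, but the heart of an analysis like this is a simple comparison of outcomes across randomly assigned groups. Below is a minimal, purely illustrative sketch in Python; the outcome measure, group sizes and effect size are hypothetical, not taken from the paper.

```python
# Hypothetical RCT analysis: did a transparency intervention change a
# participation outcome (e.g., citizen contacts with officials per ward)?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

control = rng.poisson(lam=2.0, size=200)  # wards without the intervention
treated = rng.poisson(lam=2.1, size=200)  # wards with the intervention

effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

print(f"estimated effect: {effect:+.2f} contacts per ward")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value here would parallel the paper's headline null result.
```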

Surprisingly, we found transparency had no significant effect on almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency. A few rays of transparency shining on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to act as the disinfectant….(More)”.

Testing Transparency

Paper by Ira Rubinstein and Bilyana Petkova: “Privacy — understood in terms of freedom from identification, surveillance and profiling — is a precondition of the diversity and tolerance that define the urban experience. But with “smart” technologies eroding the anonymity of city sidewalks and streets, and turning them into surveilled spaces, are cities the first to get caught in the line of fire? Alternatively, are cities the final bastions of privacy? Will the interaction of tech companies and city governments lead cities worldwide to converge around the privatization of public spaces and monetization of data with little to no privacy protections? Or will we see different city identities take root based on local resistance and legal action?

This Article delves into these questions from a federalist and localist angle. In contrast to other fields in which American cities lack the formal authority to govern, we show that cities still enjoy ample powers when it comes to privacy regulation. Fiscal concerns, rather than state or federal preemption, play a role in privacy regulation, and the question becomes one of how cities make use of existing powers. Populous cosmopolitan cities, with a sizeable market share and significant political and cultural clout, are in particularly noteworthy positions to take advantage of agglomeration effects and drive hard deals when interacting with private firms. Nevertheless, there are currently no privacy front runners or privacy laggards; instead, cities engage in “privacy activism” and “data stewardship.”

First, as privacy activists, U.S. cities use public interest litigation to defend their citizens’ personal information in high profile political participation and consumer protection cases. Examples include legal challenges to the citizenship question in the 2020 Census, and to instances of data breach including Facebook third-party data sharing practices and the Equifax data breach. We link the Census 2020 data wars to sanctuary cities’ battles with the federal administration to demonstrate that political dissent and cities’ social capital — diversity — are intrinsically linked to privacy. Regarding the string of data breach cases, cities expand their experimentation zone by litigating privacy interests against private parties.

Second, cities as data stewards use data to regulate their urban environment. As providers of municipal services, they collect, analyze and act on a broad range of data about local citizens or cut deals with tech companies to enhance transit, housing, utility, telecom, and environmental services by making them smart while requiring firms like Uber and Airbnb to share data with city officials. This has proven contentious at times but in both North American and European cities, open data and more cooperative forms of data sharing between the city, commercial actors, and the public have emerged, spearheaded by a transportation data trust in Seattle. This Article contrasts the Seattle approach with the governance and privacy deficiencies accompanying the privately-led Quayside smart city project in Toronto. Finally, this Article finds the data trust model of data sharing to hold promise, not least since the European rhetoric of exclusively city-owned data presented by Barcelona might prove difficult to realize in practice….(More)”.

Governing Privacy in the Datafied City

Report by the Stiftung Neue Verantwortung: “How easy it is to order a book on an online shop’s website, how intuitive maps or navigation services are to use in everyday life, or how laborious it is to set up a customer account for a car-sharing service: features and ‘user flows’ like these have become incredibly important to every customer. Today, the “user friendliness” of a digital platform or service can have a significant influence on how well a product sells or what market share it gains. As a result, not only operators of large online platforms but also companies in more traditional sectors of the economy are investing more in designing websites, apps and software so that they can be used easily, intuitively and in as time-saving a way as possible.

This approach to product design is called user-centered design (UX design) and is based on observing how people interact with digital products, developing prototypes and testing them in experiments. These methods are used not only to improve the user-friendliness of digital interfaces but also to improve performance indicators that matter to the business – whether that means raising the number of users who register as new customers, increasing the sales volume per user or encouraging as many users as possible to share personal data.
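
The report describes this experimental loop in prose; in practice it usually takes the form of A/B testing. The sketch below shows the standard two-proportion comparison behind such a test, with invented traffic and conversion numbers (the metric names are assumptions, not taken from the report).

```python
# A/B test sketch: does interface variant B raise the signup rate over A?
from statsmodels.stats.proportion import proportions_ztest

signups = [412, 476]       # observed conversions for variants A and B
visitors = [10000, 10000]  # users randomly shown each variant

z_stat, p_value = proportions_ztest(count=signups, nobs=visitors)
rate_a, rate_b = (s / n for s, n in zip(signups, visitors))

print(f"variant A: {rate_a:.2%}, variant B: {rate_b:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# If p is small, the team ships variant B. The same machinery can optimize
# any metric, including ones that nudge users into sharing more data.
```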

UX design, together with intensive testing and optimization of user interfaces, has become standard in today’s digital product development and an important growth driver for many companies. However, this development also has a side effect: because companies and users can have conflicting interests and needs with regard to the design of digital products or services, design practices that cause problems or even harm for users are spreading.

Examples of problematic design choices include warnings and countdowns that create time pressure in online shops, settings windows designed so that it is difficult for users to activate data protection settings, or website architectures that make it extremely time-consuming to delete an account. Such practices are called “dark patterns”, “Deceptive Design” or “Unethical Design” and are defined as design practices which, intentionally or unintentionally, influence people to their disadvantage and potentially manipulate users’ behaviour or decisions….(More)”.

Dark Patterns: Regulating Digital Design

Amanda Rees at AEON: “…If big data could enable us to turn big history into mathematics rather than narratives, would that make it easier to operationalise our past? Some scientists certainly think so.

In February 2010, Peter Turchin, an ecologist from the University of Connecticut, predicted that 2020 would see a sharp increase in political volatility for Western democracies. Turchin was responding critically to optimistic speculations about scientific progress in the journal Nature: the United States, he said, was coming to the peak of another instability spike (regularly occurring every 50 years or so), while the world economy was reaching the dip of a ‘Kondratiev wave’, that is, a steep downturn in a growth-driven supercycle. Along with a number of ‘seemingly disparate’ social pointers, all indications were that serious problems were looming. In the decade since that prediction, the entrenched, often vicious social, economic and political divisions that have increasingly characterised North American and European society have made Turchin’s ‘quantitative historical analysis’ seem remarkably prophetic.

A couple of years earlier, in July 2008, Turchin had made a series of trenchant claims about the nature and future of history. Totting up in excess of ‘200 explanations’ proposed to account for the fall of the Roman empire, he was appalled that historians were unable to agree ‘which explanations are plausible and which should be rejected’. The situation, he maintained, was ‘as risible as if, in physics, phlogiston theory and thermodynamics coexisted on equal terms’. Why, Turchin wanted to know, were the efforts in medicine and environmental science to produce healthy bodies and ecologies not mirrored by interventions to create stable societies? Surely it was time ‘for history to become an analytical, and even a predictive, science’. Knowing that historians were themselves unlikely to adopt such analytical approaches to the past, he proposed a new discipline: ‘theoretical historical social science’ or ‘cliodynamics’ – the science of history.

Like C P Snow 60 years before him, Turchin wanted to challenge the boundary between the sciences and humanities – even as periodic attempts to apply the theories of natural science to human behaviour (sociobiology, for example) or to subject natural sciences to the methodological scrutiny of the social sciences (science wars, anyone?) have frequently resulted in hostile turf wars. So what are the prospects for Turchin’s efforts to create a more desirable future society by developing a science of history?…

In 2010, Cliodynamics, the flagship journal for this new discipline, appeared, with its very first article (by the American sociologist Randall Collins) focusing on modelling victory and defeat in battle in relation to material resources and organisational morale. In a move that paralleled Comte’s earlier argument regarding the successive stages of scientific complexity (from physics, through chemistry and biology, to sociology), Turchin passionately rejected the idea that complexity made human societies unsuitable for quantitative analysis, arguing that it was precisely that complexity which made mathematics essential. Weather predictions were once considered unreliable because of the sheer complexity of managing the necessary data. But improvements in technology (satellites, computers) mean that it’s now possible to describe mathematically, and therefore to model, interactions between the system’s various parts – and therefore to know when it’s wise to carry an umbrella. With equal force, Turchin insisted that the cliodynamic approach was not deterministic. It would not predict the future, but instead lay out for governments and political leaders the likely consequences of competing policy choices.
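
Turchin’s published models are far more elaborate, but a toy simulation conveys the flavour of this style of analysis: a pair of coupled equations in which population growth first builds and then erodes state capacity. Every parameter and functional form below is invented for illustration; this is not a model from the cliodynamics literature.

```python
# Toy "cliodynamic" simulation: population n and state resources s interact.

def simulate(years=300, dt=0.1):
    n, s = 0.2, 1.0  # population (relative to base capacity), state resources
    history = []
    for step in range(int(years / dt)):
        carrying = 1.0 + 0.5 * max(s, 0.0)   # solvent states support more people
        dn = 0.02 * n * (1 - n / carrying)   # logistic population growth
        ds = 0.10 * n * (1 - n) - 0.05 * n   # fiscal surplus shrinks as n saturates
        n, s = n + dn * dt, s + ds * dt
        history.append((step * dt, n, s))
    return history

for t, n, s in simulate()[::300]:  # sample every 30 model-years
    print(f"year {t:5.0f}: population {n:.2f}, state resources {s:+.2f}")
```

Run forward, even this toy system produces growth, overshoot and fiscal decline, the sort of pattern cliodynamicists hunt for in real historical data.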

Crucially, and again on the back of abundant and cheap computing power, cliodynamics benefited from the surge in interest in the digital humanities. Existing archives were being digitised, uploaded and made searchable: every day, it seemed, more data were being presented in a format that encouraged quantification and enabled mathematical analysis – including the Old Bailey’s online database, of which Wolf had fallen foul. At the same time, cliodynamicists were repositioning themselves. Four years after its initial launch, the flagship journal’s subtitle was changed from The Journal of Theoretical and Mathematical History to The Journal of Quantitative History and Cultural Evolution. As Turchin’s editorial stated, this move was intended to position cliodynamics within a broader evolutionary analysis; paraphrasing the Russian-American geneticist Theodosius Dobzhansky, he claimed that ‘nothing in human history makes sense except in the light of cultural evolution’. Given Turchin’s ecological background, this evolutionary approach to history is unsurprising. But given the historical outcomes of making politics biological, it is potentially worrying….

Mathematical, data-driven, quantitative models of human experience that aim at detachment, objectivity and the capacity to develop and test hypotheses need to be balanced by explicitly fictional, qualitative and imaginary efforts to create and project a lived future that enable their audiences to empathically ground themselves in the hopes and fears of what might be to come. Both, after all, are unequivocally doing the same thing: using history and historical experience to anticipate the global future so that we might – should we so wish – avoid civilisation’s collapse. That said, the question of who ‘we’ are does, always, remain open….(More)”.

Are there laws of history?

Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to keep member states from adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm up to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that there already exist bodies that could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

MEPs chart path for a European approach to Artificial Intelligence

Paper by Bob Doherty et al: “In this article, we offer a contribution to the emerging debate on the role of citizen participation in food system policy making. A key driver is the recognition that solutions to complex challenges in the food system need the active participation of citizens to drive positive change. To achieve this, it is crucial to give citizens agency in the process of designing policy interventions. This requires authentic and reflective engagement with citizens who are affected by collective decisions. One such participatory approach is the citizens’ assembly, which has been used to deliberate a number of key issues, including climate change by the UK Parliament’s House of Commons (House of Commons, 2019). Here, we have undertaken an analysis of a citizen food assembly organized in the City of York (United Kingdom). This assembly was a way of hearing about a range of local food initiatives in Yorkshire whose aim is both to relocalise food supply and production and to tackle food waste.

These innovative community-based business models, known as ‘food hubs’, are increasing the diversity of food supply, particularly in disadvantaged communities. Among other things, the assembly found that the design and sortition of an assembly are aided by involving local stakeholders in its planning. It also identified the potential for public procurement at the city level to drive more sustainable sourcing of food provision in the region. Furthermore, this citizen assembly galvanized individual agency, with participants proactively seeking opportunities to create prosocial and environmental change in the food system….(More)”.

Citizen participation in food systems policy making: A case study of a citizens’ assembly

Essay by Geoff Mulgan: “Crises – whether wars or pandemics – can sometimes, though not always, fuel social imagination.  New arrangements have to be created at breakneck speed and old norms have to be discarded.  The deeper the crisis the more likely it is that people ask not for a return to normal but for a jump to something different and better.

So it is now.  Across the world countries are beginning to think about how life after COVID-19 might be different: could we use the crisis to solve the problems of carbon, low status for care-workers, or welfare states ill-suited to new forms of precariousness?  As this debate gathers speed, it’s opening up questions about the role of the social sciences. They’re playing a vital role in helping countries to manage the crisis, and to plan for recovery.  But how much are they there to understand the past and present – and how much should they help us to shape the future?

A century ago the answers were perhaps more obvious than today.  HG Wells early in the last century described sociology as ‘the description of the Ideal Society and its relation to existing societies’.  The founders of UCL in the early 19th century, and of LSE at the end of it, saw their institutions as vehicles to change the world, not just to interpret it.  It was taken for granted that social science should help map out possible futures – new rights, new forms of social policy, new ways of running economies.

Unfortunately, these traditions have largely atrophied.  Within academia you are far more likely to make a successful career analysing past patterns, or critiquing the present, than offering designs for the future.  That is partly the result of very healthy trends – in particular, more attention being paid to evidence and data.  But it’s left a gap since, by definition, there isn’t any hard evidence about a future that hasn’t yet happened.  There are a few small pockets of more speculative, future-oriented work in universities.  But they’re seen as quite marginal, and a fair proportion of this work is inward looking – feeding into academic journals and very small audiences – rather than feeding into political programmes and public imagination as happened in the past.  Meanwhile one of the less attractive legacies of several decades of post-structuralism and post-modernism is that many academics believe they have much more of a duty to critique than to propose or create.

Outside the academy the traditions of social imagination have also atrophied.  Political parties have largely closed down the research departments that once helped them think.  Thinktanks have become ever more locked into news cycles rather than long range thinking.

In the late 20th century the progressive movements of the left lost confidence in a forward march of history, and the green movements that have partly replaced them have proven more effective at persuading people of the likelihood of future ecological disaster than at promoting positive alternatives (though green visions of future arrangements for food and circular economies are a partial exception to the picture I’m describing here).  As a result much of the role of future imagination has been left to fiction.

One symptom is that many fewer people today can articulate a plausible and desirable better society than was the case 50 or 100 years ago.  Majorities in countries like the UK now expect their children to be worse off than they are….(More)”.

Social sciences and social imagination

Kim Stanley Robinson at the New Yorker: “…We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view. And now, when those of us who are sheltering in place venture out and see everyone in masks, sharing looks with strangers is a different thing. It’s eye to eye, this knowledge that, although we are practicing social distancing as we need to, we want to be social—we not only want to be social, we’ve got to be social, if we are to survive. It’s a new feeling, this alienation and solidarity at once. It’s the reality of the social; it’s seeing the tangible existence of a society of strangers, all of whom depend on one another to survive. It’s as if the reality of citizenship has smacked us in the face.

As for government: it’s government that listens to science and responds by taking action to save us. Stop to ponder what is now obstructing the performance of that government. Who opposes it?…

There will be enormous pressure to forget this spring and go back to the old ways of experiencing life. And yet forgetting something this big never works. We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?

A structure of feeling is not a free-floating thing. It’s tightly coupled with its corresponding political economy. How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.

It will be hard to make these values durable. Valuing the right things and wanting to keep on valuing them—maybe that’s also part of our new structure of feeling. As is knowing how much work there is to be done. But the spring of 2020 is suggestive of how much, and how quickly, we can change. It’s like a bell ringing to start a race. Off we go—into a new time….(More)”.

The Coronavirus Is Rewriting Our Imaginations

Will Douglas Heaven at MIT Technology Review: “In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.

When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in a simple graph.

It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the items peaked first in Italy, followed by Spain, France, Canada, and the US. The UK and Germany lagged slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been seen across retail supply chains.

But they have also affected artificial intelligence, causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should. 
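
One generic way such teams catch this, sketched below rather than drawn from any company named in the article, is to monitor the statistical distance between the data a model was trained on and the data it now receives, flagging the model when the two diverge. A two-sample Kolmogorov–Smirnov test is one simple instrument for that:

```python
# Drift detection sketch: compare training-era inputs with live inputs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Hypothetical daily order volumes for one product category.
training_window = rng.normal(loc=100, scale=15, size=500)  # pre-pandemic
live_window = rng.normal(loc=160, scale=40, size=500)      # pandemic shock

stat, p_value = ks_2samp(training_window, live_window)
print(f"KS statistic = {stat:.2f}, p = {p_value:.1e}")

if p_value < 0.01:
    print("Input distribution has shifted: review or retrain the model.")
```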

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline….(More)”.

Our weird behavior during the pandemic is messing with AI models

Nigel Cory at ITIF: “If nations could regulate viruses the way many regulate data, there would be no global pandemics. But the sad reality is that, in the midst of the worst global pandemic in living memory, many nations make it unnecessarily complicated and costly, if not illegal, for health data to cross their borders. In so doing, they are hindering critically needed medical progress.

In the COVID-19 crisis, data analytics powered by artificial intelligence (AI) is critical to identifying the exact nature of the pandemic and developing effective treatments. The technology can produce powerful insights and innovations, but only if researchers can aggregate and analyze data from populations around the globe. And that requires data to move across borders as part of international research efforts by private firms, universities, and other research institutions. Yet, some countries, most notably China, are stopping health and genomic data at their borders.

Indeed, despite the significant benefits to companies, citizens, and economies that arise from the ability to easily share data across borders, dozens of countries—across every stage of development—have erected barriers to cross-border data flows. These data-residency requirements strictly confine data within a country’s borders, a concept known as “data localization,” and many countries have especially strict requirements for health data.

China is a noteworthy offender, having created a new digital iron curtain that requires data localization for a range of data types, including health data, as part of its so-called “cyber sovereignty” strategy. A May 2019 State Council regulation requires genomic data to be stored and processed locally by Chinese firms and prohibits foreign organizations from doing so. This is in service of China’s mercantilist strategy to advance its domestic life-sciences industry. While there has been collaboration between U.S. and Chinese medical researchers on COVID-19, including on clinical trials for potential treatments, these restrictions mean that it won’t involve the transfer, aggregation, and analysis of Chinese personal data, which otherwise might help find a treatment or vaccine. If China truly wanted to make amends for blocking critical information during the early stages of the outbreak in Wuhan, then it should abolish this restriction and allow genomic and other health data to cross its borders.

But China is not alone in limiting data flows. Russia requires all personal data, health-related or not, to be stored locally. India’s draft data protection bill permits the government to classify any sensitive personal data as critical personal data and mandate that it be stored and processed only within the country. This would be consistent with recent debates and decisions to require localization for payments data and other types of data. And despite its leading role in pushing for the free flow of data as part of new digital trade agreements, Australia requires genomic and other data attached to personal electronic health records to be stored and processed only within its borders.

Countries also enact de facto barriers to health and genomic data transfers by making it harder and more expensive, if not impractical, for firms to transfer it overseas than to store it locally. For example, South Korea and Turkey require firms to get explicit consent from people to transfer sensitive data like genomic data overseas. Doing this for hundreds or thousands of people adds considerable costs and complexity.

And the European Union’s General Data Protection Regulation encourages data localization as firms feel pressured to store and process personal data within the EU given the restrictions it places on data transfers to many countries. This is in addition to the renewed push for local data storage and processing under the EU’s new data strategy.

Countries rationalize these steps on the basis that health data, particularly genomic data, is sensitive. But requiring health data to be stored locally does little to increase privacy or data security. The confidentiality of data does not depend on which country the information is stored in, only on the measures used to store it securely, such as encryption, and the policies and procedures the firms follow in storing or analyzing the data. For example, if a nation has limits on the use of genomic data, then domestic organizations using that data face the same restrictions, whether they store the data in the country or outside of it. And if they share the data with other organizations, they must require those organizations, regardless of where they are located, to abide by the home government’s rules.
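
In practical terms, confidentiality travels with the data rather than with the data center. A record encrypted client-side before transfer is equally unreadable in any jurisdiction, as in this minimal sketch using the widely used Python cryptography package (the genomic record shown is invented):

```python
# Client-side encryption sketch: only key holders can read the record,
# regardless of which country's servers store the ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # retained by the data controller, never shipped
cipher = Fernet(key)

record = b'{"sample_id": "A-1024", "variant": "rs429358", "status": "case"}'
token = cipher.encrypt(record)  # this is what crosses the border

print(token[:40])             # the hosting provider sees only ciphertext
print(cipher.decrypt(token))  # approved researchers with the key recover it
```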

As such, policymakers need to stop treating health data differently when it comes to cross-border movement, and instead build technical, legal, and ethical protections into both domestic and international data-governance mechanisms, which together allow the responsible sharing and transfer of health and genomic data.

This is clearly possible—and needed. In February 2020, leading health researchers called for an international code of conduct for genomic data following the end of their first-of-its-kind international data-driven research project. The project used a purpose-built cloud service that stored 800 terabytes of genomic data on 2,658 cancer genomes across 13 data centers on three continents. The collaboration and use of cloud computing were transformational in enabling large-scale genomic analysis….(More)”.

Viruses Cross Borders. To Fight Them, Countries Must Let Medical Data Flow, Too
