
Stefaan Verhulst

Paper by Ugo Pagallo: “The paper examines the legal and political impact of the Covid-19 crisis, drawing attention to fundamental questions on authority and political legitimacy, coercion and obligation, power and cooperation. National states and sovereign governments have had, and will continue to have, a crucial role in re-establishing the public health sector and addressing the colossal challenges of economic reconstruction. Scholars have accordingly discussed the set of legal means deployed during this crisis: emergency decrees, lockdowns, travel bans, and, generally speaking, the powers of the state of exception.

The aim of this paper is to stress the limits of such perspectives on the powers of national governments and sovereigns, in order to illustrate what goes beyond such powers. Focus should be on the ontological, epistemic and normative constraints that affect today’s rights and duties of national states. Such constraints correspond to a class of problems that is complex, often transnational, and increasingly data-driven. In addition, we should not overlook the lessons learnt from such fields as environmental law and internet governance, anti-terrorism and transnational business law, up to the regulation of Artificial Intelligence (AI). Such fields show that legal co-regulation and mechanisms of coordination and cooperation complement the traditional powers of national governments even in the times of the mother of all pandemics. The Covid-19 crisis has often been interpreted as if this were the last chapter of an ongoing history about the Leviathan and its bio-powers. It is not. The crisis regards the end of the first chapter of the history of today’s information societies….(More)”.

Sovereigns, Viruses, and the Law: The Normative Challenges of Pandemic in Today’s Information Societies

Jen Kirby at Vox: “The green benches in the United Kingdom’s House of Commons were mostly empty, just Prime Minister Boris Johnson and a few members of Parliament, sitting spread out.

Speaker Lindsay Hoyle, wearing black robes, still commanded the room. But when it was time for a member of Parliament to ask a question, Hoyle glanced upward at a television screen mounted on the wood-paneled walls of the chamber.

On that screen appeared a member of Parliament — maybe with headphones, maybe just a tad too close to the camera, maybe framed by a carefully curated bookshelf — ready to speak.

This is the so-called “Zoom” Parliament, which the UK first convened on April 22, turning the centuries-old democratic process into something that can be done, at least partially, from home.

The coronavirus pandemic has upended normalcy, and that includes the day-to-day functions of government. The social distancing measures and stay-at-home orders required to manage the virus’s spread have forced some governments to abruptly adopt new technologies and ways of working that would have been unimaginable just a few months ago.

From Brazil to Canada to the European Union, legislatures and parliaments have adopted some form of virtual government, whether for hearings and other official business, or even for voting. Several US states have also shifted to doing legislative work remotely, from New Jersey to Kentucky. And with the coronavirus making travel risky, diplomacy has also gone online, with everyone from the United Nations to the leaders of the G-7 meeting via computer screen.

Not every country or legislature has followed suit, most notably the US Congress, although advocates and some lawmakers are pushing to change this now. Even the US Supreme Court, long resistant to change, began hearing oral arguments this week via conference call, and livestreamed the audio with just a few, er, glitches.

This rapid shift to remote governance has largely done what it’s supposed to do: keep parliaments working during a crisis. In the UK, there have been a few technical difficulties, but it’s mostly succeeding.

“I think it does really well,” Chi Onwurah, a Labour MP and shadow minister for digital, science, and technology, who advocated for this move, told me. “Obviously, sometimes the technology doesn’t work or the audio is not very good or the broadband goes down.

“But, by and large,” she said, “we have MPs across the country putting questions to government and making democracy visible again.”

Governments may be Zooming or Google Hanging right now out of necessity, but once they get used to doing things this way (and get the mute button figured out), some elements of remote governance could end up outlasting this crisis. It won’t be a replacement for the real thing, and it probably shouldn’t be. But legislatures could certainly adopt at least some of these tools more permanently to help make democracy more accessible and transparent.

The holding-government-officials-accountable type of transparency, that is. Not the politician-accidentally-appearing-at-a-virtual-city-council-meeting, dusting-their-bookshelves-in-their-undies kind….

On Wednesday, Brazil’s Senate voted remotely again, approving an emergency transfer of resources to states to fight the coronavirus. It underscores a bizarre split in Brazil: Its Congress is using technology to try to govern aggressively during the pandemic. Its president, when asked last week about the country’s rising coronavirus death toll, replied, “So what? I’m sorry. What do you want me to do?”….

Beth Simone Noveck, director of New York University’s Governance Lab, told me that Brazil, along with some other countries, is ahead of the curve on this because it had considered remote voting before.

But legislatures don’t necessarily need fancy apps to make this work. “Other places are doing voting in a very simple way — you’re on a Zoom, they turn on the camera and you put up your hand and you say ‘aye’ or ‘nay,’” Noveck said.

Brazil isn’t the only Latin American country that has quickly adapted to the constraints of the pandemic. On Tuesday, Argentina’s legislature held its first remote session. The Chamber of Deputies was transformed, with panels installed around the chamber to broadcast the faces of the 220 members of Congress, all dialing in from home….(More)”.

How to run the world remotely

Edd Gent at the BBC: “…There are already promising examples of how AI can help us better pool our unique capabilities. San Francisco start-up Unanimous AI has built an online platform that helps guide group decisions. They’ve looked to an unlikely place to guide their AI: the way honeybees make collective decisions.

“We went back to basics and said, ‘How does nature amplify the intelligence of groups?’,” says CEO Louis Rosenberg. “What nature does is form real-time systems, where the groups are interacting all at once together with feedback loops. So, they’re pushing and pulling on each other as a system, and converging on the best possible combination of their knowledge, wisdom, insight and intuition.”

Their Swarm AI platform presents groups with a question and places potential answers in different corners of their screen. Users control a virtual magnet with their mouse and engage in a tug of war to drag an ice hockey puck to the answer they think is correct. The system’s algorithm analyses how each user interacts with the puck – for instance, how much conviction they drag it with or how quickly they waver when they’re in the minority – and uses this information to determine where the puck moves. That creates feedback loops in which each user is influenced by the choice and conviction of the others, allowing the puck to end up at the answer that best reflects the collective wisdom of the group.
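Unanimous AI has not published the full details of its algorithm, but the general mechanism described above can be sketched in a few lines of Python. The agents, conviction values and “wavering” rule below are simplified assumptions for illustration, not the company’s implementation:

```python
import random

# Toy sketch of a swarm-style group decision: each agent pulls a shared
# "puck" toward its preferred answer with some conviction, and agents whose
# preference is currently in the minority gradually waver. This is an
# illustrative simplification, not Unanimous AI's proprietary algorithm.

ANSWERS = ["up >4%", "up <4%", "down <4%", "down >4%"]

def swarm_decide(preferences, conviction, steps=100, waver=0.97):
    """preferences: one answer index per agent; conviction: pull strength per agent."""
    pull = [0.0] * len(ANSWERS)  # accumulated pull toward each answer
    for _ in range(steps):
        for agent, answer in enumerate(preferences):
            pull[answer] += conviction[agent]          # each agent tugs the puck
        leading = max(range(len(ANSWERS)), key=lambda a: pull[a])
        for agent, answer in enumerate(preferences):
            if answer != leading:                      # feedback loop: agents in the
                conviction[agent] *= waver             # minority lose conviction
    return ANSWERS[max(range(len(ANSWERS)), key=lambda a: pull[a])]

# Example: ten participants with mixed views and varying conviction
random.seed(1)
preferences = [random.randrange(len(ANSWERS)) for _ in range(10)]
conviction = [random.uniform(0.3, 1.0) for _ in range(10)]
print(swarm_decide(preferences, conviction))
```

The point of the feedback loop is that the outcome reflects not just a head count but how strongly and persistently each participant pulls, which is the behaviour the paragraph above describes.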

Several academic papers and high-profile clients who use the product back up the effectiveness of the Swarm AI platform. In one recent study, a group of traders were asked to forecast the weekly movement of several key stock market indices by trying to drag the puck to one of four answers — up or down by more than 4%, or up or down by less than 4%. With the tool, they boosted their accuracy by 36%.

Credit Suisse has used the platform to help investors forecast the performance of Asian markets; Disney has used it to predict the success of TV shows; and Unanimous has even partnered with Stanford Medical School to boost doctors’ ability to diagnose pneumonia from chest X-rays by 33%….(More)”

See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern and Identifying Citizens’ Needs by Combining Artificial Intelligence (AI) and Collective Intelligence (CI).

How AI can help us harness our ‘collective intelligence’

Paper by Anja Lambrecht and Catherine E. Tucker: “It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, such concerns have focused on the idea that the algorithm’s code could reflect bias, or that the data feeding the algorithm might lead it to produce uneven outcomes.

In this paper, we highlight another reason why algorithms might appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses this data at differential speeds, it will produce differential outcomes, potentially disadvantaging minority groups.

Specifically, we revisit a classic study which documents that searches on Google for black names were more likely to return ads that highlighted the need for a criminal background check than searches for white names. We show that at least a partial explanation for this finding is that if consumer demand for a piece of information is low, an algorithm accumulates information at a slower rate and thus takes longer to learn about consumer preferences. Since black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad is more likely to persist for searches next to black names even if the algorithm judges the ad to be of low quality. Therefore, the algorithm may be likely to show an ad — including an undesirable ad — in the context of searches for a disadvantaged group for a longer period of time.
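The mechanism the authors describe, slower learning where searches are scarce, can be illustrated with a toy simulation. This is a sketch of the general idea rather than an actual ad-serving system; the click rates, optimistic prior and drop threshold below are assumptions chosen for illustration:

```python
import random

# Toy simulation: the same low-quality ad is paired with a high-volume query
# (a common name) and a low-volume query (a rare name). The system drops the
# ad once its estimated click rate falls below a threshold; with fewer
# searches, the estimate converges more slowly and the ad persists longer.
# Illustrative assumptions only, not Google's ad-serving algorithm.

def days_until_dropped(searches_per_day, true_ctr=0.01, threshold=0.02,
                       prior_clicks=5, prior_impressions=100, max_days=365):
    clicks, impressions = prior_clicks, prior_impressions  # optimistic prior
    for day in range(1, max_days + 1):
        impressions += searches_per_day
        clicks += sum(random.random() < true_ctr for _ in range(searches_per_day))
        if clicks / impressions < threshold:  # the system has learned the ad is poor
            return day
    return max_days

random.seed(0)
print("Common name, 500 searches/day:", days_until_dropped(500), "days to drop")
print("Rare name,    10 searches/day:", days_until_dropped(10), "days to drop")
```

With the high-volume query the optimistic estimate is corrected within days; with the low-volume query the same poor ad survives for weeks, which mirrors the differential persistence the paper documents.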

We replicate this result using the context of religious affiliations and present evidence that ads targeted towards searches for religious groups persist for longer for groups that are less searched for. This suggests that the process of algorithmic learning can lead to differential outcomes across those whose characteristics are more common and those who are rarer in society….(More)”.

Apparent Algorithmic Bias and Algorithmic Learning

Paper by Brigham Daniels, Mark Buntaine and Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing the effects of transparency would prove particularly helpful in developing democracies where transparency norms have not taken hold or have done so only slowly. In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.

Surprisingly, we found transparency had no significant effect in almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency. A few rays of transparency shining light on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to successfully act as a disinfectant….(More)”.

Testing Transparency

Paper by Ira Rubinstein and Bilyana Petkova: “Privacy — understood in terms of freedom from identification, surveillance and profiling — is a precondition of the diversity and tolerance that define the urban experience. But with “smart” technologies eroding the anonymity of city sidewalks and streets, and turning them into surveilled spaces, are cities the first to get caught in the line of fire? Alternatively, are cities the final bastions of privacy? Will the interaction of tech companies and city governments lead cities worldwide to converge around the privatization of public spaces and monetization of data with little to no privacy protections? Or will we see different city identities take root based on local resistance and legal action?

This Article delves into these questions from a federalist and localist angle. In contrast to other fields in which American cities lack the formal authority to govern, we show that cities still enjoy ample powers when it comes to privacy regulation. Fiscal concerns, rather than state or federal preemption, play a role in privacy regulation, and the question becomes one of how cities make use of existing powers. Populous cosmopolitan cities, with a sizeable market share and significant political and cultural clout, are in particularly noteworthy positions to take advantage of agglomeration effects and drive hard deals when interacting with private firms. Nevertheless, there are currently no privacy front runners or privacy laggards; instead, cities engage in “privacy activism” and “data stewardship.”

First, as privacy activists, U.S. cities use public interest litigation to defend their citizens’ personal information in high-profile political participation and consumer protection cases. Examples include legal challenges to the citizenship question in the 2020 Census, and to instances of data breach, including Facebook’s third-party data sharing practices and the Equifax data breach. We link the Census 2020 data wars to sanctuary cities’ battles with the federal administration to demonstrate that political dissent and cities’ social capital — diversity — are intrinsically linked to privacy. Regarding the string of data breach cases, cities expand their experimentation zone by litigating privacy interests against private parties.

Second, cities as data stewards use data to regulate their urban environment. As providers of municipal services, they collect, analyze and act on a broad range of data about local citizens or cut deals with tech companies to enhance transit, housing, utility, telecom, and environmental services by making them smart while requiring firms like Uber and Airbnb to share data with city officials. This has proven contentious at times but in both North American and European cities, open data and more cooperative forms of data sharing between the city, commercial actors, and the public have emerged, spearheaded by a transportation data trust in Seattle. This Article contrasts the Seattle approach with the governance and privacy deficiencies accompanying the privately-led Quayside smart city project in Toronto. Finally, this Article finds the data trust model of data sharing to hold promise, not least since the European rhetoric of exclusively city-owned data presented by Barcelona might prove difficult to realize in practice….(More)”.

Governing Privacy in the Datafied City

Report by the Stiftung Neue Verantwortung: “How easy it is to order a book on an online shop’s website, how intuitive maps or navigation services are to use in everyday life, or how laborious it is to set up a customer account for a car-sharing service: these features and ‘user flows’ have become incredibly important to every customer. Today, the “user friendliness” of a digital platform or service can therefore have a significant influence on how well a product sells or what market share it gains. As a result, not only operators of large online platforms, but also companies in more traditional sectors of the economy are increasing investments in designing websites, apps or software in such a way that they can be used easily, intuitively and as quickly as possible. 

This approach to product design is called user-centered design (UX design) and is based on observing how people interact with digital products, developing prototypes and testing them in experiments. These methods are used not only to improve the user-friendliness of digital interfaces but also to improve certain performance indicators which are relevant to the business – whether it is raising the number of users who register as new customers, increasing the sales volume per user or encouraging as many users as possible to share personal data.
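As a rough illustration of the kind of experiment this involves (a minimal sketch, not an example from the report; the traffic and conversion figures are hypothetical), a simple A/B test compares a business-relevant indicator such as the sign-up rate between an existing design and a redesigned variant:

```python
import math

# Minimal A/B-test sketch: compare the sign-up rate of an existing page (A)
# against a redesigned variant (B) with a two-proportion z-test.
# All figures are hypothetical.

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return p_a, p_b, (p_b - p_a) / se  # z-score of the observed difference

p_a, p_b, z = ab_test(300, 10_000, 360, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # |z| > 1.96: significant at the 5% level
```

Run repeatedly across many design variants, this is how the performance indicators mentioned above are optimised, whatever goal they serve.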

UX design, as well as intensive testing and optimization of user interfaces, has become a standard in today’s digital product development and an important growth driver for many companies. However, this development also has a side effect: since companies and users can have conflicting interests and needs with regard to the design of digital products or services, digital design practices which cause problems for users, or even harm them, are spreading.

Examples of problematic design choices include warnings and countdowns that create time pressure in online shops, the design of settings windows that make it difficult for users to activate data protection settings, or website architectures that make it extremely time-consuming to delete an account. These practices are called “dark patterns”, “Deceptive Design” or “Unethical Design” and are defined as design practices which, intentionally or unintentionally, influence people to their disadvantage and potentially manipulate users in their behaviour or decisions….(More)”.

Dark Patterns: Regulating Digital Design

Amanda Rees at AEON: “…If big data could enable us to turn big history into mathematics rather than narratives, would that make it easier to operationalise our past? Some scientists certainly think so.

In February 2010, Peter Turchin, an ecologist from the University of Connecticut, predicted that 2020 would see a sharp increase in political volatility for Western democracies. Turchin was responding critically to optimistic speculations about scientific progress in the journal Nature: the United States, he said, was coming to the peak of another instability spike (regularly occurring every 50 years or so), while the world economy was reaching the point of a ‘Kondratiev wave’ dip, that is, a steep downturn in a growth-driven supercycle. Along with a number of ‘seemingly disparate’ social pointers, all indications were that serious problems were looming. In the decade since that prediction, the entrenched, often vicious social, economic and political divisions that have increasingly characterised North American and European society have made Turchin’s ‘quantitative historical analysis’ seem remarkably prophetic.

A couple of years earlier, in July 2008, Turchin had made a series of trenchant claims about the nature and future of history. Totting up in excess of ‘200 explanations’ proposed to account for the fall of the Roman empire, he was appalled that historians were unable to agree ‘which explanations are plausible and which should be rejected’. The situation, he maintained, was ‘as risible as if, in physics, phlogiston theory and thermodynamics coexisted on equal terms’. Why, Turchin wanted to know, were the efforts in medicine and environmental science to produce healthy bodies and ecologies not mirrored by interventions to create stable societies? Surely it was time ‘for history to become an analytical, and even a predictive, science’. Knowing that historians were themselves unlikely to adopt such analytical approaches to the past, he proposed a new discipline: ‘theoretical historical social science’ or ‘cliodynamics’ – the science of history.

Like C P Snow 60 years before him, Turchin wanted to challenge the boundary between the sciences and humanities – even as periodic attempts to apply the theories of natural science to human behaviour (sociobiology, for example) or to subject natural sciences to the methodological scrutiny of the social sciences (science wars, anyone?) have frequently resulted in hostile turf wars. So what are the prospects for Turchin’s efforts to create a more desirable future society by developing a science of history?…

In 2010, Cliodynamics, the flagship journal for this new discipline, appeared, with its very first article (by the American sociologist Randall Collins) focusing on modelling victory and defeat in battle in relation to material resources and organisational morale. In a move that paralleled Comte’s earlier argument regarding the successive stages of scientific complexity (from physics, through chemistry and biology, to sociology), Turchin passionately rejected the idea that complexity made human societies unsuitable for quantitative analysis, arguing that it was precisely that complexity which made mathematics essential. Weather predictions were once considered unreliable because of the sheer complexity of managing the necessary data. But improvements in technology (satellites, computers) mean that it’s now possible to describe mathematically, and therefore to model, interactions between the system’s various parts – and therefore to know when it’s wise to carry an umbrella. With equal force, Turchin insisted that the cliodynamic approach was not deterministic. It would not predict the future, but instead lay out for governments and political leaders the likely consequences of competing policy choices.

Crucially, and again on the back of abundant and cheap computing power, cliodynamics benefited from the surge in interest in the digital humanities. Existing archives were being digitised, uploaded and made searchable: every day, it seemed, more data were being presented in a format that encouraged quantification and enabled mathematical analysis – including the Old Bailey’s online database, of which Wolf had fallen foul. At the same time, cliodynamicists were repositioning themselves. Four years after its initial launch, the subtitle of their flagship journal was changed from The Journal of Theoretical and Mathematical History to The Journal of Quantitative History and Cultural Evolution. As Turchin’s editorial stated, this move was intended to position cliodynamics within a broader evolutionary analysis; paraphrasing the Russian-American geneticist Theodosius Dobzhansky, he claimed that ‘nothing in human history makes sense except in the light of cultural evolution’. Given Turchin’s ecological background, this evolutionary approach to history is unsurprising. But given the historical outcomes of making politics biological, it is potentially worrying….

Mathematical, data-driven, quantitative models of human experience that aim at detachment, objectivity and the capacity to develop and test hypotheses need to be balanced by explicitly fictional, qualitative and imaginary efforts to create and project a lived future that enable their audiences to empathically ground themselves in the hopes and fears of what might be to come. Both, after all, are unequivocally doing the same thing: using history and historical experience to anticipate the global future so that we might – should we so wish – avoid civilisation’s collapse. That said, the question of who ‘we’ are does, always, remain open….(More)”.

Are there laws of history?

Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to avoid member states adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm up to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that there already exist bodies that could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for Artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

MEPs chart path for a European approach to Artificial Intelligence

Paper by Bob Doherty et al: “In this article, we offer a contribution to the emerging debate on the role of citizen participation in food system policy making. A key driver is a recognition that solutions to complex challenges in the food system need the active participation of citizens to drive positive change. To achieve this, it is crucial to give citizens agency in the processes of designing policy interventions. This requires authentic and reflective engagement with citizens who are affected by collective decisions. One such participatory approach is citizen assemblies, which have been used to deliberate a number of key issues, including climate change, by the UK Parliament’s House of Commons (House of Commons, 2019). Here, we have undertaken analysis of a citizen food assembly organized in the City of York (United Kingdom). This assembly was a way of hearing about a range of local food initiatives in Yorkshire, whose aim is both to relocalise food supply and production and to tackle food waste.

These innovative community-based business models, known as ‘food hubs’, are increasing the diversity of food supply, particularly in disadvantaged communities. Among other things, the assembly found that the design and sortition of such an assembly are aided by the involvement of local stakeholders in its planning. It also identified the potential for public procurement at the city level to drive more sustainable sourcing of food provision in the region. Furthermore, this citizen assembly galvanized individual agency, with participants proactively seeking opportunities to create prosocial and environmental change in the food system….(More)”.

Citizen participation in food systems policy making: A case study of a citizens’ assembly
