Computational Social Science for the Public Good: Towards a Taxonomy of Governance and Policy Challenges


Chapter by Stefaan G. Verhulst: “Computational Social Science (CSS) has grown exponentially as datafication and computation have expanded. This growth, however, has yet to translate into effective action for the public good in the form of policy insights and interventions. This chapter presents 20 limiting factors in how data is accessed and analysed in the field of CSS. The challenges are grouped into six categories based on their area of direct impact: Data Ecosystem, Data Governance, Research Design, Computational Structures and Processes, the Scientific Ecosystem, and Societal Impact. Through this chapter, we seek to construct a taxonomy of CSS governance and policy challenges. By first identifying the problems, we can then move to address them effectively through research, funding, and governance agendas that drive stronger outcomes…(More)”. Full Book: Handbook of Computational Social Science for Policy

Predicting Socio-Economic Well-being Using Mobile Apps Data: A Case Study of France


Paper by Rahul Goel, Angelo Furno, and Rajesh Sharma: “Socio-economic indicators provide context for assessing a country’s overall condition. These indicators contain information about education, gender, poverty, employment, and other factors. Therefore, reliable and accurate information is critical for social research and government policymaking. Most data sources available today, such as censuses, have sparse population coverage or are updated infrequently. Nonetheless, alternative data sources, such as call detail records (CDR) and mobile app usage, can serve as cost-effective and up-to-date sources for identifying socio-economic indicators.
This work investigates mobile app data to predict socio-economic features. We present a large-scale study using data that captures the traffic of thousands of mobile applications by approximately 30 million users, distributed over 550,000 km² and served by over 25,000 base stations. The dataset covers the whole of France and spans more than 2.5 months, from 16th March 2019 to 6th June 2019. Using the app usage patterns, our best model can estimate socio-economic indicators (attaining an R-squared score of up to 0.66). Furthermore, using model explainability, we discover that mobile app usage patterns have the potential to reveal socio-economic disparities at the IRIS level (France’s fine-grained statistical areas). The insights of this study provide several avenues for future interventions, including temporal network analysis of users and exploration of alternative data sources…(More)”.
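
As an illustration of the kind of modelling pipeline such a study implies (regressing an area-level socio-economic indicator on per-area app-usage features and reporting an R-squared score), here is a minimal, hypothetical sketch in Python. The synthetic data, the per-area app-traffic shares, and the choice of a gradient-boosting regressor are assumptions made for illustration, not the authors’ actual method or data.

```python
# Hypothetical sketch only: the feature construction, regressor, and
# synthetic data are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_areas, n_app_categories = 1000, 50  # e.g. one row per small statistical area (IRIS)

# Per-area shares of traffic across app categories (rows sum to 1).
X = rng.dirichlet(np.ones(n_app_categories), size=n_areas)

# Stand-in socio-economic indicator (e.g. a median-income proxy).
weights = rng.normal(size=n_app_categories)
y = X @ weights + rng.normal(scale=0.5, size=n_areas)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"R-squared on held-out areas: {r2_score(y_test, model.predict(X_test)):.2f}")
```

In the paper’s setting, the rows would be real IRIS zones and the features would come from measured app traffic; model-explainability methods (the abstract does not specify which) would then indicate which usage patterns drive the predictions.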

The Smartness Mandate


Book by Orit Halpern and Robert Mitchell: “Smart phones. Smart cars. Smart homes. Smart cities. The imperative to make our world ever smarter in the face of increasingly complex challenges raises several questions: What is this “smartness mandate”? How has it emerged, and what does it say about our evolving way of understanding—and managing—reality? How have we come to see the planet and its denizens first and foremost as data-collecting instruments?

In The Smartness Mandate, Orit Halpern and Robert Mitchell make the radical suggestion that “smartness” is not primarily a technology, but rather an epistemology. Through this lens, they offer a critical exploration of the practices, technologies, and subjects that such an understanding relies upon—above all, artificial intelligence and machine learning. The authors approach these not simply as techniques for solving problems of calculation, but rather as modes of managing life (human and other) in terms of neo-Darwinian evolution, distributed intelligences, and “resilience,” all of which have serious implications for society, politics, and the environment.

The smartness mandate constitutes a new form of planetary governance, and Halpern and Mitchell aim to map the logic of this seemingly inexorable and now naturalized demand to compute, illuminate the genealogy of how we arrived here, and point to alternative imaginaries of the possibilities and potentials of smart technologies and infrastructures…(More)”.

Who owns the map? Data sovereignty and government spatial data collection, use, and dissemination


Paper by Peter A. Johnson and Teresa Scassa: “Maps, created through the collection, assembly, and analysis of spatial data, are used to support government planning and decision-making. Traditionally, the spatial data used to create maps have been collected, controlled, and disseminated by government, although over time this role has shifted. This shift has been driven by the availability of alternative sources of data collected by private sector companies, and by data contributed by volunteers to open mapping platforms such as OpenStreetMap. In theorizing this shift, we provide examples of how governments use data sovereignty as a tool to shape spatial data collection, use, and sharing. We frame four models of how governments may navigate shifting spatial data sovereignty regimes: first, with government retaining complete control over data collection; second, with government contracting a third party to provide specific data collection services, but with data ownership and dissemination responsibilities resting with government; third, with government purchasing data under terms of access set by third-party data collectors, who disseminate data to several parties; and finally, with government retreating from or relinquishing data sovereignty altogether. Within this rapidly changing landscape of data providers, we propose that governments must consider how to address data sovereignty concerns to retain their ability to control data use in the public interest…(More)”.

Why Europe must embrace participatory policymaking


Article by Alberto Alemanno, Claire Davenport, and Laura Batalla: “Today, Europe faces many threats – from economic uncertainty and war on its eastern borders to the rise of illiberal democracies and popular reactionary politicians.

As Europe recovers from the pandemic and grapples with economic and social unrest, it is at an inflection point; it can either create new spaces to build trust and a sense of shared purpose between citizens and governments, or it can continue to let its democratic institutions erode and distrust grow. 

The scale of such problems requires novel problem-solving and new perspectives, including those from civil society and citizens. Increased opportunities for citizens to engage with policymakers can lend legitimacy and accountability to traditionally ‘opaque’ policymaking processes. The future of the bloc hinges on its ability to not only sustain democratic institutions but to do so with buy-in from constituents.

Yet policymaking in the EU is often understood as a technocratic process that the public finds difficult, if not impossible, to navigate. The Spring 2022 Eurobarometer found that just 53% of respondents believed their voice counts in the EU. The issue is compounded by a lack of political literacy coupled with a dearth of channels for participation or co-creation. 

In parallel, there is a strong desire from citizens to make their voices heard. A January 2022 Special Eurobarometer on the Future of Europe found that 90% of respondents agreed that EU citizens’ voices should be taken more into account during decision-making. The Russian war in Ukraine has strengthened public support for the EU as a whole. According to the Spring 2022 Eurobarometer, 65% of Europeans view EU membership as a good thing. 

This is not to say that the EU has no existing models for citizen engagement. The European Citizens’ Initiative – a mechanism for petitioning the Commission to propose new laws – is one example of existing infrastructure. There is also an opportunity to build on the success of the Conference on the Future of Europe, a gathering held this past spring that gave citizens the opportunity to contribute policy recommendations and justifications alongside traditional EU policymakers…(More)”.

The Autocrat in Your iPhone


Article by Ronald J. Deibert: “In the summer of 2020, a Rwandan plot to capture exiled opposition leader Paul Rusesabagina drew international headlines. Rusesabagina is best known as the human rights defender and U.S. Presidential Medal of Freedom recipient who sheltered more than 1,200 Hutus and Tutsis in a hotel during the 1994 Rwandan genocide. But in the decades after the genocide, he also became a prominent U.S.-based critic of Rwandan President Paul Kagame. In August 2020, during a layover in Dubai, Rusesabagina was lured under false pretenses into boarding a plane bound for Kigali, the Rwandan capital, where government authorities immediately arrested him for his affiliation with an opposition group. The following year, a Rwandan court sentenced him to 25 years in prison, drawing the condemnation of international human rights groups, the European Parliament, and the U.S. Congress. 

Less noted at the time, however, was that this brazen cross-border operation may also have employed highly sophisticated digital surveillance. After Rusesabagina’s sentencing, Amnesty International and the Citizen Lab at the University of Toronto, a digital security research group I founded and direct, discovered that smartphones belonging to several of Rusesabagina’s family members who also lived abroad had been hacked by an advanced spyware program called Pegasus. Produced by the Israel-based NSO Group, Pegasus gives an operator near-total access to a target’s personal data. Forensic analysis revealed that the phone belonging to Rusesabagina’s daughter Carine Kanimba had been infected by the spyware around the time her father was kidnapped and again when she was trying to secure his release and was meeting with high-level officials in Europe and the U.S. State Department, including the U.S. special envoy for hostage affairs. NSO Group does not publicly identify its government clients, and the Rwandan government has denied using Pegasus, but strong circumstantial evidence points to the Kagame regime.

In fact, the incident is only one of dozens of cases in which Pegasus or other similar spyware technology has been found on the digital devices of prominent political opposition figures, journalists, and human rights activists in many countries. Providing the ability to clandestinely infiltrate even the most up-to-date smartphones—the latest “zero click” version of the spyware can penetrate a device without any action by the user—Pegasus has become the digital surveillance tool of choice for repressive regimes around the world. It has been used against government critics in the United Arab Emirates (UAE) and pro-democracy protesters in Thailand. It has been deployed by Mohammed bin Salman’s Saudi Arabia and Viktor Orban’s Hungary…(More)”.

Experiments of Living Constitutionalism


Paper by Cass R. Sunstein: “Experiments of Living Constitutionalism urges that the Constitution should be interpreted so as to allow both individuals and groups to experiment with different ways of living, whether we are speaking of religious practices, family arrangements, political associations, civic associations, child-rearing, schooling, romance, or work. Experiments of Living Constitutionalism prizes diversity and plurality; it gives pride of place to freedom of speech, freedom of association, and free exercise of religion (which it would protect against the imposition of secular values); it cherishes federalism; it opposes authoritarianism in all its forms. While Experiments of Living Constitutionalism has considerable appeal, my purpose in naming it is not to endorse or defend it, but to use it as a thought experiment and to contrast it with Common Good Constitutionalism, with the aim of specifying the criteria on which one might embrace or defend any approach to constitutional law. My central conclusion is that we cannot know whether to accept or reject Experiments of Living Constitutionalism, Common Good Constitutionalism, Common Law Constitutionalism, democracy-reinforcing approaches, moral readings, originalism, or any other proposed approach without a concrete sense of what it entails – of what kind of constitutional order it would likely bring about or produce. No approach to constitutional interpretation can be evaluated without asking how it fits with the evaluator’s “fixed points,” which operate at multiple levels of generality. The search for reflective equilibrium is essential in deciding whether to accept a theory of constitutional interpretation…(More)”.

Database States


Essay by Sanjana Varghese: “In early 2007, a package sent from the north of England to the National Audit Office (NAO) in London went missing. In it were two discs containing the personal records of twenty-five million people—including their addresses, birthdays, and national insurance numbers, which are required to work in the UK—that the NAO intended to use for an “independent survey” of the child benefits database to check for supposed fraud. Instead, that information was never recovered, a national scandal ensued, and the junior official who mailed the package was fired.

The UK, as it turns out, is not particularly adept at securing its data. In 2009, a group of British academics released a report calling the UK a “database state,” citing the existence of forty-six leaky databases that were poorly constructed and badly maintained. Databases that they examined ranged from one on childhood obesity rates (which recorded the height and weight measurements of every school pupil in the UK between the ages of five and eleven) to IDENT1, a police database containing the fingerprints of all known offenders. “In too many cases,” the researchers wrote, “the public are neither served nor protected by the increasingly complex and intrusive holdings of personal information, invading every aspect of our lives.”

In the years since, databases in the UK—and elsewhere—have only proliferated; increasingly manufactured and maintained by a nexus of private actors and state agencies, they are generated by and produce more and more information streams that inevitably have a material effect on the populations they’re used by and against. More than just a neutral method of storing information, databases shape and reshape the world around us; they aid and abet the state and private industry in matters of surveillance, police violence, environmental destruction, border enforcement, and more…(More)”.

Government must earn public trust that AI is being used safely and responsibly


Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things better and more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

Kid-edited journal pushes scientists for clear writing on complex topics


Article by Mark Johnson: “The reviewer was not impressed with the paper written by Israeli brain researcher Idan Segev and a colleague from Switzerland.

“Professor Idan,” she wrote to Segev. “I didn’t understand anything that you said.”

Segev and co-author Felix Schürmann revised their paper on the Human Brain Project, a massive effort seeking to channel all that we know about the mind into a vast computer model. But once again the reviewer sent it back. Still not clear enough. It took a third version to satisfy the reviewer.

“Okay,” said the reviewer, an 11-year-old girl from New York named Abby. “Now I understand.”

Such is the stringent editing process at the online science journal Frontiers for Young Minds, where top scientists, some of them Nobel Prize winners, submit papers on gene editing, gravitational waves, and other topics — to demanding reviewers ages 8 through 15.

Launched in 2013, the Lausanne, Switzerland-based publication is coming of age at a moment when skeptical members of the public look to scientists for clear guidance on the coronavirus and on potentially catastrophic climate change, among other issues. At Frontiers for Young Minds, the goal is not just to publish science papers but also to make them accessible to young readers like the reviewers. In doing so, it takes direct aim at a long-standing problem in science — poor communication between professionals and the public.

“Scientists tend to default to their own jargon and don’t think carefully about whether this is a word that the public actually knows,” said Jon Lorsch, director of the National Institute of General Medical Sciences. “Sometimes to actually explain something you need a sentence as opposed to the one word scientists are using.”

Dense language sends a message “that science is for scientists; that you have to be an ‘intellectual’ to read and understand scientific literature; and that science is not relevant or important for everyday life,” according to a paper published last year in Advances in Physiology Education.

Frontiers for Young Minds, which has drawn nearly 30 million online page views in its nine years, offers a different message on its homepage: “Science for kids, edited by kids.”…(More)”.