Stefaan Verhulst
Study by the European Parliament Research Service: “Public powers are currently facing extraordinary challenges, from finding ways to revive economic growth without damaging the environment, to managing a global health crisis, combating inequality and securing peace. In the coming decades, public regulators, and with them academics, civil society actors and corporate powers, will confront another dilemma that is fast becoming a clear and present challenge. This is whether to protect the current structures of democratic governance, despite the widespread perception of their inefficiency, or adapt them to fast-changing scenarios (but, in doing so, take the risk of further weakening democracy).
The picture is blurred, with diverging trends. On the one hand, the classic interest-representation model is under strain. Low voter turnouts, rising populist (or anti-establishment) political movements and widespread discontent towards public institutions are stress-testing the foundations of democratic systems. Democracy, ever-louder voices argue, is a mere chimera, and citizens have little meaningful impact on the public decision-making process. Therefore, critics suggest, alternatives to the democratic model must be considered if countries are to navigate future challenges. However, the reality is more complex. Indeed, claims of decaying democratic values are unambiguously rebutted by the birth of new grassroots movements, evidenced by record-speed civic mobilisation (especially among the young) and sustained by widespread street protest. Examined more closely, these events show that global demand for participation is alive and kicking.
The clash between these two opposing trends raises a number of questions that policy-makers and analysts must answer. First, will new, hybrid forms of democratic participation replace classic representation systems? Second, amid transformative processes, how will power roles be redistributed? A third set of questions looks at what is driving the transformation of democratic systems. As the venues of political discussion and interaction move from town halls and meeting rooms to online forums, it becomes critical to understand whether innovative democratic practices will be implemented almost exclusively through impersonal, aseptic digital platforms, or whether civic engagement will still be nurtured through in-person, local forums built to encourage debate.
This study begins by looking at the latest developments in the academic and institutional debates on democratic participation and civic engagement. Contributing to the crisis of traditional democratic models are political apathy and declining trust in political institutions, changes in methods of producing and sharing knowledge, and the pervasive nature of technology. How are public institutions reacting to these disruptive changes? The central part of this study examines a sample of initiatives trialled by public administrations (local, national and supranational) to engage citizens in policy-making. These initiatives are categorised by three criteria: first, the depth and complexity of cooperation between public structures and private actors; second, the design of procedures and structures of participation; and, third, the level of politicisation of the consultations, as well as the attractiveness of certain topics compared with others.
This analysis is intended to contribute to the on-going debate on the democratisation of the European Union (EU). The planned Conference on the Future of Europe, the recent reform of the European Citizens’ Initiative, and on-going debates on how to improve the transparency of EU decision-making are all designed to revive the civic spirit of the European public. These efforts notwithstanding, severe political, economic and societal challenges are jeopardising the very ideological foundations of the Union. The on-going coronavirus pandemic has placed the EU’s effectiveness under scrutiny once again. By appraising and applying methods tested by public sector institutions to engage citizens in policy-making, the EU could boost its chances of accomplishing its political mandate with success….(More)”
Byron Tau at the Wall Street Journal: “The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones.
The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden’s (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator.
IRS CI officials told Mr. Wyden’s office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden’s aide.
Justin Cole, a spokesman for IRS CI, said it entered into a “limited contract with Venntel to test their services against the law enforcement requirements of our agency.” IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in “significant money-laundering, cyber, drug and organized-crime cases.”
The episode demonstrates a growing law enforcement interest in reams of anonymized cellphone movement data collected by the marketing industry. Government entities can try to use the data to identify individuals—which in many cases isn’t difficult with such databases.
It also shows that data from the marketing industry can be used as an alternative to obtaining data from cellphone carriers, a process that requires a court order. Until 2018, prosecutors needed “reasonable grounds” to seek cell tower records from a carrier. In June 2018, the U.S. Supreme Court strengthened the requirement, ruling that authorities must show probable cause that a crime has been committed before such data can be obtained from carriers….(More)”
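Why is identifying individuals in such databases “not difficult”? Published research on mobility traces (de Montjoye et al., Scientific Reports, 2013) found that as few as four spatio-temporal points are enough to uniquely identify most individuals in a large dataset. The toy sketch below, using entirely synthetic data and bearing no relation to Venntel’s actual system, illustrates the mechanism: once an adversary knows a handful of a target’s stops, usually only one “anonymous” trace fits them all.

```python
# Toy re-identification sketch: synthetic data only, illustrative of
# the general mechanism, not of any specific commercial database.
import random

random.seed(42)
NUM_USERS, NUM_CELLS, PINGS_PER_USER = 10_000, 500, 50

# "Anonymized" traces: numeric user id -> set of (cell, hour) pings.
traces = {
    user: {(random.randrange(NUM_CELLS), random.randrange(24))
           for _ in range(PINGS_PER_USER)}
    for user in range(NUM_USERS)
}

def users_matching(known_points):
    """Every user whose trace contains all of the known points."""
    return [u for u, trace in traces.items() if known_points <= trace]

# An adversary learns just four of the target's stops (say, home,
# work, gym and cafe at known hours) from outside knowledge.
target = 0
known = set(random.sample(sorted(traces[target]), 4))
print(len(users_matching(known)))  # typically 1: the user is singled out
```

With 10,000 users and 12,000 possible (cell, hour) pairs, four known points almost always match exactly one trace, which is why removing names from location data offers little protection on its own.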
Article by Barbara Fister: “We are living in an “age of algorithms.” Vast quantities of information are collected, sorted, shared, combined, and acted on by proprietary black boxes. These systems use machine learning to build models and make predictions from data sets that may be out of date, incomplete, and biased. We will explore the ways bias creeps into information systems, take a look at how “big data,” artificial intelligence and machine learning often amplify bias unwittingly, and consider how these systems can be deliberately exploited by actors for whom bias is a feature, not a bug. Finally, we’ll discuss ways we can work with our communities to create a more fair and just information environment….(More)”.
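As an illustration of how such amplification can happen (constructed for this digest, not drawn from Fister’s article; the groups, rates and threshold rule are all invented), here is a minimal sketch in which a model trained on skewed historical decisions hardens a 70/30 statistical disparity into a 100/0 rule:

```python
# Minimal bias-amplification sketch: all names and rates are invented.
import random

random.seed(0)

def historical_decision(group):
    # Past human decisions favored group A: 70% approval vs 30% for B,
    # even though true qualification is identical across groups.
    return random.random() < (0.7 if group == "A" else 0.3)

train = [(g, historical_decision(g))
         for g in random.choices(["A", "B"], k=10_000)]

# "Training": approve a group whenever its historical approval rate
# exceeds 50%, the threshold rule many classifiers reduce to when
# group membership is the dominant signal in the data.
rate = {g: sum(y for gg, y in train if gg == g) /
           sum(1 for gg, _ in train if gg == g)
        for g in ("A", "B")}
model = {g: rate[g] > 0.5 for g in rate}

print(rate)   # roughly {'A': 0.70, 'B': 0.30}
print(model)  # {'A': True, 'B': False}: a 70/30 skew becomes 100/0
```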
Book by Oliver James, Asmus Leth Olsen, Donald Moynihan, and Gregg G. Van Ryzin: “A revolution in the measurement and reporting of government performance through the use of published metrics, rankings and reports has swept the globe at all levels of government. Performance metrics now inform important decisions by politicians, public managers and citizens.
However, this performance movement has neglected a second revolution in behavioral science that has revealed cognitive limitations and biases in people’s identification, perception, understanding and use of information. This Element introduces a new approach – behavioral public performance – that connects these two revolutions. Drawing especially on evidence from experiments, this approach examines the influence of characteristics of numbers, subtle framing of information, choice of benchmarks or comparisons, human motivation and information sources. These factors combine with the characteristics of information users and the political context to shape perceptions, judgment and decisions. Behavioral public performance suggests lessons to improve design and use of performance metrics in public management and democratic accountability….(More)”.
Report by the World Economic Forum: “The costs to society of public-sector corruption and weak accountability are staggering. In many parts of the world, public-sector corruption is the single-largest challenge, stifling social, economic and environmental development. Often, corruption centres around a lack of transparency, inadequate record-keeping and low public accountability.
Blockchain and distributed ledger technologies, when applied thoughtfully to certain corruption-prone government processes, can potentially increase transparency and accountability in these systems, reducing the risk or prevalence of corrupt activity.
In partnership with the Inter-American Development Bank (IDB) and the Office of the Inspector General of Colombia (Procuraduría General de Colombia), the Forum has led a multistakeholder team to investigate, design and trial the use of blockchain technology for corruption-prone government processes, anchored in the use case of public procurement.
Using cryptography and distributed consensus mechanisms, blockchain provides the unique combination of permanent and tamper-evident record-keeping, transaction transparency and auditability, automated functions with “smart contracts”, and the reduction of centralized authority and information ownership within processes. These properties make blockchain a high potential emerging technology to address corruption. The project chose to focus on the public procurement process because it constitutes one of the largest sites of corruption globally, stands to benefit from these technology properties and plays a significant role in serving public interest…(More)”.
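The tamper-evident record-keeping mentioned above rests on hash chaining: each entry commits cryptographically to the entire history before it, so retroactively editing any record breaks every later link. A toy sketch follows (the tenders and amounts are invented, and a production system adds distributed consensus, permissioning and smart contracts on top of this basic idea):

```python
# Toy hash chain: invented records, vastly simpler than any real system.
import hashlib
import json

def link(record, prev_hash):
    """Hash a record together with the hash of everything before it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

records = [
    {"tender": "road-2021-001", "winner": "Acme SA", "amount": 1_200_000},
    {"tender": "road-2021-002", "winner": "Beta Ltd", "amount": 800_000},
]

# Build the chain: each link commits to the full history before it.
chain, prev = [], "0" * 64
for rec in records:
    prev = link(rec, prev)
    chain.append(prev)

def verify(records, chain):
    prev = "0" * 64
    for rec, h in zip(records, chain):
        prev = link(rec, prev)
        if prev != h:
            return False
    return True

print(verify(records, chain))     # True
records[0]["amount"] = 2_400_000  # retroactive tampering...
print(verify(records, chain))     # ...breaks the chain: False
```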
David Roodman at Open Philanthropy: “… How much should we care about people who will live far in the future? Or about chickens today? What events could extinguish civilization? Could artificial intelligence (AI) surpass human intelligence?
One strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is here. (Comments welcome.) In this post, I’ll explain in less technical language what I learned.
It’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the growth rate (100% increase per week) holds fixed. The human economy has grown super-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000 [1]. Such a quick doubling was unthinkable in the Middle Ages and ancient times. Perhaps our earliest doublings took millennia.
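The distinction is easy to simulate. In the sketch below (illustrative parameters only, not Roodman’s actual model), exponential growth keeps a constant doubling time, while growth whose rate rises with size sees each doubling arrive faster than the last:

```python
# Sketch: constant-rate vs size-dependent growth; parameters illustrative.
def doubling_times(growth_rate, doublings=8):
    """Euler-integrate y' = growth_rate(y) * y and record the time
    elapsed between successive doublings of y."""
    y, t, dt = 1.0, 0.0, 0.001
    next_double, last, times = 2.0, 0.0, []
    while len(times) < doublings:
        y += growth_rate(y) * y * dt
        t += dt
        while y >= next_double and len(times) < doublings:
            times.append(round(t - last, 2))
            last, next_double = t, next_double * 2
    return times

# Exponential: 70% growth per period, doubling time fixed near 0.99.
print(doubling_times(lambda y: 0.7))
# Super-exponential: the rate itself rises with size, so the gaps
# between successive doublings keep shrinking.
print(doubling_times(lambda y: 0.2 * y ** 0.25))
```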
If global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be some plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, perhaps an economic upwelling that eclipses the industrial revolution as thoroughly as it eclipsed the agricultural revolution. Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.
I now understand more fully a view that shapes the work of Open Philanthropy. The range of possible futures is wide. So it is our task as citizens and funders, at this moment of potential leverage, to lower the odds of bad paths and raise the odds of good ones….(More)”.
Heidi Ledford at Nature: “Elizaveta Sivak spent nearly a decade training as a sociologist. Then, in the middle of a research project, she realized that she needed to head back to school.
Sivak studies families and childhood at the National Research University Higher School of Economics in Moscow. In 2015, she studied the movements of adolescents by asking them in a series of interviews to recount ten places that they had visited in the past five days. A year later, she had analysed the data and was feeling frustrated by the narrowness of relying on individual interviews, when a colleague pointed her to a paper analysing data from the Copenhagen Networks Study, a ground-breaking project that tracked the social-media contacts, demographics and location of about 1,000 students, with five-minute resolution, over five months [1]. She knew then that her field was about to change. “I realized that these new kinds of data will revolutionize social science forever,” she says. “And I thought that it’s really cool.”
With that, Sivak decided to learn how to program, and join the revolution. Now, she and other computational social scientists are exploring massive and unruly data sets, extracting meaning from society’s digital imprint. They are tracking people’s online activities; exploring digitized books and historical documents; interpreting data from wearable sensors that record a person’s every step and contact; conducting online surveys and experiments that collect millions of data points; and probing databases that are so large that they will yield secrets about society only with the help of sophisticated data analysis.
Over the past decade, researchers have used such techniques to pick apart topics that social scientists have chased for more than a century: from the psychological underpinnings of human morality, to the influence of misinformation, to the factors that make some artists more successful than others. One study uncovered widespread racism in algorithms that inform health-care decisions [2]; another used mobile-phone data to map impoverished regions in Rwanda [3].
“The biggest achievement is a shift in thinking about digital behavioural data as an interesting and useful source”, says Markus Strohmaier, a computational social scientist at the GESIS Leibniz Institute for the Social Sciences in Cologne, Germany.
Not everyone has embraced that shift. Some social scientists are concerned that the computer scientists flooding into the field with ambitions as big as their data sets are not sufficiently familiar with previous research. Another complaint is that some computational researchers look only at patterns and do not consider the causes, or that they draw weighty conclusions from incomplete and messy data — often gained from social-media platforms and other sources that are lacking in data hygiene.
The barbs fly both ways. Some computational social scientists who hail from fields such as physics and engineering argue that many social-science theories are too nebulous or poorly defined to be tested.
This all amounts to “a power struggle within the social-science camp”, says Marc Keuschnigg, an analytical sociologist at Linköping University in Norrköping, Sweden. “Who in the end succeeds will claim the label of the social sciences.”
But the two camps are starting to merge. “The intersection of computational social science with traditional social science is growing,” says Keuschnigg, pointing to the boom in shared journals, conferences and study programmes. “The mutual respect is growing, also.”…(More)”.
Discussion Paper by Fabio Ricciato, Albrecht Wirthmann and Martina Hahn: “In this discussion paper, we outline the motivations and the main principles of the Trusted Smart Statistics (TSS) concept that is under development in the European Statistical System. TSS represents the evolution of official statistics in response to the challenges posed by the new datafied society. Taking stock of the availability of new digital data sources, new technologies, and new behaviors, statistical offices are called nowadays to rethink the way they operate in order to reassert their role in modern democratic society. The issue at stake is considerably broader and deeper than merely adapting existing processes to embrace so-called Big Data. In several aspects, such evolution entails a fundamental paradigm shift with respect to the legacy model of official statistics production based on traditional data sources, for example, in the relation between data and computation, between data collection and analysis, between methodological development and statistical production, and of course in the roles of the various stakeholders and their mutual relationships. Such complex evolution must be guided by a comprehensive system-level view based on clearly spelled-out design principles. In this paper, we aim at providing a general account of the TSS concept reflecting the current state of the discussion within the European Statistical System….(More)”
Paper by Laetitia Gauvin, Michele Tizzoni, Simone Piaggesi, Andrew Young, Natalia Adler, Stefaan Verhulst, Leo Ferres & Ciro Cattuto in Humanities and Social Sciences Communications: “Mobile phone data have been extensively used to study urban mobility. However, studies based on gender-disaggregated large-scale data are still lacking, limiting our understanding of gendered aspects of urban mobility and our ability to design policies for gender equality. Here we study urban mobility from a gendered perspective, combining commercial and open datasets for the city of Santiago, Chile.
We analyze call detail records for a large cohort of anonymized mobile phone users and reveal a gender gap in mobility: women visit fewer unique locations than men, and distribute their time less equally among such locations. Mapping this mobility gap over administrative divisions, we observe that a wider gap is associated with lower income and lack of public and private transportation options. Our results uncover a complex interplay between gendered mobility patterns, socio-economic factors and urban affordances, calling for further research and providing insights for policymakers and urban planners….(More)”.
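The two quantities behind this gap, the number of distinct places visited and how evenly time is spread across them, can be computed from any location trace. Below is a short sketch over synthetic visits; Shannon entropy is a standard way to measure the evenness, though not necessarily the paper’s exact estimator:

```python
# Sketch with synthetic visits; the paper's estimators may differ.
from math import log2

def unique_locations(visits):
    """Number of distinct places in a location trace."""
    return len(set(visits))

def location_entropy(visits):
    """Shannon entropy of visit frequencies: higher means time is
    spread more equally across locations."""
    n = len(visits)
    probs = [visits.count(loc) / n for loc in set(visits)]
    return -sum(p * log2(p) for p in probs)

# Two toy traces: one concentrated on few places, one spread out.
user_a = ["home"] * 25 + ["work"] * 12 + ["market"] * 3
user_b = (["home"] * 12 + ["work"] * 10 + ["gym"] * 7
          + ["cafe"] * 6 + ["park"] * 5)

for name, visits in [("A", user_a), ("B", user_b)]:
    print(name, unique_locations(visits), round(location_entropy(visits), 2))
```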
Blog by Sally Kerr: “The COVID emergency has brought many challenges that were unimaginable a few months ago. The first priorities were safety and health, but when lockdown started one of the early issues was accessing and sharing local data to help everyone deal with and live through the emergency. Communities grappled with the scarcity of local data, finding it difficult to source for some services, food deliveries and goods. This was not a new issue, but the pandemic brought it into sharp relief.
Local data use covers a broad spectrum. People moving to a new area want information about the environment — schools, amenities, transport, crime rates and local health. For residents, continuing knowledge of business opening hours, events, local issues, council plans and roadworks remains important, not only for everyday living but to help understand issues and future plans that will change their environment. Really local data (hyperlocal data) is either fragmented or unavailable, making it difficult for local people to stay informed, whilst larger data sets about an area (e.g. population, school performance) are not always easy to understand or use. They sit in silos owned by different sectors, on disparate websites, usually collated for professional or research use.
Third sector organisations in a community will gather data relevant to their work, such as contacts and event numbers, but may not source wider data sets about the area, such as demographics, that could improve their work. Using this data could strengthen future grant applications by validating their work. For government or health bodies carrying out place-making community projects, there is a reliance on their own or national data sources supplemented with qualitative data snapshots. Their dependence on tried and tested sources is due to time and resource pressures, but it means there is no time to gather that rich seam of local data that profiles individual needs.
Imagine a future community where local data is collected and managed jointly by official organisations and the community itself, with shared aims and varied uses. Current and relevant data would be accessible and easy to understand, provided in formats that suit the user — from data scientist to school child. A curated data hub would help citizens learn data skills and carry out collaborative projects on anything from air quality to local biodiversity, managing the data and offering increased insight and useful validation for wider decision-making. Costs would fall as duplication and wasted effort were reduced….(More)”.