Report edited by Misuraca, G., Barcevičius, E. and Codagnone, C.: “This report presents the final results of the research “Exploring Digital Government Transformation in the EU: understanding public sector innovation in a data-driven society”, in short DigiGov. After introducing the design and methodology of the study, the report summarises the findings of a comprehensive analysis of the state of the art in the field, conducted by reviewing a vast body of scientific literature, policy documents and practitioner-generated reports across a broad range of disciplines and policy domains, with a focus on the EU. The scope and key dimensions underlying the development of the DigiGov-F conceptual framework are then presented. This is a theory-informed heuristic instrument to help map the effects of Digital Government Transformation and to support the definition of change strategies within the institutional settings of public administration. Further, the report provides an overview of the findings of the empirical case studies, which employed experimental or quasi-experimental components to test and refine the proposed conceptual framework while gathering evidence on the impacts of Digital Government Transformation and identifying real-life drivers and barriers across diverse Member States and policy domains. The report concludes by outlining future research and policy recommendations, and by depicting possible scenarios for future Digital Government Transformation, developed as a result of a dedicated foresight policy lab conducted as part of the expert consultation and stakeholder engagement process that accompanied all phases of the research. Insights generated by the study also serve to pave the way for further empirical research and policy experimentation, and to contribute to the policy debate on how to shape Digital Europe on the horizon of 2040….(More)”.
Research 4.0: research in the age of automation
Report by Rob Procter, Ben Glover, and Elliot Jones: “There is a growing consensus that we are at the start of a fourth industrial revolution, driven by developments in Artificial Intelligence (AI), machine learning, robotics, the Internet of Things, 3-D printing, nanotechnology, biotechnology, 5G, new forms of energy storage and quantum computing. This report seeks to understand what impact AI is having on the UK’s research sector and what implications this has for its future, with a particular focus on academic research.
Building on our interim report, we find that AI is increasingly deployed in academic research in the UK in a broad range of disciplines. The combination of an explosion of new digital data sources with powerful new analytical tools represents a ‘double dividend’ for researchers. This is allowing researchers to investigate questions that would have been unanswerable just a decade ago. Whilst there has been considerable take-up of AI in academic research, the report highlights that steps could be taken to ensure even wider adoption of these new techniques and technologies, including wider training in the necessary skills for effective utilisation of AI, faster routes to culture change and greater multi-disciplinary collaboration.
This report recognises that the Covid-19 pandemic has put universities under significant pressure, with considerable demands on their resources and simultaneous threats to their income. But as we emerge from the current crisis, we urge policy makers and universities to consider the report’s recommendations and take steps to fortify the UK’s position as a place of world-leading research. Indeed, the current crisis has only reminded us of the critical importance of a high-functioning and flourishing research sector. The report recommends:
The current post-16 curriculum should be reviewed to ensure all pupils receive a grounding in the basic digital, quantitative and ethical skills necessary for the effective and appropriate utilisation of AI.
A UK-wide audit of research computing and data infrastructure provision should be conducted to consider how access might be levelled up.
In its future spending on research and development, UK Research and Innovation (UKRI) should consider incentivising institutions to utilise AI wherever it can offer benefits to the economy and society.
Universities should take steps to make it easier for researchers to move between academia and industry, for example by putting less emphasis on publications and by recognising other outputs and measures of achievement when hiring for academic posts….(More)”.
Emerging models of data governance in the age of datafication
Paper by Marina Micheli et al.: “The article examines four models of data governance emerging in the current platform society. While major attention is currently given to the dominant model of corporate platforms collecting and economically exploiting massive amounts of personal data, other actors, such as small businesses, public bodies and civil society, also take part in data governance. The article sheds light on four models emerging from the practices of these actors: data sharing pools, data cooperatives, public data trusts and personal data sovereignty. We propose a social science-informed conceptualisation of data governance. Drawing from the notion of data infrastructure, we identify the models as a function of the stakeholders’ roles, their interrelationships, articulations of value, and governance principles. Addressing the politics of data, we consider the actors’ competitive struggles for governing data. This conceptualisation brings to the forefront the power relations and multifaceted economic and social interactions within data governance models emerging in an environment mainly dominated by corporate actors. These models highlight that civil society and public bodies are key actors in democratising data governance and redistributing the value produced through data. Through the discussion of the models, their underpinning principles and limitations, the article aims to inform future investigations of socio-technical imaginaries for the governance of data, particularly now that the policy debate around data governance is very active in Europe….(More)”.
Politicians should take citizens’ assemblies seriously
The Economist: “In 403 BC Athens decided to overhaul its institutions. A disastrous war with Sparta had shown that direct democracy, whereby adult male citizens voted on laws, was not enough to stop eloquent demagogues from getting what they wanted, and indeed from subverting democracy altogether. So a new body, chosen by lot, was set up to scrutinise the decisions of voters. It was called the nomothetai, or “layers down of law”, and it would be given the time to ponder difficult decisions, unmolested by silver-tongued orators and the schemes of ambitious politicians.
This ancient idea is back in vogue, and not before time. Around the world “citizens’ assemblies” and other deliberative groups are being created to consider questions that politicians have struggled to answer (see article). Over weeks or months, 100 or so citizens—picked at random, but with a view to creating a body reflective of the population as a whole in terms of gender, age, income and education—meet to discuss a divisive topic in a considered, careful way. Often they are paid for their time, to ensure that it is not just political wonks who sign up. At the end they present their recommendations to politicians. Before covid-19 these citizens met in conference centres in large cities where, by mingling over lunch-breaks, they discovered that the monsters who disagreed with them turned out to be human after all. Now, as a result of the pandemic, they mostly gather on Zoom.
Citizens’ assemblies are often promoted as a way to reverse the decline in trust in democracy, which has been precipitous in most of the developed world over the past decade or so. Last year the majority of people polled in America, Britain, France and Australia—along with many other rich countries—felt that, regardless of which party wins an election, nothing really changes. Politicians, a common complaint runs, have no understanding of, or interest in, the lives and concerns of ordinary people.
Citizens’ assemblies can help remedy that. They are not a substitute for the everyday business of legislating, but a way to break the deadlock when politicians have tried to deal with important issues and failed. Ordinary people, it turns out, are quite reasonable. A large four-day deliberative experiment in America softened Republicans’ views on immigration; Democrats became less eager to raise the minimum wage. Even more strikingly, two 18-month-long citizens’ assemblies in Ireland showed that the country, despite its deep Catholic roots, was far more socially liberal than politicians had realised. Assemblies overwhelmingly recommended the legalisation of both same-sex marriage and abortion….(More)”.
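The selection mechanism described above, drawing members by lot while matching the population on key demographics, is essentially stratified random sampling. As a purely illustrative aid, here is a minimal Python sketch of that idea; the function name, strata and quotas are hypothetical rather than taken from any real assembly’s methodology.

```python
import random
from collections import defaultdict

def select_assembly(pool, seats_per_stratum, seed=None):
    """Draw an assembly by lot, stratified so its composition matches
    per-stratum seat quotas (e.g. census shares of gender x age band)."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[person["stratum"]].append(person)
    assembly = []
    for stratum, seats in seats_per_stratum.items():
        volunteers = by_stratum[stratum]
        if len(volunteers) < seats:
            raise ValueError(f"not enough volunteers in stratum {stratum!r}")
        # Random draw within each stratum: sortition, not self-selection.
        assembly.extend(rng.sample(volunteers, seats))
    return assembly

# Hypothetical quotas for a 100-seat assembly, stratified by gender and age band.
quotas = {
    ("F", "18-39"): 18, ("F", "40-64"): 21, ("F", "65+"): 12,
    ("M", "18-39"): 18, ("M", "40-64"): 20, ("M", "65+"): 11,
}

# Simulated volunteer pool; in practice this would come from invitation mailouts.
gen = random.Random(0)
pool = [
    {"id": i, "stratum": (gen.choice("FM"), gen.choice(["18-39", "40-64", "65+"]))}
    for i in range(5000)
]
members = select_assembly(pool, quotas, seed=1)
assert len(members) == 100
```

Real-world sortition processes typically add further steps, such as weighting invitations to correct for unequal response rates across groups, but the stratified random draw is the core of the method.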
The forecasting fallacy
Essay by Alex Murrell: “Marketers are prone to a prediction.
You’ll find them in the annual tirade of trend decks. In the PowerPoint projections of self-proclaimed prophets. In the feeds of forecasters and futurists. They crop up on every conference stage. They make their mark on every marketing magazine. And they work their way into every white paper.
To understand the extent of our forecasting fascination, I analysed the websites of three management consultancies, looking for predictions with time frames ranging from 2025 to 2050. Whilst one prediction may be published multiple times, the size of the numbers still shocked me. Deloitte’s site makes 6,904 predictions. McKinsey & Company make 4,296. And Boston Consulting Group, 3,679.
In total, these three companies’ websites include just shy of 15,000 predictions stretching out over the next 30 years.
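(And the arithmetic holds: 6,904 + 4,296 + 3,679 = 14,879.)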
But it doesn’t stop there.
My analysis finished in the year 2050 not because the predictions came to an end but because my enthusiasm did.
Search the sites and you’ll find forecasts stretching all the way to the year 2100. We’re still finding our feet in this century but some, it seems, already understand the next.
I believe the vast majority of these to be not forecasts but fantasies. Snake oil dressed up as science. Fiction masquerading as fact.
This article assesses how predictions have performed in five fields. It argues that poor projections have propagated throughout our society and proliferated throughout our industry. It argues that our fixation with forecasts is fundamentally flawed.
So instead of focussing on the future, let’s take a moment to look at the predictions of the past. Let’s see how our projections panned out….
Viewed through the lens of Tetlock, it becomes clear that the 15,000 predictions with which I began this article are not forecasts but fantasies.
The projections look precise. They sound scientific. But these forecasts are nothing more than delusions with decimal places. Snake oil dressed up as statistics. Fiction masquerading as fact. They provide a feeling of certainty but they deliver anything but.
In his 1998 book The Fortune Sellers, the business writer William A. Sherden quantified our consensual hallucination:
“Each year the prediction industry showers us with $200 billion in (mostly erroneous) information. The forecasting track records for all types of experts are universally poor, whether we consider scientifically oriented professionals, such as economists, demographers, meteorologists, and seismologists, or psychic and astrological forecasters whose names are household words.”
The comparison between professional predictors and fortune tellers is apt.
From tarot cards to tea leaves, palmistry to pyromancy, clear visions of cloudy futures have always been sold to susceptible audiences.
Today, marketers are one such audience.
It’s time we opened our eyes….(More)”.
How Tech Companies Can Advance Data Science for Social Good
Essay by Nick Martin: “As the world struggles to achieve the UN’s Sustainable Development Goals (SDGs), the need for reliable data to track our progress is more important than ever. Government, civil society, and private sector organizations all play a role in producing, sharing, and using this data, but their information-gathering and -analysis efforts have been able to shed light on only 68 percent of the SDG indicators so far, according to a 2019 UN study.
To help fill the gap, the data science for social good (DSSG) movement has for years been making datasets about important social issues—such as health care infrastructure, school enrollment, air quality, and business registrations—available to trusted organizations or the public. Large tech companies such as Facebook, Google, and Amazon have recently begun to embrace the DSSG movement. Spurred on by advances in the field and by initiatives such as the Development Data Partnership, the World Economic Forum’s 2030Vision consortium, and Data Collaboratives, they’re offering information about social media users’ mobility during COVID-19, cloud computing infrastructure to help nonprofits analyze large datasets, and other important tools and services.
But sharing data resources doesn’t mean they’ll be used effectively, if at all, to advance social impact. High-impact results require recipients of data assistance to inhabit a robust, holistic data ecosystem that includes assets like policies for safely handling data and the skills to analyze it. As tech firms become increasingly involved with using data and data science to help achieve the SDGs, it’s important that they understand the possibilities and limitations of the nonprofits and other civil society organizations they’re working with. Without a firm grasp on the data ecosystems of their partners, all the technical wizardry in the world may be for naught.
Companies must ask questions such as: What incentives or disincentives are in place for nonprofits to experiment with data science in their work? What gaps remain between what nonprofits or data scientists need and the resources funders provide? What skills must be developed? To help find answers, TechChange, an organization dedicated to using technology for social good, partnered with Project17, Facebook’s partnerships-led initiative to accelerate progress on the SDGs. Over the past six months, the team led interviews with top figures in the DSSG community from industry, academia, and the public sector. The 14 experts shared numerous insights into using data and data science to advance social good and the SDGs. Four takeaways emerged from our conversations and research…(More)”.
Privacy in Pandemic: Law, Technology, and Public Health in the COVID-19 Crisis
Paper by Tiffany C. Li: “The COVID-19 pandemic has caused millions of deaths and disastrous consequences around the world, with lasting repercussions for every field of law, including privacy and technology. The unique characteristics of this pandemic have precipitated an increase in the use of new technologies, including remote communications platforms, healthcare robots, and medical AI. Public and private actors alike are using new technologies, like heat sensing, and technologically influenced programs, like contact tracing, in response, leading to a rise in government and corporate surveillance in sectors like healthcare, employment, education, and commerce. Advocates have raised the alarm about privacy and civil liberties violations, but the emergency nature of the pandemic has drowned out many concerns.
This Article is the first comprehensive account of privacy impacts related to technology and public health responses to the COVID-19 crisis. Many have written on the general need for better health privacy protections, education privacy protections, consumer privacy protections, and protections against government and corporate surveillance. However, this Article is the first to examine these problems of privacy and technology comprehensively in light of the pandemic, arguing that the lens of the pandemic exposes the need for both wide-scale and small-scale reform of privacy law. This Article approaches these problems with a focus on technical realities and social salience, and with a critical awareness of digital and political inequities, crafting normative recommendations with these concepts in mind.
Understanding privacy in this time of pandemic is critical for law and policymaking in the near future and for the long-term goals of creating a future society that protects both civil liberties and public health. It is also important to create a contemporary scholarly understanding of privacy in pandemic at this moment in time, as a matter of historical record. By examining privacy in pandemic, in the midst of pandemic, this Article seeks to create a holistic scholarly foundation for future work on privacy, technology, public health, and legal responses to global crises….(More)”
US Government Guide to Global Sharing of Personal Information
Book by IAPP: “The Guide to U.S. Government Practice on Global Sharing of Personal Information, Third Edition is a reference tool on U.S. government practice in government-to-government (G2G) sharing arrangements. The third edition contains new agreements, including the U.S.-U.K. CLOUD Act Agreement, the EU-U.S. Umbrella Agreement, the United States-Mexico-Canada Agreement, and the EU-U.S. Privacy Shield framework. This book examines those agreements as a way of establishing how practice has evolved. In addition to reviewing past agreements, the international privacy principles of the Organisation for Economic Co-operation and Development and the Asia-Pacific Economic Cooperation are reviewed for their relevance to G2G sharing. The guide is intended for lawyers, privacy professionals and individuals who wish to understand U.S. practice for sharing personal information across borders….(More)”.
AI Governance through Political Fora and Standards Developing Organizations
Report by Philippe Lorenz: “Shaping international norms around the ethics of Artificial Intelligence (AI) is perceived as a new responsibility by foreign policy makers. This responsibility is accompanied by a desire to play an active role in the most important international fora. Given the limited resources in terms of time and budget, foreign ministries need to set priorities for their involvement in the governance of AI. First and foremost, this requires an understanding of the entire AI governance landscape and the actors involved. The intention of this paper is to take a step back and familiarize foreign policy makers with the internal structures of the individual AI governance initiatives and the relationships between the involved actors. A basic understanding of the landscape also makes it easier to classify thematic developments and emerging actors, their agendas, and strategies.
This paper provides foreign policy practitioners with a mapping that can serve as a compass to navigate the complex web of stakeholders that shape the international debate on AI ethics. It plots political fora that serve as a platform for actors to agree upon ethical principles and pursue binding regulation. The mapping supplements the political purview with key actors who create technical standards on the ethics of AI. Furthermore, it describes the dynamic relationships between actors from these two domains. International governance addresses AI ethics through two different dimensions: political fora and Standards Developing Organizations (SDOs). Although it may be tempting to only engage on the diplomatic stage, this would be insufficient to help shape AI policy. Foreign policy makers must tend to both dimensions. While both governance worlds share the same topics and themes (in this case, AI ethics), they differ in their stakeholders, goals, outputs, and reach.
Key political and economic organizations such as the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), and the European Commission (EC) address ethical concerns raised by AI technologies. But so do SDOs such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the IEEE Standards Association (IEEE SA). Although actors from the latter category are typically concerned with the development of standards that address terminology, ontology, and technical benchmarks that facilitate product interoperability and market access, they, too, address AI ethics.
But these discussions on AI ethics will be useless if they do not inform the development of concrete policies for how to govern the technology.
At international political fora, on the one hand, states shape outputs that are often limited to non-binding, soft AI principles. SDOs, on the other hand, serve the private sector. They are characterized by consensus-based decision-making processes that facilitate the adoption of industry standards. These fora are generally not accessible to (foreign) policy makers, either because they cater exclusively to the private sector and bar policy makers from joining, or because active participation requires in-depth technical expertise as well as industry knowledge that may surpass diplomats’ skill sets. Nonetheless, as prominent standard-setting bodies such as ISO, IEC, and IEEE SA pursue industry standards in AI ethics, foreign policy makers need to take notice, as this will likely have consequences for their negotiations at international political fora.
The precondition for active engagement is to gain an overview of the AI Governance environment. Foreign policy practitioners need to understand the landscape of stakeholders, identify key actors, and start to strategically engage with questions relevant to AI governance. This is necessary to determine whether a given initiative on AI ethics is aligned with one’s own foreign policy goals and, therefore, worth engaging with. It is also helpful to assess industry dynamics that might affect geo-economic deliberations. Lastly, all of this is vital information to report back to government headquarters to inform policy making, as AI policy is a matter of domestic and foreign policy….(More)”.
Statistics, lies and the virus: lessons from a pandemic
Tim Harford at the Financial Times: “Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice.
The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again….
But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate…
As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. These tactics are powerful in part because they echo the ideals of science.
It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”. So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road.
The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors. Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn…(More)”.