The Wisdom of the Crowd: Promoting Media Development through Deliberative Initiatives


Report by Craig Matasick: “…innovative new set of citizen engagement practices—collectively known as deliberative democracy—offers important lessons that, when applied to media development, can help improve media assistance efforts and strengthen independent media environments around the world. At a time when disinformation runs rampant, it is more important than ever to strengthen public demand for credible information, reduce political polarization, and prevent media capture. Deliberative democracy approaches can help tackle these issues by expanding the number and diversity of voices that participate in policymaking, thereby fostering greater collective action and enhancing public support for media reform efforts.

Through a series of five illustrative case studies, the report demonstrates how deliberative democracy practices can be employed in both media development and democracy assistance efforts, particularly in the Global South. Such initiatives produce recommendations that take into account a plurality of voices while building trust between citizens and decision-makers by demonstrating to participants that their issues will be heard and addressed. Ultimately, this process can enable media development funders and practitioners to identify priorities and design locally relevant projects that have a higher likelihood for long-term impact.

– Deliberative democracy approaches, which are characterized by representative participation and moderated deliberation, provide a framework to generate demand-driven media development interventions while at the same time building greater public support for media reform efforts.

– Deliberative democracy initiatives foster collaboration across different segments of society, building trust in democratic institutions, combatting polarization, and avoiding elite capture.

– When employed by news organizations, deliberative approaches provide a better understanding of the issues their audiences care most about and uncover new problems affecting citizens that might not otherwise have come to light….(More)”.

Private Sector Data for Humanitarian Response: Closing the Gaps


Jos Berens at Bloomberg New Economy Forum: “…Despite these and other examples, data sharing between the private sector and humanitarian agencies is still limited. Out of 281 contributing organizations on HDX, only a handful come from the private sector. 

So why don’t we see more use of private sector data in humanitarian response? One obvious set of challenges concerns privacy, data protection and ethics. Companies and their customers are often wary of data being used in ways not related to the original purpose of data collection. Such concerns are understandable, especially given the potential legal and reputational consequences of personal data breaches and leaks.

Figuring out how to use this type of sensitive data in an already volatile setting seems problematic, and it is — negotiations between public and private partners in the middle of a crisis often get hung up on a lack of mutual understanding. Data sharing partnerships negotiated during emergencies often fail to mature beyond the design phase. This dynamic creates a loop of inaction due to a lack of urgency in between crises, followed by slow and halfway efforts when action is needed most.

To ensure that private sector data is accessible in an emergency, humanitarian organizations and private sector companies need to work together to build partnerships before a crisis. They can do this by taking the following actions: 

  • Invest in relationships and build trust. Both humanitarian organizations and private sector organizations should designate focal points who can quickly identify potentially useful data during a humanitarian emergency. A data stewards network which identifies and connects data responsibility leaders across organizations, as proposed by the NYU Govlab, is a great example of what such relationships could look like. Efforts to build trust with the general public regarding private sector data use for humanitarian response should also be strengthened, primarily through transparency about the means and purpose of such collaborations. This is particularly important in the context of COVID-19, as noted in the UN Comprehensive Response to COVID-19 and the World Economic Forum’s ‘Great Reset’ initiative…(More)”.

An Open-Source Tool to Accelerate Scientific Knowledge Discovery


Mozilla: “Timely and open access to novel outputs is key to scientific research. It allows scientists to reproduce, test, and build on one another’s work — and ultimately unlock progress.

The most recent example of this is the research into COVID-19. Much of the work was published in open access journals and swiftly reviewed, ultimately improving our understanding of how to slow the spread and treat the disease. Although this rapid increase in scientific publications is evident in other domains too, we might not be reaping the benefits. The tools to parse and combine this newly created knowledge have remained roughly the same for years.

Today, Mozilla Fellow Kostas Stathoulopoulos is launching Orion — an open-source tool to illuminate the science behind the science and accelerate knowledge discovery in the life sciences. Orion enables users to monitor progress in science, visually explore the scientific landscape, and search for relevant publications.

Orion

Orion collects, enriches and analyses scientific publications in the life sciences from Microsoft Academic Graph.

Users can leverage Orion’s views to interact with the data. The Exploration view shows all of the academic publications in a three-dimensional visualization. Every particle is a paper, and the distance between particles signifies their semantic similarity; the closer two particles are, the more semantically similar they are. The Metrics view visualizes indicators of scientific progress and how they have changed over time for countries and thematic topics. The Search view enables users to search for publications by submitting either a keyword or a longer query, for example, a sentence or a paragraph of a blog post they read online….(More)”.
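
The excerpt does not show how Orion computes semantic similarity, but the underlying idea of the Exploration view (distance reflects similarity) can be sketched in a few lines: represent each abstract as a vector, compare the vectors pairwise, and project them down to three coordinates for a spatial layout. The snippet below is a minimal illustration assuming TF-IDF vectors and scikit-learn; the abstracts are hypothetical and Orion’s own pipeline may use different embeddings and dimensionality reduction.

```python
# Minimal sketch of "distance reflects semantic similarity", assuming TF-IDF
# vectors and scikit-learn. Illustrative only; not Orion's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import TruncatedSVD

# Toy abstracts standing in for publications harvested from a source such as
# Microsoft Academic Graph (hypothetical examples, not real papers).
abstracts = [
    "CRISPR-Cas9 gene editing in human cell lines.",
    "Genome editing with engineered nucleases and CRISPR systems.",
    "Deep learning methods for protein structure prediction.",
    "Neural networks applied to protein folding and structure.",
]

# Represent each abstract as a TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Pairwise semantic similarity: higher values correspond to closer particles.
similarity = cosine_similarity(vectors)
print(similarity.round(2))

# Reduce the vectors to three coordinates, one point per paper, as a stand-in
# for the kind of spatial layout the Exploration view describes.
coords = TruncatedSVD(n_components=3).fit_transform(vectors)
print(coords.round(2))
```

Swapping TF-IDF for transformer-based sentence embeddings would capture deeper semantic relationships at the cost of heavier dependencies; the structure of the sketch stays the same.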

Why Modeling the Spread of COVID-19 Is So Damn Hard



Matthew Hutson at IEEE Spectrum: “…Researchers say they’ve learned a lot of lessons modeling this pandemic, lessons that will carry over to the next.

The first set of lessons is all about data. Garbage in, garbage out, they say. Jarad Niemi, an associate professor of statistics at Iowa State University who helps run the forecast hub used by the CDC, says it’s not clear what we should be predicting. Infections, deaths, and hospitalization numbers each have problems, which affect their usefulness not only as inputs for the model but also as outputs. It’s hard to know the true number of infections when not everyone is tested. Deaths are easier to count, but they lag weeks behind infections. Hospitalization numbers have immense practical importance for planning, but not all hospitals release those figures. How useful is it to predict those numbers if you never have the true numbers for comparison? What we need, he said, is systematized random testing of the population, to provide clear statistics of both the number of people currently infected and the number of people who have antibodies against the virus, indicating recovery. Prakash, of Georgia Tech, says governments should collect and release data quickly in centralized locations. He also advocates for central repositories of policy decisions, so modelers can quickly see which areas are implementing which distancing measures.

Researchers also talked about the need for a diversity of models. At the most basic level, averaging an ensemble of forecasts improves reliability. More important, each type of model has its own uses—and pitfalls. An SEIR model is a relatively simple tool for making long-term forecasts, but the devil is in the details of its parameters: How do you set those to match real-world conditions now and into the future? Get them wrong and the model can head off into fantasyland. Data-driven models can make accurate short-term forecasts, and machine learning may be good for predicting complicated factors. But will the inscrutable computations of, for instance, a neural network remain reliable when conditions change? Agent-based models look ideal for simulating possible interventions to guide policy, but they’re a lot of work to build and tricky to calibrate.
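
For readers who have not met the SEIR family, a minimal sketch may help make the parameter problem concrete: the population flows from Susceptible to Exposed to Infectious to Removed at rates set by exactly the parameters the paragraph warns about (transmission, incubation and recovery rates). The values below are illustrative, not fitted to any real outbreak, and real forecasting models layer time-varying parameters, interventions and uncertainty quantification on top of this skeleton.

```python
# Minimal SEIR sketch with illustrative, unfitted parameters, integrated with
# one-day Euler steps. Not a forecasting model; it only shows where the
# parameters beta (transmission rate), sigma (1/incubation period) and
# gamma (1/infectious period) enter the dynamics.
def seir(population, beta, sigma, gamma, days, i0=1.0):
    s, e, i, r = population - i0, 0.0, i0, 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * s * i / population   # S -> E (transmission)
        new_infectious = sigma * e                # E -> I (incubation ends)
        new_removed = gamma * i                   # I -> R (recovery/removal)
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_removed
        r += new_removed
        history.append((s, e, i, r))
    return history

# Illustrative values: R0 = beta/gamma = 2.5, 5-day incubation, 10-day
# infectious period, in a population of one million with a single seed case.
trajectory = seir(population=1_000_000, beta=0.25, sigma=0.2, gamma=0.1, days=365)
peak_infectious = max(step[2] for step in trajectory)
print(round(peak_infectious))
```

Small changes to beta, sigma or gamma shift the size and timing of the peak substantially, which is the “devil in the details” the researchers describe.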

Finally, researchers emphasize the need for agility. Niemi of Iowa State says software packages have made it easier to build models quickly, and the code-sharing site GitHub lets people share and compare their models. COVID-19 is giving modelers a chance to try out all their newest tools, says Meyers, of the University of Texas. “The pace of innovation, the pace of development, is unlike ever before,” she says. “There are new statistical methods, new kinds of data, new model structures.”…(More)”.

Public Sector Tech: New tools for the new normal


Special issue by ZDNet exploring “how new technologies like AI, cloud, drones, and 5G are helping government agencies, public organizations, and private companies respond to the events of today and tomorrow…:

Exploring Digital Government Transformation in the EU – Understanding public sector innovation in a data-driven society


Report edited by Misuraca, G., Barcevičius, E. and Codagnone, C.: “This report presents the final results of the research “Exploring Digital Government Transformation in the EU: understanding public sector innovation in a data-driven society”, in short DigiGov. After introducing the design and methodology of the study, the report summarises the findings of a comprehensive state-of-the-art analysis, conducted by reviewing a vast body of scientific literature, policy documents and practitioner-generated reports across a broad range of disciplines and policy domains, with a focus on the EU. The scope and key dimensions underlying the development of the DigiGov-F conceptual framework are then presented. This is a theory-informed heuristic instrument to help map the effects of Digital Government Transformation and to support the definition of change strategies within the institutional settings of public administration. Further, the report provides an overview of the findings of the empirical case studies, which employed experimental or quasi-experimental components to test and refine the proposed conceptual framework while gathering evidence on the impacts of Digital Government Transformation, identifying real-life drivers and barriers in diverse Member States and policy domains. The report concludes by outlining future research and policy recommendations, as well as possible scenarios for future Digital Government Transformation, developed as a result of a dedicated foresight policy lab conducted as part of the expert consultation and stakeholder engagement process that accompanied all phases of the research. Insights generated from the study also serve to pave the way for further empirical research and policy experimentation, and to contribute to the policy debate on how to shape Digital Europe at the horizon 2040….(More)”.

Research 4.0: research in the age of automation


Report by Rob Procter, Ben Glover, and Elliot Jones: “There is a growing consensus that we are at the start of a fourth industrial revolution, driven by developments in Artificial Intelligence, machine learning, robotics, the Internet of Things, 3-D printing, nanotechnology, biotechnology, 5G, new forms of energy storage and quantum computing. This report seeks to understand what impact AI is having on the UK’s research sector and what implications it has for its future, with a particular focus on academic research.

Building on our interim report, we find that AI is increasingly deployed in academic research in the UK in a broad range of disciplines. The combination of an explosion of new digital data sources with powerful new analytical tools represents a ‘double dividend’ for researchers. This is allowing researchers to investigate questions that would have been unanswerable just a decade ago. Whilst there has been considerable take-up of AI in academic research, the report highlights that steps could be taken to ensure even wider adoption of these new techniques and technologies, including wider training in the necessary skills for effective utilisation of AI, faster routes to culture change and greater multi-disciplinary collaboration.

This report recognises that the Covid-19 pandemic means universities are currently facing significant pressures, with considerable demands on their resources whilst simultaneously facing threats to income. But as we emerge from the current crisis, we urge policy makers and universities to consider the report’s recommendations and take steps to fortify the UK’s position as a place of world-leading research. Indeed, the current crisis has only reminded us of the critical importance of a highly functioning and flourishing research sector. The report recommends:

The current post-16 curriculum should be reviewed to ensure all pupils receive a grounding in the basic digital, quantitative and ethical skills necessary for the effective and appropriate utilisation of AI.

A UK-wide audit of research computing and data infrastructure provision should be conducted to consider how access might be levelled up.

UK Research and Innovation (UKRI) should consider incentivising institutions to utilise AI wherever it can offer benefits to the economy and society in their future spending on research and development.

Universities should take steps to ensure that it is easier for researchers to move between academia and industry, for example, by putting less emphasis on publications and recognising other outputs and measures of achievement when hiring for academic posts….(More)”.

Emerging models of data governance in the age of datafication


Paper by Marina Micheli et al: “The article examines four models of data governance emerging in the current platform society. While major attention is currently given to the dominant model of corporate platforms collecting and economically exploiting massive amounts of personal data, other actors, such as small businesses, public bodies and civic society, take also part in data governance. The article sheds light on four models emerging from the practices of these actors: data sharing pools, data cooperatives, public data trusts and personal data sovereignty. We propose a social science-informed conceptualisation of data governance. Drawing from the notion of data infrastructure we identify the models as a function of the stakeholders’ roles, their interrelationships, articulations of value, and governance principles. Addressing the politics of data, we considered the actors’ competitive struggles for governing data. This conceptualisation brings to the forefront the power relations and multifaceted economic and social interactions within data governance models emerging in an environment mainly dominated by corporate actors. These models highlight that civic society and public bodies are key actors for democratising data governance and redistributing value produced through data. Through the discussion of the models, their underpinning principles and limitations, the article wishes to inform future investigations of socio-technical imaginaries for the governance of data, particularly now that the policy debate around data governance is very active in Europe….(More)”.

Politicians should take citizens’ assemblies seriously


The Economist: “In 403 BC Athens decided to overhaul its institutions. A disastrous war with Sparta had shown that direct democracy, whereby adult male citizens voted on laws, was not enough to stop eloquent demagogues from getting what they wanted, and indeed from subverting democracy altogether. So a new body, chosen by lot, was set up to scrutinise the decisions of voters. It was called the nomothetai or “layers down of law” and it would be given the time to ponder difficult decisions, unmolested by silver-tongued orators and the schemes of ambitious politicians.

This ancient idea is back in vogue, and not before time. Around the world “citizens’ assemblies” and other deliberative groups are being created to consider questions that politicians have struggled to answer (see article). Over weeks or months, 100 or so citizens—picked at random, but with a view to creating a body reflective of the population as a whole in terms of gender, age, income and education—meet to discuss a divisive topic in a considered, careful way. Often they are paid for their time, to ensure that it is not just political wonks who sign up. At the end they present their recommendations to politicians. Before covid-19 these citizens met in conference centres in large cities where, by mingling over lunch-breaks, they discovered that the monsters who disagree with them turned out to be human after all. Now, as a result of the pandemic, they mostly gather on Zoom.

Citizens’ assemblies are often promoted as a way to reverse the decline in trust in democracy, which has been precipitous in most of the developed world over the past decade or so. Last year the majority of people polled in America, Britain, France and Australia—along with many other rich countries—felt that, regardless of which party wins an election, nothing really changes. Politicians, a common complaint runs, have no understanding of, or interest in, the lives and concerns of ordinary people.

Citizens’ assemblies can help remedy that. They are not a substitute for the everyday business of legislating, but a way to break the deadlock when politicians have tried to deal with important issues and failed. Ordinary people, it turns out, are quite reasonable. A large four-day deliberative experiment in America softened Republicans’ views on immigration; Democrats became less eager to raise the minimum wage. Even more strikingly, two 18-month-long citizens’ assemblies in Ireland showed that the country, despite its deep Catholic roots, was far more socially liberal than politicians had realised. Assemblies overwhelmingly recommended the legalisation of both same-sex marriage and abortion….(More)”.

The forecasting fallacy


Essay by Alex Murrell: “Marketers are prone to a prediction.

You’ll find them in the annual tirade of trend decks. In the PowerPoint projections of self-proclaimed prophets. In the feeds of forecasters and futurists. They crop up on every conference stage. They make their mark on every marketing magazine. And they work their way into every white paper.

To understand the extent of our forecasting fascination, I analysed the websites of three management consultancies looking for predictions with time frames ranging from 2025 to 2050. Whilst one prediction may be published multiple times, the size of the numbers still shocked me. Deloitte’s site makes 6,904 predictions. McKinsey & Company make 4,296. And Boston Consulting Group, 3,679.

In total, these three companies’ websites include just shy of 15,000 predictions stretching out over the next 30 years.

But it doesn’t stop there.

My analysis finished in the year 2050 not because the predictions came to an end but because my enthusiasm did.

Search the sites and you’ll find forecasts stretching all the way to the year 2100. We’re still finding our feet in this century but some, it seems, already understand the next.

I believe the vast majority of these to be not forecasts but fantasies. Snake oil dressed up as science. Fiction masquerading as fact.

This article assesses how predictions have performed in five fields. It argues that poor projections have propagated throughout our society and proliferated throughout our industry. It argues that our fixation with forecasts is fundamentally flawed.

So instead of focussing on the future, let’s take a moment to look at the predictions of the past. Let’s see how our projections panned out….

Viewed through the lens of Tetlock, it becomes clear that the 15,000 predictions with which I began this article are not forecasts but fantasies.

The projections look precise. They sound scientific. But these forecasts are nothing more than delusions with decimal places. Snake oil dressed up as statistics. Fiction masquerading as fact. They provide a feeling of certainty but they deliver anything but.

In his 1998 book The Fortune Sellers, the business writer William A. Sherden quantified our consensual hallucination: 

“Each year the prediction industry showers us with $200 billion in (mostly erroneous) information. The forecasting track records for all types of experts are universally poor, whether we consider scientifically oriented professionals, such as economists, demographers, meteorologists, and seismologists, or psychic and astrological forecasters whose names are household words.” 

The comparison between professional predictors and fortune tellers is apt.

From tarot cards to tea leaves, palmistry to pyromancy, clear visions of cloudy futures have always been sold to susceptible audiences. 

Today, marketers are one such audience.

It’s time we opened our eyes….(More)”.