Does protest really work in cosy democracies?


Steve Crawshaw at LSE Impact Blog: “…If it is possible for peaceful crowds to force the collapse of the Berlin Wall or to unseat a Mubarak, how easy should it be for protesters to persuade a democratically elected leader to retreat from “mere” bad policy? In truth, not easy at all. Two million marched in the UK against the Iraq War in 2003 – and it made not a blind bit of difference to Tony Blair’s determination to proceed with a war that the UN Secretary-General described as illegal. Blair was re-elected two years later.

After the inauguration of Donald Trump in January 2017, millions took part in the series of Women’s Marches in the United States and around the world. It seemed – it was – a powerful defining moment. And yet, at least in the short term, those remarkable protests were water off the presidential duck’s back. His response was mockery. In some respects, Trump could afford to mock. A man who has received 63 million votes is in a stronger position than the unelected leader who has to threaten or use violence to stay in power.

And yet.

One thing that protest in authoritarian and democratic contexts has in common is that its impact – including delayed impact – remains uncertain, both for those who protest and for those who are protested against.

Vaclav Havel argued that it was worth “living in truth” – speaking truth to power – even without any certainty of outcome. “Those that say individuals are not capable of changing anything are only looking for excuses.” In that context, what is perhaps most unacceptable is to mock those who take risks and seek change. Lord Charles Powell, former adviser to Margaret Thatcher, for example, explained to the umbrella protesters in Hong Kong in 2014 that they were foolish and naive. They should, he told them, learn to live with the “small black cloud” of anti-democratic pressures from Beijing. The protesters failed to heed Powell’s complacent message. In the words of Joshua Wong, on his way back to jail earlier in 2017: “You can lock up our bodies, but not our minds.”

Scepticism and failure are linked, as the Egyptian activist Asmaa Mahfouz made clear in a powerful video which helped trigger the uprising in 2011. The 26-year-old declared: “Whoever says it is not worth it because there will only be a handful of people, I want to tell him, ‘You are the reason for this.’ Sitting at home and just watching us on the news or Facebook leads to our humiliation.” The video went viral. Millions went out. The rest was history.

Even in a democracy, that same it-can’t-be-done logic sucks us in more often, perhaps, than we realize….(More)”.

Better Data for Better Policy: Accessing New Data Sources for Statistics Through Data Collaboratives


Medium Blog by Stefaan Verhulst: “We live in an increasingly quantified world, one where data is driving key business decisions. Data is claimed to be the new competitive advantage. Yet, paradoxically, even as our reliance on data increases and the call for agile, data-driven policy making becomes more pronounced, many Statistical Offices are confronted with shrinking budgets and increased demands to adapt their practices to a data age. If Statistical Offices fail to find new ways to deliver “evidence of tomorrow” by leveraging new data sources, public policy may be formed without access to the full range of available and relevant intelligence that most business leaders now have. At worst, a thinning evidence base and the lack of a rigorous data foundation could lead to errors and more “fake news,” with possibly harmful public policy implications.

While my talk focused on the key ways data can inform and ultimately transform the full policy cycle (see full presentation here), a key premise I examined was the need to access, utilize, and find insight in the vast reams of data and data expertise that exist in private hands, through the creation of new kinds of public-private partnerships, or “data collaboratives”, to establish more agile and data-driven policy making.


Applied to statistics, such approaches have already shown promise in a number of settings and countries. Eurostat, for instance, has experimented together with Statistics Belgium with leveraging call detail records provided by Proximus to document population density. Statistics Netherlands (CBS) recently launched a Center for Big Data Statistics (CBDS) in partnership with companies like Dell-EMC and Microsoft. Other National Statistics Offices (NSOs) are considering using scanner data for monitoring consumer prices (Austria); leveraging smart meter data (Canada); or using telecom data for complementing transportation statistics (Belgium). We are now living undeniably in an era of data. Much of this data is held by private corporations. The key task is thus to find a way of utilizing this data for the greater public good.

Value Proposition — and Challenges

There are several reasons to believe that public policy making and official statistics could indeed benefit from access to privately collected and held data. Among the value propositions:

  • Using private data can increase the scope and breadth, and thus the insights, offered by available evidence for policymakers;
  • Using private data can increase the quality and credibility of existing data sets (for instance, by complementing or validating them);
  • Private data can increase the timeliness and thus relevance of often-outdated information held by statistical agencies (social media streams, for example, can provide real-time insights into public behavior); and
  • Private data can lower costs and increase other efficiencies (for example, through more sophisticated analytical methods) for statistical organizations….(More)”.

Implementing Randomized Evaluations in Government


J-PAL Blog: “The J-PAL State and Local Innovation Initiative supports US state and local governments in using randomized evaluations to generate new and widely applicable lessons about the effectiveness of their programs and policies. Drawing upon the experience of the state and local governments selected to participate in the initiative to date, this guide provides practical guidance on how to identify good opportunities for randomized evaluations, how randomized evaluations can be feasibly embedded into the implementation of a program or policy, and how to overcome some of the common challenges in designing and carrying out randomized evaluations….(More)”.

Where’s the evidence? Obstacles to impact-gathering and how researchers might be better supported in future


Clare Wilkinson at the LSE Impact Blog: “…In a recent case study I explore how researchers from a broad range of research areas think about evidencing impact, what obstacles to impact-gathering might stand in their way, and how they might be further supported in future.

Unsurprisingly, the research found myriad potential barriers to gathering research impact, such as uncertainty over how impact is defined, captured, judged, and weighted, or the challenges for researchers in tracing impact back to a specific time period or individual piece of research. Many of these constraints have been recognised in previous research in this area – or were anticipated when impact was first discussed – but talking to researchers in 2015 about their impact experiences of the REF 2014 data-gathering period revealed a number of lingering concerns.

A further hazard identified by the case study is the inequality in knowledge around research impact, and the way this knowledge often exists in silos. Those researchers most likely to have obvious impact-generating activities were developing quite detailed and extensive experience of impact-capturing, while other researchers (including those at early-career stages) were less clear on the impact agenda’s relevance to them, or even whether their research had featured in an impact case study. Encouragingly, some researchers did seem to grow in confidence once they had authored an impact case study, but sharing skills and confidence with the “next generation” of researchers likely to have impact remains a possible issue for those supporting impact evidence-gathering.

So, how can researchers, across the board, be supported to effectively evidence their impact? Most popular amongst the options given to the 70 or so researchers who participated in this case study were: 1) approaches that offered them more time or funding to gather evidence; 2) opportunities to see best-practice examples; 3) opportunities to learn more about what “impact” means; and 4) the sharing of information on the types of evidence that could be collected….(More)”.

Introducing the Digital Policy Model Canvas


Blog by Stefaan Verhulst: “…Yesterday, the National Digital Policy Network of the World Economic Forum, of which I am a member, released a White Paper aimed at facilitating this process. The paper, entitled “Digital Policy Playbook 2017: Approaches to National Digital Governance,” examines a number of case studies from around the world to develop a “playbook” that can help leaders design digital policies that maximize the forthcoming opportunities and effectively meet the challenges. It is the result of a series of extensive discussions and consultations held around the world and attended by leading experts from various sectors and geographies…

How can such insights be translated into a practical and pragmatic approach to policymaking? In order to find implementable solutions, we sought to develop a “Digital Policy Model Canvas” that would guide policy makers in deriving specific policies and regulatory mechanisms in an agile and iterative manner, integrating both design thinking and evidence-based policy making. This notion of a canvas is borrowed from the business world. For example, in Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers, Alexander Osterwalder and Yves Pigneur introduce the idea of a “Business Model Canvas” to generate new, innovative business models that can help companies, and others, go beyond legacy systems and approaches.

Applying this approach to the world of digital policymaking and innovation, we arrive at the “Digital Policy Model Canvas” represented in the accompanying figure.

[Figure: the Digital Policy Model Canvas]

The design and implementation of such a canvas can be applied to a specific problem and/or geographic context, and would include the following steps…(More)”.

Open & Shut


Harsha Devulapalli: “Welcome to Open & Shut — a new blog dedicated to exploring the opportunities and challenges of working with open data in closed societies around the world. Although we’ll be exploring questions relevant to open data practitioners worldwide, we’re particularly interested in seeing how civil society groups and actors in the Global South are using open data to push for greater government transparency, and tackle daunting social and economic challenges facing their societies….Throughout this series we’ll be profiling and interviewing organisations working with open data worldwide, and providing do-it-yourself data tutorials that will be useful for beginners as well as data experts. …

What do we mean by the terms ‘open data’ and ‘closed societies’?

It’s important to be clear about what we’re dealing with here. So let’s establish some key terms. When we talk about ‘open data’, we mean data that anyone can access, use and share freely. And when we say ‘closed societies’, we’re referring to states or regions in which the political and social environment is actively hostile to notions of openness and public scrutiny, and which hold principles of freedom of information in low esteem. In closed societies, data is either not published at all by the government, or is published only in inaccessible formats, is incomplete, is hard to find, or is simply not digitised at all.

Iran is one such state that we would characterise as a ‘closed society’. At Small Media, we’ve had to confront the challenges of poor data practice, secrecy, and government opaqueness while undertaking work to support freedom of information and freedom of expression in the country. Based on these experiences, we’ve been working to build Iran Open Data — a civil society-led open data portal for Iran, in an effort to make Iranian government data more accessible and easier for researchers, journalists, and civil society actors to work with.

Iran Open Data — an open data portal for Iran, created by Small Media


…Open & Shut will shine a light on the exciting new ways that different groups are using data to question dominant narratives, transform public opinion, and bring about tangible change in closed societies. At the same time, it’ll demonstrate the challenges faced by open data advocates in opening up this valuable data. We intend to get the community talking about the need to build cross-border alliances in order to empower the open data movement, and to exchange knowledge and best practices despite the different needs and circumstances we all face….(More)

Where’s the ‘Civic’ in CivicTech?


Blog by Pius Enywaru: “The ideology of community participation and development is a crucial topic for any nation or community seeking to attain sustainable development. Here in Uganda, oftentimes when the opportunity for public participation arises, whether in local planning or in holding local politicians to account, the ‘don’t care’ attitude reigns….

What works?

Some of these tools include Ask Your Government Uganda, a platform built to help members of the public get the information they want from 106 public agencies in Uganda. U-Report, developed by UNICEF, provides an SMS-based social monitoring tool designed to address issues affecting the youth of Uganda. Mentioned in a previous blog post, Parliament Watch brings the proceedings of the Parliament of Uganda to the citizens. The organization leverages technology to share live updates on social media and provides in-depth analysis to create a better understanding of the business of Parliament. Other tools used include citizen scorecards, public media campaigns and public petitions. Just recently, we have had a few calls to action to get people to sign petitions, with somewhat lackluster results.

What doesn’t work?

Although usage of these tools has grown dramatically, there is still a lack of awareness and, consequently, of community participation. In order to understand the interventions which the Government of Uganda believes are necessary for sustainable urban development, it is important to examine the realities pertaining to urban areas and their planning processes. There are many challenges in deploying ICT-based community participation tools: limited funding and support for such initiatives, low literacy levels, low technical literacy, a large digital divide, low rates of seeking input from communities in developing these tools, lack of adequate government involvement, and resistance to or distrust of change by both government and citizens. Furthermore, in many of these initiatives, a large marketing or sensitization push is needed to let citizens know that these services exist for their benefit.

There are great minds who have brilliant ideas to try to bring literally everyone on board through civic engagement. When you have a look at their ideas, you will agree that they might indeed make a reputable service and bring about remarkable change in different communities. However, the biggest question has always been: “How do these ideas get executed and adopted by the communities that they target?” These ideas suffer from a major setback: a lack of inclusivity to enhance community participation. This still remains a puzzle for most folks that have these ideas….(More)”.

Why We Should Care About Bad Data


Blog by Stefaan G. Verhulst: “At a time of open and big data, data-led and evidence-based policy making has great potential to improve problem solving but will have limited, if not harmful, effects if the underlying components are riddled with bad data.

Why should we care about bad data? What do we mean by bad data? And what are the determining factors contributing to bad data that, if understood and addressed, could prevent or tackle bad data? These questions were the subject of my short presentation during a recent webinar on Bad Data: The Hobgoblin of Effective Government, hosted by the American Society for Public Administration and moderated by Richard Greene (Partner, Barrett and Greene Inc.). Other panelists included Ben Ward (Manager, Information Technology Audits Unit, California State Auditor’s Office) and Katherine Barrett (Partner, Barrett and Greene Inc.). The webinar was a follow-up to the excellent Special Issue of Governing on Bad Data written by Richard and Katherine….(More)”

Formalised data citation practices would encourage more authors to make their data available for reuse


Hyoungjoo Park and Dietmar Wolfram at the LSE Impact Blog: “Today’s researchers work in a heavily data-intensive and collaborative environment in order to further scientific discovery across and within fields. It is becoming routine for researchers (i.e. authors and data publishers) to submit their research data, such as datasets, biological samples in biomedical fields, and computer code, as supplementary information in order to comply with the data sharing requirements of major funding agencies, high-profile journals, and data journals. This is part of open science, where data and any publication products are expected to be made available to anyone interested.

Given that researchers benefit from publicly shared data through data reuse in their own research, researchers who provide access to data should be acknowledged for their contributions, much in the same way that authors are recognised for their research publications through citation. Researchers who use shared data or other shared research products (e.g. open access software, tissue cultures) should also acknowledge the providers of these resources through formal citation. At present, data citation is not widely practised in most disciplines and as an object of study remains largely overlooked….

We found that data citations appear in the references section of an article less frequently than in the main text, making it difficult to identify the reward and credit for data authors (i.e. data sharers). Consistent data citation formats could not be found. Current data citation practices do not (yet) benefit data sharers. Also, data citation was sometimes located in the supplementary information, outside of the references. Data that had been reused was often not acknowledged in the reference lists, but was rather hidden in the representation of data (e.g. tables, figures, images, graphs, and other elements), which may be a consequence of the fact that data citation practices are not yet common in scholarly communications.
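One reason such reuse is hard to surface is that dataset references often appear only as bare DOI strings in the main text, captions, or supplementary files rather than as formal reference-list entries. A minimal sketch of how such informal citations might be flagged for review (the pattern, helper name, and example caption below are illustrative assumptions, not drawn from the study itself):

```python
import re

# Illustrative only: most dataset DOIs (e.g. DataCite-registered DOIs) share
# the generic "10.<registrant>/<suffix>" shape, so a simple pattern can
# surface candidates for manual review. Real citation mining needs far more.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;/:A-Za-z0-9]+")

def find_candidate_citations(text):
    """Return DOI-like strings found in a passage of free text."""
    return DOI_PATTERN.findall(text)

# A hypothetical figure caption containing an informal data citation.
caption = ("Data were obtained from the Dryad repository "
           "(doi:10.5061/dryad.ab123) and reused from earlier fieldwork.")
print(find_candidate_citations(caption))
```

A scan like this only surfaces candidates; deciding whether a match is a genuine data citation, and whether the data sharers receive credit for it, still requires the kind of manual inspection the study describes.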

Ongoing challenges remain in identifying and documenting data citation. First, the practice of informal data citation presents a challenge for accurately documenting data citation. …

Second, data recitation by one or more co-authors of earlier studies (i.e. self-citation) is common, which reduces the broader impact of data sharing by limiting much of the reuse to the original authors…

Third, currently indexed data citations may not include rapidly advancing areas, such as in the hard sciences or computer engineering, because approximately 90% of indexed works were associated with journal articles…

Fourth, the number of authors associated with shared datasets raises questions of the ownership of and responsibility for a collective work, although some journals require one author to be responsible for the data used in the study…(More). (See also An examination of research data sharing and re-use: implications for data citation practice, published in Scientometrics)

Avoiding Garbage In – Garbage Out: Improving Administrative Data Quality for Research


Blog: “In June, I presented the webinar, “Improving Administrative Data Quality for Research and Analysis”, for members of the Association of Public Data Users (APDU). APDU is a national network that provides a venue to promote education, share news, and advocate on behalf of public data users.

The webinar served as a primer to help smaller organizations begin to use their data for research. Participants were given the tools to transform their administrative data into “research-ready” datasets.

I first reviewed seven major issues for administrative data quality and discussed how these issues can affect research and analysis. For instance, incorrect value formats, an unclear unit of analysis, and duplicate records can make the data difficult to use. Invalid or inconsistent values lead to inaccurate analysis results. Missing or outlier values can produce inaccurate and biased analysis results. All these issues make the data less useful for research.

Next, I presented concrete strategies for reviewing the data to identify each of these quality issues. I also discussed several tips to make the data review process easier, faster, and replicable. Most important among these tips are: (1) reviewing every variable in the data set, whether you expect problems or not, and (2) relying on data documentation to understand how the data should look….(More)”.
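A review pass of this kind can be sketched in a few lines of code. The snippet below is an illustration only, not the webinar’s actual tooling: `review_field` is a made-up helper, applied to hypothetical records, that flags three of the quality issues discussed above (missing values, duplicate records, and numeric outliers):

```python
import statistics

def review_field(records, key, z_threshold=3.0):
    """Flag missing values, exact-duplicate records, and numeric outliers
    (by z-score) for one field across a list of record dicts."""
    issues = {"missing": [], "duplicates": [], "outliers": []}
    seen = set()
    values = []  # (row index, numeric value) pairs for non-missing rows
    for i, rec in enumerate(records):
        val = rec.get(key)
        if val is None or val == "":
            issues["missing"].append(i)
            continue
        values.append((i, float(val)))
        row_key = tuple(sorted(rec.items()))  # whole-record duplicate check
        if row_key in seen:
            issues["duplicates"].append(i)
        seen.add(row_key)
    nums = [v for _, v in values]
    if len(nums) > 1 and statistics.pstdev(nums) > 0:
        mean, sd = statistics.fmean(nums), statistics.pstdev(nums)
        issues["outliers"] = [i for i, v in values
                              if abs(v - mean) / sd > z_threshold]
    return issues

# Hypothetical admin records: a duplicate row, a missing income, an outlier.
rows = [{"id": 1, "income": 40000}, {"id": 1, "income": 40000},
        {"id": 2, "income": ""}, {"id": 3, "income": 41000},
        {"id": 4, "income": 9000000}]
print(review_field(rows, "income", z_threshold=1.5))
```

Note the low z-threshold in the demo: with only a handful of rows, a z-score can never exceed roughly the square root of the sample size, so the conventional cut-off of 3 would flag nothing here. Running a check like this over every variable, not just the suspect ones, is exactly the first tip above.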