Open Government: Concepts and Challenges for Public Administration’s Management in the Digital Era


Tippawan Lorsuwannarat in the Journal of Public and Private Management: “This paper has four main objectives. First, to disseminate a study on the meaning and development of open government. Second, to describe the components of an open government. Third, to examine the international movement situation involved with open government. And last, to analyze the challenges related to the application of open government in Thailand’s current digital era. The paper suggests four periods of open government by linking to the concepts of public administration in accordance with the use of information technology in the public sector. The components of open government are consistent with the meaning of open government, including open data, open access, and open engagement. The current international situation of open government considers the ranking of open government and open government partnership. The challenges of adopting open government in Thailand include clear policy regarding open government, the digital gap, public organizational culture, and laws supporting privacy and data infrastructure….(More)”.

Research data infrastructures in the UK


The Open Research Data Task Force: “This report is intended to inform the work of the Open Research Data Task Force, which has been established with the aim of building on the principles set out in the Open Research Data Concordat (published in July 2016) to co-ordinate the creation of a roadmap to develop the infrastructure for open research data across the UK. As an initial contribution to that work, the report provides an outline of the policy and service infrastructure in the UK as it stands in the first half of 2017, including some comparisons with other countries; and it points to some key areas and issues which require attention. It does not seek to identify possible courses of action, nor even to suggest priorities the Task Force might consider in creating its final report to be published in 2018. That will be the focus of work for the Task Force over the next few months.

Why is this important?

The digital revolution continues to bring fundamental changes to all aspects of research: how it is conducted, the findings that are produced, and how they are interrogated and transmitted not only within the research community but more widely. We are as yet still in the early stages of a transformation in which progress is patchy across the research community, but which has already posed significant challenges for research funders and institutions, as well as for researchers themselves. Research data is at the heart of those challenges: not simply the datasets that provide the core of the evidence analysed in scholarly publications, but all the data created and collected throughout the research process. Such data represents a potentially valuable resource for people and organisations in the commercial, public and voluntary sectors, as well as for researchers. Access to such data, and more general moves towards open science, are also critically important in ensuring that research is reproducible, and thus in sustaining public confidence in the work of the research community. But effective use of research data depends on an infrastructure – of hardware, software and services, but also of policies, organisations and individuals operating at various levels – that is as yet far from fully formed. The exponential increases in volumes of data being generated by researchers create in themselves new demands for storage and computing power. But since the data is characterised more by heterogeneity than by uniformity, development of the infrastructure to manage it involves a complex set of requirements in preparing, collecting, selecting, analysing, processing, storing and preserving that data throughout its life cycle.

Over the past decade and more, there have been many initiatives on the part of research institutions, funders, and members of the research community at local, national and international levels to address some of these issues. Diversity is a key feature of the landscape, in terms of institutional types and locations, funding regimes, and nature and scope of partnerships, as well as differences between disciplines and subject areas. Hence decision-makers at various levels have fostered via their policies and strategies many community-organised developments, as well as their own initiatives and services. Significant progress has been achieved as a result, through the enthusiasm and commitment of key organisations and individuals. The less positive features have been a relative lack of harmonisation or consolidation, and there is an increasing awareness of patchiness in provision, with gaps, overlaps and inconsistencies. This is not surprising, since policies, strategies and services relating to research data necessarily affect all aspects of support for the diverse processes of research itself. Developing new policies and infrastructure for research data implies significant re-thinking of structures and regimes for supporting, fostering and promoting research itself. That in turn implies taking full account of widely-varying characteristics and needs of research of different kinds, while also keeping in clear view the benefits to be gained from better management of research data, and from greater openness in making data accessible for others to re-use for a wide range of different purposes….(More)”.

The State of Mobile Data for Social Good


UN Global Pulse: “This report outlines the value of harnessing mobile data for social good and provides an analysis of the gaps. Its aim is to survey the landscape today, assess the current barriers to scale, and make recommendations for a way forward.

The report reviews the challenges the field is currently facing and discusses a range of issues preventing mobile data from being used for social good. These challenges come from both the demand and supply side of mobile data and from the lack of coordination among stakeholders. It continues by providing a set of recommendations intended to move beyond short-term and ad hoc projects to more systematic and institutionalized implementations that are scalable, replicable, sustainable and focused on impact.

Finally, the report proposes a roadmap for 2018 calling all stakeholders to work on developing a scalable and impactful demonstration project that will help to establish the value of mobile data for social good. The report includes examples of innovation projects and ways in which mobile data is already being used to inform development and humanitarian work. It is intended to inspire social impact organizations and mobile network operators (MNOs) to collaborate in the exploration and application of new data sources, methods and technologies….(More)”

AI and the Law: Setting the Stage


Urs Gasser: “Lawmakers and regulators need to look at AI not as a homogeneous technology, but as a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogeneous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc. ….

Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues not only at the intersections of data and algorithms, but also of infrastructures and humans. …

When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. …

Information asymmetries and high degrees of uncertainty pose particular difficulties for the design of appropriate legal and regulatory responses to AI innovations — and require learning systems. AI-based applications — which are typically perceived as “black boxes” — affect a significant number of people, yet relatively few people develop and understand AI-based technologies. ….Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.

The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms, which are likely to vary across jurisdictions….

Law and regulation may constrain behavior yet also act as enablers and levelers — and are powerful tools as we aim for the development of AI for social good. …

Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. ….

In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. …

As AI applies to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run….(More).

Index: Collective Intelligence


By Hannah Pierce and Audrie Pirkl

The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on collective intelligence and was originally published in 2017.

The Collective Intelligence Universe

  • Amount of money that Reykjavik’s Better Neighbourhoods program has provided each year to crowdsourced citizen projects since 2012: € 2 million (Citizens Foundation)
  • Number of U.S. government challenges that people are currently participating in to submit their community solutions: 778 (Challenge.gov).
  • Percent of U.S. arts organizations that used social media to crowdsource ideas in 2013, from programming decisions to seminar scheduling details: 52% (Pew Research)
  • Number of Wikipedia members who have contributed to a page in the last 30 days: over 120,000 (Wikipedia Page Statistics)
  • Number of languages that the multinational crowdsourced Letters for Black Lives has been translated into: 23 (Letters for Black Lives)
  • Number of comments in a Reddit thread that established a more comprehensive timeline of the theater shooting in Aurora than the media: 1,272 (Reddit)
  • Number of physicians that are members of SERMO, a platform to crowdsource medical research: 800,000 (SERMO)
  • Number of citizen scientist projects registered on SciStarter: over 1,500 (Collective Intelligence 2017 Plenary Talk: Darlene Cavalier)
  • Entrants to NASA’s 2009 TopCoder Challenge: over 1,800 (NASA)

Infrastructure

  • Number of submissions for Block Holm (a digital platform that allows citizens to build “Minecraft” ideas on vacant city lots) within the first six months: over 10,000 (OpenLearn)
  • Number of people engaged by The Participatory Budgeting Project in the U.S.: over 300,000 (Participatory Budgeting Project)
  • Amount of money allocated to community projects through this initiative: $238,000,000

Health

  • Percentage of Internet-using adults in the US with chronic health conditions who have gone online to connect with others suffering from similar conditions: 23% (Pew Research)
  • Number of posts to Patient Opinion, a UK based platform for patients to provide anonymous feedback to healthcare providers: over 120,000 (Nesta)
    • Percent of NHS health trusts utilizing the posts to improve services in 2015: 90%
    • Stories posted per month: nearly 1,000 (The Guardian)
  • Number of tumors reported to the English National Cancer Registration each year: over 300,000 (Gov.UK)
  • Number of users of an open source artificial pancreas system: 310 (Collective Intelligence 2017 Plenary Talk: Dana Lewis)

Government

  • Number of submissions from 40 countries to the 2017 Open (Government) Contracting Innovation Challenge: 88 (The Open Data Institute)
  • Public-service complaints received each day via Indonesian digital platform Lapor!: over 500 (McKinsey & Company)
  • Number of registered users of Unicef Uganda’s weekly, SMS poll U-Report: 356,468 (U-Report)
  • Number of reports regarding government corruption in India submitted to IPaidaBribe since 2011: over 140,000 (IPaidaBribe)

Business

  • Reviews posted since Yelp’s creation in 2009: 121 million reviews (Statista)
  • Percent of Americans in 2016 who trust online customer reviews as much as personal recommendations: 84% (BrightLocal)
  • Number of companies and their subsidiaries mapped through the OpenCorporates platform: 60 million (Omidyar Network)

Crisis Response

Public Safety

  • Number of sexual harassment reports submitted from 50 cities in India and Nepal to SafeCity, a crowdsourcing site and mobile app: over 4,000 (SafeCity)
  • Number of people that used Facebook’s Safety Check, a feature that is being used in a new disaster mapping project, in the first 24 hours after the terror attacks in Paris: 4.1 million (Facebook)

What Bhutanese hazelnuts tell us about using data for good


Bruno Sánchez-Andrade Nuño at WEForum: “How are we going to close the $2.5 trillion/year finance gap to achieve the Sustainable Development Goals (SDGs)? Whose money? What business model? How to scale it that much? If you read the recent development economics scholar literature, or Jim Kim’s new financing approach of the World Bank, you might hear the benefits of “blended finance” or “triple bottom lines.” I want to tell you instead about a real case that makes a dent. I want to tell you about Sonam.

Sonam is a 60-year-old farmer in rural Bhutan. His children left for the capital, Thimphu, as many are doing nowadays. Four years ago, he decided to plant 2 acres of hazelnuts on an unused, rocky piece of his land. Hazelnut saplings, training, and regular supervision all come from “Mountain Hazelnuts”, Bhutan’s only 100% foreign-invested company. The company funds the costs of the trees and helps him manage his orchard. In return, when the nuts come, he will sell his harvest to the company above the guaranteed floor price, which will double his income, at a time when he will be too old to work in his rice field.

You could find similar impact stories for the roughly 10,000 farmers participating in this operation across the country. Farmers are carefully selected to ensure productivity and to maximize social and environmental benefits, such as supporting vulnerable households or reducing land erosion.

But Sonam also gets a visit from Kinzang every month. This is Kinzang’s first job. Without it, he would have moved to the city in hopes of finding a low-paying job, more likely joining the many unemployed youth from the countryside. Kinzang carefully records data on his smart-phone, talks to Sonam and digitally transmits the data back to the company HQ. There, if a problem is recorded with irrigation or pests, or there is any data anomaly, a team of experts (locally trained agronomists) will visit his orchard to figure out a solution.

The whole system of support, monitoring, and optimization lives on a carefully crafted data platform that feeds information to and from the farmers, the monitors, the agronomist experts, and local government authorities. It ensures that all 10 million trees are healthy and productive, minimizes extra costs, and tests and tracks the effectiveness of new treatments….
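The article does not describe the platform’s internals, but the monitoring step it sketches (spot a data anomaly, then dispatch an agronomist) can be illustrated with a minimal, hypothetical check. The function name, threshold, and soil-moisture readings below are all invented for the sketch:

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` sample standard
    deviations away from the mean of the series."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Hypothetical monthly soil-moisture readings for one orchard
moisture = [31.0, 29.5, 30.2, 30.8, 12.1, 29.9]
print(flag_anomalies(moisture))  # [4] -> the fifth month warrants an expert visit
```

In a deployment like the one described, only the flagged record, not the raw series, would need to travel from the field phone back to HQ.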

This is also a story which demonstrates how “Data is the new oil” is not the right approach. If Data is the new oil, you extract value from the data, without much regard to feeding back value to the source of the data. However, in this system, “Data is the new soil.” Data creates a higher ground in which value flows back and forth. It lifts the source of the data -the farmers- into new income generation, it enables optimized operations; and it also helps the whole country: Much of the data (such as road quality used by the monitors) is made open for the benefit of the Bhutanese people, without contradiction or friction with the business model….(More)”.

A Road-Map To Transform The Secure And Accessible Use Of Data For High Impact Program Management, Policy Development, And Scholarship


Preface and Roadmap by Andrew Reamer and Julia Lane: “Throughout the United States, there is broadly emerging support to significantly enhance the nation’s capacity for evidence-based policymaking. This support is shared across the public and private sectors and all levels of geography. In recent years, efforts to enable evidence-based analysis have been authorized by the U.S. Congress and funded by state and local governments and philanthropic foundations.

The potential exists for substantial change. There has been dramatic growth in technological capabilities to organize, link, and analyze massive volumes of data from multiple, disparate sources. A major resource is administrative data, which offer both advantages and challenges in comparison to data gathered through the surveys that have been the basis for much policymaking to date. To date, however, capability-building efforts have been largely “artisanal” in nature. As a result, the ecosystem of evidence-based policymaking capacity-building efforts is thin and weakly connected.

Each attempt to add a node to the system faces multiple barriers that require substantial time, effort, and luck to address. Those barriers are systemic. Too much attention is paid to the interests of researchers, rather than to the engagement of data producers. Individual projects serve focused needs and operate at a relative distance from one another. A need thus exists for researchers, policymakers and funding agencies to move from these artisanal efforts to new, generalized solutions that will catalyze the creation of a robust, large-scale data infrastructure for evidence-based policymaking.

This infrastructure will have to be a “complex, adaptive ecosystem” that expands, regenerates, and replicates as needed while allowing customization and local control. To create a path for achieving this goal, the U.S. Partnership on Mobility from Poverty commissioned 12 papers and then hosted a day-long gathering (January 23, 2017) of over 60 experts to discuss findings and implications for action. Funded by the Gates Foundation, the papers and workshop panels were organized around three topics: privacy and confidentiality, data providers, and comprehensive strategies.

This issue of the Annals showcases those 12 papers which jointly propose solutions for catalyzing the development of a data infrastructure for evidence-based policymaking.

This preface:

  • places current evidence-based policymaking efforts in historical context,
  • briefly describes the nature of multiple current efforts,
  • provides a conceptual framework for catalyzing the growth of any large institutional ecosystem,
  • identifies the major dimensions of the data infrastructure ecosystem,
  • describes key barriers to the expansion of that ecosystem, and
  • suggests a roadmap for catalyzing that expansion….(More)

(All 12 papers can be accessed here).

Open Data’s Effect on Food Security


Jeremy de Beer, Jeremiah Baarbé, and Sarah Thuswaldner at Open AIR: “Agricultural data is a vital resource in the effort to address food insecurity. This data is used across the food-production chain. For example, farmers rely on agricultural data to decide when to plant crops, scientists use data to conduct research on pests and design disease resistant plants, and governments make policy based on land use data. As the value of agricultural data is understood, there is a growing call for governments and firms to open their agricultural data.

Open data is data that anyone can access, use, or share. Open agricultural data has the potential to address food insecurity by making it easier for farmers and other stakeholders to access and use the data they need. Open data also builds trust and fosters collaboration among stakeholders that can lead to new discoveries to address the problems of feeding a growing population.

 

A network of partnerships is growing around agricultural data research. The Open African Innovation Research (Open AIR) network is researching open agricultural data in partnership with the Plant Phenotyping and Imaging Research Centre (P2IRC) and the Global Institute for Food Security (GIFS). This research builds on a partnership with the Global Open Data for Agriculture and Nutrition (GODAN) and they are exploring partnerships with Open Data for Development (OD4D) and other open data organizations.

…published two works on open agricultural data. Published in partnership with GODAN, “Ownership of Open Data” describes how intellectual property law defines ownership rights in data. Firms that collect data own the rights to data, which is a major factor in the power dynamics of open data. In July, Jeremiah Baarbé and Jeremy de Beer will be presenting “A Data Commons for Food Security” …The paper proposes a licensing model that allows farmers to benefit from the datasets to which they contribute. The license supports SME data collectors, who need sophisticated legal tools; contributors, who need engagement, privacy, control, and benefit sharing; and consumers who need open access….(More)“.

How did awful panel discussions become the default format?


 at The Guardian: “With the occasional exception, my mood in conferences usually swings between boredom, despair and rage. The turgid/self-aggrandizing keynotes and coma-inducing panels, followed by people (usually men) asking ‘questions’ that are really comments, and usually not on topic. The chairs who abdicate responsibility and let all the speakers over-run, so that the only genuinely productive bit of the day (networking at coffee breaks and lunch) gets squeezed. I end up dozing off, or furiously scribbling abuse in my notebook as a form of therapy, and hoping my neighbours can’t see what I’m writing. I probably look a bit unhinged…

This matters both because of the lost opportunity that badly run conferences represent, and because they cost money and time. I hope that if it was easy to fix, people would have done so already, but the fact is that the format is tired and unproductive.

For example, how did something as truly awful as panel discussions become the default format? They end up being a parade of people reading out papers, or they include terrible powerpoints crammed with too many words and illegible graphics. Can we try other formats, like speed dating (eg 10 people pitch their work for 2 minutes each, then each goes to a table and the audience hooks up (intellectually, I mean) with the ones they were interested in); world cafes; simulation games; joint tasks (eg come up with an infographic that explains X)? Anything, really. Yes ‘manels’ (male only panels – take the pledge here) are an outrage, but why not go for complete abolition, rather than mere gender balance?

Conferences frequently discuss evidence and results. So where is the evidence and results for the efficacy of conferences? Given the resources being ploughed into research on development (DFID alone spends about £350m a year), surely it would be a worthwhile investment, if it hasn’t already been done, to sponsor a research programme that runs multiple parallel experiments with different event formats, and compares the results in terms of participant feedback, how much people retain a month after the event etc? At the very least, can they find or commission a systematic review on what the existing evidence says?

Feedback systems could really help. A public eBay-type ratings system to rank speakers/conferences would provide nice examples of good practice for people to draw on (and bad practice to avoid). Or why not go real-time and encourage instant audience feedback? OK, maybe Occupy-style thumbs up from the audience if they like the speaker, thumbs down if they don’t would be a bit in-your-face for academe, but why not introduce a twitterwall to encourage the audience to interact with the speaker (perhaps with moderation to stop people testing the limits, as my LSE students did to Owen Barder last term)?
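The eBay-style rating idea can be made concrete with a tiny sketch: collect per-speaker scores and rank by average. The speakers and scores below are invented, and a real system would also need moderation and minimum-vote thresholds:

```python
from collections import defaultdict

# Hypothetical 1-5 star ratings submitted by audience members
votes = [("Alice", 5), ("Alice", 4), ("Bob", 2), ("Bob", 3), ("Carol", 4)]

# Group the scores by speaker
ratings = defaultdict(list)
for speaker, score in votes:
    ratings[speaker].append(score)

# Rank speakers by their average rating, best first
ranked = sorted(ratings, key=lambda s: sum(ratings[s]) / len(ratings[s]), reverse=True)
print(ranked)  # ['Alice', 'Carol', 'Bob']
```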

We need to get better at shaping the format to fit the precise purpose of the conference. … if the best you can manage is ‘disseminating new research’ or ‘information sharing’, alarm bells should probably ring….(More)”.

Can we predict political uprisings?


 at The Conversation: “Forecasting political unrest is a challenging task, especially in this era of post-truth and opinion polls.

Several studies by economists such as Paul Collier and Anke Hoeffler in 1998 and 2002 describe how economic indicators, such as slow income growth and natural resource dependence, can explain political upheaval. More specifically, low per capita income has been a significant trigger of civil unrest.

Economists James Fearon and David Laitin have also followed this hypothesis, showing how specific factors played an important role in Chad, Sudan and Somalia in outbreaks of political violence.

According to the International Country Risk Guide index, the internal political stability of Sudan fell by 15% in 2014, compared to the previous year. This decrease was after a reduction of its per capita income growth rate from 12% in 2012 to 2% in 2013.

By contrast, when the income per capita growth increased in 1997 compared to 1996, the score for political stability in Sudan increased by more than 100% in 1998. Political stability across any given year seems to be a function of income growth in the previous one.
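That one-year-lag claim is straightforward to probe in code: pair each year’s stability score with the previous year’s growth and compute a correlation. All figures below are invented for illustration, not taken from the ICRG data:

```python
# Hypothetical yearly series: per-capita income growth (%) and a
# political-stability index (0-100) for the same six years
growth    = [12.0, 2.0, 3.5, 5.0, 1.0, 8.0]
stability = [70.0, 68.0, 55.0, 58.0, 62.0, 54.0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pair stability in year t with growth in year t-1 (the one-year lag)
r = pearson(growth[:-1], stability[1:])
print(round(r, 2))
```

A positive r on real data would be consistent with the hypothesis; on its own it would not, of course, establish causation.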

When economics lie

But as the World Bank admitted, “economic indicators failed to predict Arab Spring”.

Usual economic performance indicators, such as gross domestic product, trade, and foreign direct investment, showed increasing economic development and globalisation in the Arab Spring countries over the preceding decade. Yet, in 2010, the region witnessed unprecedented uprisings that caused the collapse of regimes such as those in Tunisia, Egypt and Libya.

In our 2016 study we used data for more than 100 countries for the 1984–2012 period. We wanted to look at criteria other than economics to better understand the rise of political upheavals.

We found and quantified how corruption is a destabilising factor when the youth population (15-24 years old) exceeds 20% of the adult population.

Let’s examine the two main components of the study: demographics and corruption….

We are 90% confident that a youth bulge beyond 20% of the adult population, on average, combined with high levels of corruption can significantly destabilise political systems within specific countries, when the other factors described above are also taken into account. We are 99% confident about a youth bulge beyond the 30% level.
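The study’s actual model is not reproduced in the article, so the sketch below is only a toy scoring rule capturing the stated interaction: no effect unless the youth bulge crosses a threshold, then scaled by corruption. The function, weights, and inputs are all invented:

```python
def instability_risk(youth_share, corruption, youth_threshold=0.20):
    """Toy interaction score: zero unless the youth share of the adult
    population exceeds the threshold, then scaled by corruption (0 = clean,
    1 = most corrupt). Purely illustrative, not the paper's estimator."""
    bulge = max(0.0, youth_share - youth_threshold)
    return round(bulge * corruption * 10, 3)

print(instability_risk(0.35, 0.8))  # 1.2 -> sizable bulge plus high corruption
print(instability_risk(0.15, 0.8))  # 0.0 -> below threshold, no interaction effect
```

The key design point mirrors the finding: neither variable alone drives the score; the product of the two does.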

Our results can help explain the risk of internal conflict and the possible time window for it happening. They could guide policy makers and international organisations in allocating their anti-corruption budget better, taking into account the demographic structure of societies and the risk of political instability….(More).