Impact of open government: Mapping the research landscape


Stephen Davenport at OGP Blog: “Government reformers and development practitioners in the open government space are experiencing the heady times associated with a newly-defined agenda. The opportunity for innovation and positive change can at times feel boundless. Yet, working in a nascent field also means a relative lack of “proven” tools and solutions (to such extent as they ever exist in development).

More research on the potential for open government initiatives to improve lives is well underway. However, keeping up with the rapidly evolving landscape of ongoing research, emerging hypotheses, and high-priority knowledge gaps has been a challenge, even as investment in open government activities has accelerated. This becomes increasingly important as we gather to discuss progress at the OGP Africa Regional Meeting 2016 and the GIFT consultations in Cape Town next week (May 4-6).

Who’s doing what?
To advance the state of play, a new report commissioned by the World Bank, “Open Government Impact and Outcomes: Mapping the Landscape of Ongoing Research”, categorizes and takes stock of existing research. The report represents the first output of a newly-formed consortium that aims to generate practical, evidence-based guidance for open government stakeholders, building on and complementing the work of organizations across the academic-practitioner spectrum.

The mapping exercise led to the creation of an interactive platform with detailed information on how to find out more about each of the research projects covered, organized by a new typology for open government interventions. The inventory is limited in scope given practical and other considerations. It includes only projects that are currently underway and that are relatively large and international in nature. It is meant to be a forward-looking overview rather than a literature review.

Charting a course: How can the World Bank add value?
The scope for increasing the open government knowledge base remains vast. The report suggests that, given its role as a lender, convener, and policy advisor, the World Bank is well positioned to complement and support existing research in a number of ways, such as:

  • Taking a demand-driven approach, focusing on specific areas where it can identify lessons for stakeholders seeking to turn open government enthusiasm into tangible results.
  • Linking researchers with governments and practitioners to study specific areas of interest (in particular, access to information and social accountability interventions).
  • Evaluating the impact of open government reforms against baseline data that may not be public yet, but that are accessible to the World Bank.
  • Contributing to a better understanding of the role and impact of ICTs through work like the recently published study that examines the relationship between digital citizen engagement and government responsiveness.
  • Ensuring that World Bank loans and projects are conceived as opportunities for knowledge generation, while incorporating the most relevant and up-to-date evidence on what works in different contexts.
  • Leveraging its involvement in the Open Government Partnership to help stakeholders make evidence-based reform commitments….(More)

Data innovation: where to start? With the road less taken


Giulio Quaggiotto at Nesta: “Over the past decade we’ve seen an explosion in the amount of data we create, with more being captured about our lives than ever before. As an industry, the public sector creates an enormous amount of information – from census data to tax data to health data. When it comes to use of the data however, despite many initiatives trying to promote open and big data for public policy as well as evidence-based policymaking, we feel there is still a long way to go.

Why is that? Data initiatives are often created under the assumption that if data is available, people (whether citizens or governments) will use it. But this hasn’t necessarily proven to be the case, and this approach neglects analysis of power and an understanding of the political dynamics at play around data (particularly when data is seen as an output rather than input).

Many data activities are also informed by the ‘extractive industry’ paradigm: citizens and frontline workers are seen as passive ‘data producers’ who hand over their information for it to be analysed and mined behind closed doors by ‘the experts’.

Given budget constraints facing many local and central governments, even well intentioned initiatives often take an incremental, passive transparency approach (i.e. let’s open the data first then see what happens), or they adopt a ‘supply/demand’ metaphor to data provision and usage…..

As a response to these issues, this blog series will explore the hypothesis that putting the question of citizen and government agency – rather than openness, volume or availability – at the centre of data initiatives has the potential to unleash greater, potentially more disruptive innovation and to focus efforts (ultimately leading to cost savings).

Our argument will be that data innovation initiatives should be informed by the principles that:

  • People closer to the problem are the best positioned to provide additional context to the data and potentially act on solutions (hence the importance of “thick data”).

  • Citizens are active agents rather than passive providers of ‘digital traces’.

  • Governments are both users and providers of data.

  • We should ask at every step of the way how can we empower communities and frontline workers to take better decisions over time, and how can we use data to enhance the decision making of every actor in the system (from government to the private sector, from private citizens to social enterprises) in their role of changing things for the better… (More)

 

7 projects that state and local governments can reuse


Melody Kramer at 18F: “We’re starting to see state and local governments adapt or use 18F products or tools. Nothing could make us happier; all of our code (and content) is available for anyone to use and reuse.

There are a number of open source projects that 18F has worked on that could work particularly well at any level of government. We’re highlighting seven below:

Public website analytics

A screen shot of the City of Boulder's analytics dashboard

We worked with the Digital Analytics Program, the U.S. Digital Service (USDS), and the White House to build and host a dashboard showing real-time U.S. federal government web traffic. This helps staff and the public learn about how people use government websites. The dashboard itself is open source and can be adapted for a state or local government. We recently interviewed folks from Philadelphia, Boulder, and the state of Tennessee about how they’ve adapted the analytics dashboard for their own use.

Quick mini-sites for content

A screen shot of an 18F guide on the pages platform

We built a responsive, accessible website template (based on open source work by the Consumer Financial Protection Bureau) that we use primarily for documentation and guides. You can take the website template, adapt the colors and fonts to reflect your own style template, and have an easy way to release notes about a project. We’ve used this template to write a guide on accessibility in government, content guidelines, and a checklist for what needs to take place before we release software. You’re also welcome to take our content and adapt it for your own needs — what we write is in the public domain.

Insight into how people interact with government

People depend on others (for example, family members, friends, and public library staff) for help with government websites, but government services are not set up to support this type of assistance.

Over the last several months, staff from the General Services Administration’s USAGov and 18F teams have been talking to Americans around the country about their interactions with the federal government. The goal of the research was to identify and create cross-agency services and resources to improve how the government interacts with the public. Earlier this month, we published all of our research. You can read the full report with findings or explore what we learned on the 18F blog.

Market research for procurement

We developed a tool that helps you easily conduct market research across a number of categories for acquiring professional labor. You can read about how the city of Boston is using the tool to conduct market research.

Vocabulary for user-centered design

We released a deck of method cards that help research and design teams communicate a shared vocabulary across teams and agencies.

Task management

We recently developed a checklist program that helps users manage complex to-do lists. One feature: checklist item deadlines can be set according to a fixed date or relative to the completion of other items. This means you can create a checklist for all new employees, for example, and specify that task five should be completed four days after task four, whenever task four is completed by an employee.
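
To make the relative-deadline feature concrete, below is a minimal Python sketch of how that scheduling logic might work. It is our own illustration rather than 18F’s implementation, and the class and field names are invented.

```python
from datetime import date, timedelta
from typing import Optional

class ChecklistItem:
    """A to-do item whose deadline is either a fixed date or relative to another item."""

    def __init__(self, name: str, fixed_deadline: Optional[date] = None,
                 depends_on: Optional["ChecklistItem"] = None, days_after: int = 0):
        self.name = name
        self.fixed_deadline = fixed_deadline
        self.depends_on = depends_on      # the item whose completion starts the clock
        self.days_after = days_after      # offset applied once that item is completed
        self.completed_on: Optional[date] = None

    def deadline(self) -> Optional[date]:
        if self.fixed_deadline:
            return self.fixed_deadline
        if self.depends_on and self.depends_on.completed_on:
            return self.depends_on.completed_on + timedelta(days=self.days_after)
        return None  # relative deadline is unknown until the prerequisite is done

# "Task five should be completed four days after task four."
task_four = ChecklistItem("Set up workstation")
task_five = ChecklistItem("Complete security training", depends_on=task_four, days_after=4)

task_four.completed_on = date(2016, 5, 2)
print(task_five.deadline())  # 2016-05-06
```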

Help small businesses find opportunities

FBOpen is a set of open source tools to help small businesses search for opportunities to work with the U.S. government. FBOpen presents an Application Programming Interface (API) to published federal contracting opportunities and implements a beautiful graphical user interface to the same opportunities.

Anyone who wishes to may reuse this code to create their own website, free of charge and unencumbered by obligations….(More)”
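
For readers curious what reuse could look like in practice, here is a rough sketch of a client hitting a search API of this kind. The endpoint URL, query parameters, and response fields below are placeholders of our own, not FBOpen’s documented interface, so consult the project’s README for the real ones.

```python
import requests

# Hypothetical endpoint; substitute the URL where your FBOpen instance is deployed.
API_URL = "https://example.gov/fbopen/v0/opps"

def search_opportunities(keyword: str, limit: int = 10) -> list:
    """Search published contracting opportunities for a keyword (illustrative only)."""
    resp = requests.get(API_URL, params={"q": keyword, "limit": limit}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("docs", [])

if __name__ == "__main__":
    for opp in search_opportunities("landscaping"):
        # Field names are illustrative placeholders, not FBOpen's actual schema.
        print(opp.get("title"), "-", opp.get("close_date"))
```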

Why our peer review system is a toothless watchdog


Ivan Oransky and Adam Marcus at StatNews: “While some — namely, journal editors and publishers — would like us to consider it the opposable thumb of scientific publishing, the key to differentiating rigor from rubbish, some of those very same people seem to think it’s good for nothing. Here is a partial list of the things that editors, publishers, and others have told the world peer review is not designed to do:

1. Detect irresponsible practices

Don’t expect peer reviewers to figure out if authors are “using public data as if it were the author’s own, submitting papers with the same content to different journals, or submitting an article that has already been published in another language without reference to the original,” said the InterAcademy Partnership, a consortium of national scientific academies.

2. Detect fraud

“Journal editors will tell you that peer review is not designed to detect fraud — clever misinformation will sail right through no matter how scrupulous the reviews,” Dan Engber wrote in Slate in 2005.

3. Pick up plagiarism

Peer review “is not designed to pick up fraud or plagiarism, so unless those are really egregious it usually doesn’t,” according to the Rett Syndrome Research Trust.

4. Spot ethics issues

“It is not the role of the reviewer to spot ethics issues in papers,” said Jaap van Harten, executive publisher of Elsevier (the world’s largest academic imprint), in a recent interview. “It is the responsibility of the author to abide by the publishing ethics rules. Let’s look at it in a different way: If a person steals a pair of shoes from a shop, is this the fault of the shop for not protecting their goods or the shoplifter for stealing them? Of course the fault lies with the shoplifter who carried out the crime in the first place.”

5. Spot statistical flaccidity

“Peer reviewers do not check all the datasets, rerun calculations of p-values, and so forth, except in the cases where statistical reviewers are involved — and even in these cases, statistical reviewers often check the methodologies used, sample some data, and move on.” So wrote Kent Anderson, who has served as a publishing exec at several top journals, including Science and the New England Journal of Medicine, in a recent blog post.

6. Prevent really bad research from seeing the light of day

Again, Kent Anderson: “Even the most rigorous peer review at a journal cannot stop a study from being published somewhere. Peer reviewers can’t stop an author from self-promoting a published work later.”

But …

Even when you lower expectations for peer review, it appears to come up short. Richard Smith, former editor of the BMJ, reviewed research showing that the system may be worse than no review at all, at least in biomedicine. “Peer review is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading,” Smith wrote. “In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.”

So … what’s left? And are whatever scraps that remain worth the veneration peer review receives? Don’t write about anything that isn’t peer-reviewed, editors frequently admonish us journalists, even creating rules that make researchers afraid to talk to reporters before they’ve published. There’s a good chance it will turn out to be wrong. Oh? Greater than 50 percent? Because that’s the risk of preclinical research in biomedicine being wrong after it’s been peer-reviewed.

With friends like these, who needs peer review? In fact, we do need it, but not just in the black box that happens before publication. We need continual scrutiny of findings, at sites such as PubMed Commons and PubPeer, in what is known as post-publication peer review. That’s where the action is, and where the scientific record actually gets corrected….(More)”

Selected Readings on Data and Humanitarian Response


By Prianka Srinivasan and Stefaan G. Verhulst *

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data and humanitarian response was originally published in 2016.

Data, when used well in a trusted manner, allows humanitarian organizations to innovate how to respond to emergency events, including better coordination of post-disaster relief efforts, the ability to harness local knowledge to create more targeted relief strategies, and tools to predict and monitor disasters in real time. Consequently, in recent years both multinational groups and community-based advocates have begun to integrate data collection and evaluation strategies into their humanitarian operations, to better and more quickly respond to emergencies. However, this movement poses a number of challenges. Compared to the private sector, humanitarian organizations are often less equipped to successfully analyze and manage big data, which poses a number of risks related to the security of victims’ data. Furthermore, complex power dynamics which exist within humanitarian spaces may be further exacerbated through the introduction of new technologies and big data collection mechanisms. In the below we share:

  • Selected Reading List (summaries and hyperlinks)
  • Annotated Selected Reading List
  • Additional Readings

Selected Reading List  (summaries in alphabetical order)

Data and Humanitarian Response

Risks of Using Big Data in Humanitarian Context

Annotated Selected Reading List (in alphabetical order)

Karlsrud, John. “Peacekeeping 4.0: Harnessing the Potential of Big Data, Social Media, and Cyber Technologies.” Cyberspace and International Relations, 2013. http://bit.ly/235Qb3e

  • This chapter from the book “Cyberspace and International Relations” suggests that advances in big data give humanitarian organizations unprecedented opportunities to prevent and mitigate natural disasters and humanitarian crises. However, the sheer amount of unstructured data necessitates effective “data mining” strategies for multinational organizations to make the most use of this data.
  • By profiling some civil-society organizations who use big data in their peacekeeping efforts, Karlsrud suggests that these community-focused initiatives are leading the movement toward analyzing and using big data in countries vulnerable to crisis.
  • The chapter concludes by offering ten recommendations to UN peacekeeping forces to best realize the potential of big data and new technology in supporting their operations.

Mancini, Francesco. “New Technology and the Prevention of Violence and Conflict.” International Peace Institute, 2013. http://bit.ly/1ltLfNV

  • This report from the International Peace Institute looks at five case studies to assess how information and communications technologies (ICTs) can help prevent humanitarian conflicts and violence. Their findings suggest that context has a significant impact on the effectiveness of these ICTs for conflict prevention, and any strategies must take into account the specific contingencies of the region to be successful.
  • The report suggests seven lessons gleaned from the five case studies:
    • New technologies are just one in a variety of tools to combat violence. Consequently, organizations must investigate a variety of complementary strategies to prevent conflicts, and not simply rely on ICTs.
    • Not every community or social group will have the same relationship to technology, and their ability to adopt new technologies is similarly influenced by their context. Therefore, a detailed needs assessment must take place before any new technologies are implemented.
    • New technologies may be co-opted by violent groups seeking to maintain conflict in the region. Consequently, humanitarian groups must be sensitive to existing political actors and be aware of possible negative consequences these new technologies may spark.
    • Local input is integral to support conflict prevention measures, and there exists need for collaboration and awareness-raising with communities to ensure new technologies are sustainable and effective.
    • Information shared between civil-society groups has more potential to develop early-warning systems. This horizontal distribution of information can also allow communities to hold local leaders accountable.

Meier, Patrick. “Digital humanitarians: how big data is changing the face of humanitarian response.” CRC Press, 2015. http://amzn.to/1RQ4ozc

  • This book traces the emergence of “Digital Humanitarians”—people who harness new digital tools and technologies to support humanitarian action. Meier suggests that this has created a “nervous system” to connect people from disparate parts of the world, revolutionizing the way we respond to humanitarian crises.
  • Meier argues that such technology is reconfiguring the structure of the humanitarian space, where victims are not simply passive recipients of aid but can contribute alongside other global citizens. This in turn makes us more humane and engaged people.

Robertson, Andrew and Olson, Steve. “Using Data Sharing to Improve Coordination in Peacebuilding.” United States Institute of Peace, 2012. http://bit.ly/235QuLm

  • This report functions as an overview of a roundtable workshop on Technology, Science and Peace Building held at the United States Institute of Peace. The workshop aimed to investigate how data-sharing techniques can be developed for use in peace building or conflict management.
  • Four main themes emerged from discussions during the workshop:
    • “Data sharing requires working across a technology-culture divide”—Data sharing needs the foundation of a strong relationship, which can depend on sociocultural, rather than technological, factors.
    • “Information sharing requires building and maintaining trust”—These relationships are often built on trust, which can include both technological and social perspectives.
    • “Information sharing requires linking civilian-military policy discussions to technology”—Even when sophisticated data-sharing technologies exist, continuous engagement between different stakeholders is necessary. Therefore, procedures used to maintain civil-military engagement should be broadened to include technology.
    • “Collaboration software needs to be aligned with user needs”—Technology providers need to keep in mind the needs of their users, in this case peacebuilders, in order to ensure sustainability.

United Nations Independent Expert Advisory Group on a Data Revolution for Sustainable Development. “A World That Counts, Mobilizing the Data Revolution.” 2014. https://bit.ly/2Cb3lXq

  • This report focuses on the potential benefits and risks data holds for sustainable development. Included in this is a strategic framework for using and managing data for humanitarian purposes. It describes a need for a multinational consensus to be developed to ensure data is shared effectively and efficiently.
  • It suggests that “people who are counted”—i.e., those who are included in data collection processes—have better development outcomes and a better chance for humanitarian response in emergency or conflict situations.

Whipkey, Katie and Verity, Andrej. “Guidance for Incorporating Big Data into Humanitarian Operations.” Digital Humanitarian Network, 2015. http://bit.ly/1Y2BMkQ

  • This report produced by the Digital Humanitarian Network provides an overview of big data, and how humanitarian organizations can integrate this technology into their humanitarian response. It primarily functions as a guide for organizations, and provides concise, brief outlines of what big data is, and how it can benefit humanitarian groups.
  • The report puts forward four main benefits acquired through the use of big data by humanitarian organizations: 1) the ability to leverage real-time information; 2) the ability to make more informed decisions; 3) the ability to learn new insights; 4) the ability for organizations to be more prepared.
  • It goes on to assess seven challenges big data poses for humanitarian organizations: 1) geography, and the unequal access to technology across regions; 2) the potential for user error when processing data; 3) limited technology; 4) questionable validity of data; 5) underdeveloped policies and ethics relating to data management; 6) limitations relating to staff knowledge.

Risks of Using Big Data in Humanitarian Context
Crawford, Kate, and Megan Finn. “The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters.” GeoJournal 80.4, 2015. http://bit.ly/1X0F7AI

  • Crawford & Finn present a critical analysis of the use of big data in disaster management, taking a more skeptical tone to the data revolution facing humanitarian response.
  • They argue that though social and mobile data analysis can yield important insights and tools in crisis events, it also presents a number of limitations which can lead to oversights being made by researchers or humanitarian response teams.
  • Crawford & Finn explore the ethical concerns the use of big data in disaster events introduces, including issues of power, privacy, and consent.
  • The paper concludes by recommending that critical data studies, such as those presented in the paper, be integrated into crisis event research in order to analyze some of the assumptions which underlie mobile and social data.

Jacobsen, Katja Lindskov. “Making design safe for citizens: A hidden history of humanitarian experimentation.” Citizenship Studies 14.1: 89-103, 2010. http://bit.ly/1YaRTwG

  • This paper explores the phenomenon of “humanitarian experimentation,” where victims of disaster or conflict are the subjects of experiments to test the application of technologies before they are administered in greater civilian populations.
  • By analyzing the particular use of iris recognition technology during the repatriation of Afghan refugees from Pakistan between 2002 and 2007, Jacobsen suggests that this “humanitarian experimentation” compromises the security of already vulnerable refugees in order to better deliver biometric products to the rest of the world.

Responsible Data Forum. “Responsible Data Reflection Stories: An Overview.” http://bit.ly/1Rszrz1

  • This piece from the Responsible Data forum is primarily a compilation of “war stories” which follow some of the challenges in using big data for social good. By drawing on these crowdsourced cases, the Forum also presents an overview which makes key recommendations to overcome some of the challenges associated with big data in humanitarian organizations.
  • It finds that most of these challenges occur when organizations are ill-equipped to manage data and new technologies, or are unaware about how different groups interact in digital spaces in different ways.

Sandvik, Kristin Bergtora. “The humanitarian cyberspace: shrinking space or an expanding frontier?” Third World Quarterly 37:1, 17-32, 2016. http://bit.ly/1PIiACK

  • This paper analyzes the shift toward more technology-driven humanitarian work, which increasingly takes place online in cyberspace, reshaping the definition and application of aid. This has occurred alongside what many suggest is a shrinking of the humanitarian space.
  • Sandvik provides three interpretations of this phenomenon:
    • First, traditional threats remain in the humanitarian space, which are both modified and reinforced by technology.
    • Second, new threats are introduced by the increasing use of technology in humanitarianism, and consequently the humanitarian space may be broadening, not shrinking.
    • Finally, if the shrinking humanitarian space theory holds, cyberspace offers one example of this, where the increasing use of digital technology to manage disasters leads to a contraction of space through the proliferation of remote services.

Additional Readings on Data and Humanitarian Response

* Thanks to: Kristen B. Sandvik; Zara Rahman; Jennifer Schulte; Sean McDonald; Paul Currion; Dinorah Cantú-Pedraza and the Responsible Data Listserve for valuable input.

Accountable machines: bureaucratic cybernetics?


Alison Powell at LSE Media Policy Project Blog: “Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make it more difficult than in the past to oversee processes that regulators and activists argue ought to be transparent. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how the algorithms examined by scholars are most commonly those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that impact on our collective participation and belonging as citizenship. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what makes it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly-updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The consequence of perpetual production of data by individuals and the increasing capacity to analyse it even when it doesn’t appear to relate has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.
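
To see how behaviour-based categorisation works mechanically, consider the toy sketch below. It is entirely illustrative, not drawn from Cheney-Lippold’s work or any real platform: a classifier is trained on past behaviour that the system has already labelled with a marketing category, and new users are then assigned a probabilistic category regardless of how they would describe themselves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented behavioural signals per user:
# [sports-site visits, beauty-site visits, late-night sessions] in a week
past_behaviour = np.array([
    [9, 1, 4], [7, 0, 5], [8, 2, 3],   # users the system previously labelled category A
    [1, 8, 2], [0, 9, 1], [2, 7, 2],   # users the system previously labelled category B
])
system_labels = ["A", "A", "A", "B", "B", "B"]

model = LogisticRegression().fit(past_behaviour, system_labels)

# A new user is scored on behaviour alone; no self-disclosure is involved.
new_user = np.array([[6, 3, 4]])
confidence = model.predict_proba(new_user)[0]
print(dict(zip(model.classes_, confidence.round(2))))
# The output is a probabilistic, behaviour-based identity ("mostly category A"),
# narrower than anything the person chose to perform or disclose.
```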

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’ when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins you don’t need to even remember to take them out. With pothole reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community for example through library services?…

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers….(More)”

Visualizing Potential Outbreaks of the Zika Virus


Google’s Official Blog: “The recent Zika virus outbreak has caused concern around the world. We’ve seen more than a 3,000 percent increase in global search interest since November, and last month, the World Health Organization (WHO) declared a Public Health Emergency. The possible correlation between Zika, microcephaly, and other birth defects is particularly alarming.

But unlike many other global pandemics, the spread of Zika has been harder to identify, map and contain. It’s believed that 4 in 5 people with the virus don’t show any symptoms, and the primary transmitter for the disease, the Aedes mosquito species, is both widespread and challenging to eliminate. That means that fighting Zika requires raising awareness on how people can protect themselves, as well as supporting organizations who can help drive the development of rapid diagnostics and vaccines. We also have to find better ways to visualize the threat so that public health officials and NGOs can support communities at risk….

A volunteer team of Google engineers, designers, and data scientists is helping UNICEF build a platform to process data from different sources (i.e., weather and travel patterns) in order to visualize potential outbreaks. Ultimately, the goal of this open source platform is to identify the risk of Zika transmission for different regions and help UNICEF, governments and NGOs decide how and where to focus their time and resources. This set of tools is being prototyped for the Zika response, but will also be applicable to future emergencies….
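
The post does not describe the platform’s actual models, but a crude sketch of the general idea, combining heterogeneous signals such as climate and travel into a per-region risk score, might look like the following. All field names, weights, and thresholds here are invented for illustration.

```python
# Illustrative only: rank regions by a rough Zika transmission-risk score
# built from climate and travel signals. Weights and cut-offs are invented.

REGION_DATA = {
    # region: (avg_temp_c, weekly_rainfall_mm, inbound_travellers_from_affected_areas)
    "Region A": (28.0, 120.0, 5400),
    "Region B": (19.0, 30.0, 800),
    "Region C": (26.5, 95.0, 2100),
}

def clamp(value: float) -> float:
    """Keep each normalised signal within the 0-1 range."""
    return max(0.0, min(1.0, value))

def risk_score(temp_c: float, rainfall_mm: float, travellers: int) -> float:
    """Warmer, wetter regions with more inbound travel from affected areas score higher."""
    climate_suitability = clamp((temp_c - 18.0) / 12.0)    # warmth favourable to Aedes
    breeding_conditions = clamp(rainfall_mm / 150.0)       # proxy for standing water
    importation_pressure = clamp(travellers / 5000.0)      # proxy for imported cases
    return round(0.4 * climate_suitability
                 + 0.3 * breeding_conditions
                 + 0.3 * importation_pressure, 2)

for region, signals in sorted(REGION_DATA.items(),
                              key=lambda kv: risk_score(*kv[1]), reverse=True):
    print(region, risk_score(*signals))
```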

We already include robust information for 900+ health conditions directly on Search for people in the U.S. We’ve now also added extensive information about Zika globally in 16 languages, with an overview of the virus, symptom information, and Public Health Alerts that can be updated with new information as it becomes available.

We’re also working with popular YouTube creators across Latin America, including Sesame Street and Brazilian physician Drauzio Varella, to raise awareness about Zika prevention via their channels.

We hope these efforts are helpful in fighting this new public health emergency, and we will continue to do our part to help combat this outbreak.

And if you’re curious about what that 3,000 percent search increase looks like, take a look:….(More)

Value public information so we can trust it, rely on it and use it


Speech by David Fricker, the director general of the National Archives of Australia: “No-one can deny that we are in an age of information abundance. More and more we rely on information from a variety of sources and channels. Digital information is seductive, because it’s immediate, available and easy to move around. But digital information can be used for nefarious purposes. Social issues can be at odds with processes of government in this digital age. There is a tension between what is the information, where it comes from and how it’s going to be used.

How do we know if the information has reached us without being changed, whether that’s intentional or not?

How do we know that government digital information will be the authoritative source when the pace of information exchange is so rapid? In short, how do we know what to trust?

Consider the challenges and risks that come with the digital age: what does it really mean to have transparency and integrity of government in today’s digital environment?…

What does the digital age mean for government? Government should be delivering services online, which means thinking about location, timeliness and information accessibility. It’s about getting public-sector data out there, into the public, making it available to fuel the digital economy. And it’s about a process of change across government to make sure that we’re breaking down all of those silos, and the duplication and fragmentation which exist across government agencies in the application of information, communications, and technology…..

The digital age is about the digital economy, it’s about rethinking the economy of the nation through the lens of information that enables it. It’s understanding that a nation will be enriched, in terms of cultural life, prosperity and rights, if we embrace the digital economy. And that’s a weighty responsibility. But the responsibility is not mine alone. It’s a responsibility of everyone in the government who makes records in their daily work. It’s everyone’s responsibility to contribute to a transparent government. And that means changes in our thinking and in our actions….

What has changed about democracy in the digital age? Once upon a time if you wanted to express your anger about something, you might write a letter to the editor of the paper, to the government department, or to your local member and then expect some sort of an argument or discussion as a response. Now, you can bypass all of that. You might post an inflammatory tweet or blog, your comment gathers momentum, you pick the right hashtag, and off we go. It’s all happening: you’re trending on Twitter…..

If I turn to transparency now, at the top of the list is the basic recognition that government information is public information. The information of the government belongs to the people who elected that government. It’s fundamental to democratic values. It also means that there’s got to be more public participation in the development of public policy, which means that if you’re going to have evidence-based, informed policy development, government information has to be available, anywhere, anytime….

Good information governance is at the heart of managing digital information to provide access to that information into the future — ready access to government information is vital for transparency. Only when information is digital and managed well can government share it effectively with the Australian community, to the benefit of society and the economy.

There are many examples where poor information management, or poor information governance, has led to failures — both in the private and public sectors. Professor Peter Shergold’s recent report, Learning from Failure, why large government policy initiatives have gone so badly wrong in the past and how the chances of success in the future can be improved, highlights examples such as the Home Insulation Program, the NBN and Building the Education Revolution….(Full Speech)

Data Collaboratives: Matching Demand with Supply of (Corporate) Data to solve Public Problems


Blog by Stefaan G. Verhulst, Iryna Susha and Alexander Kostura: “Data Collaboratives refer to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (private companies, research institutions, and government agencies) share data to help solve public problems. Several of society’s greatest challenges — from climate change to poverty — require greater access to big (but not always open) data sets, more cross-sector collaboration, and increased capacity for data analysis. Participants at the workshop and breakout session explored the various ways in which data collaboratives can help meet these needs.

Matching supply and demand of data emerged as one of the most important and overarching issues facing the big and open data communities. Participants agreed that more experimentation is needed so that new, innovative and more successful models of data sharing can be identified.

How to discover and enable such models? When asked how the international community might foster greater experimentation, participants indicated the need to develop the following:

· A responsible data framework that serves to build trust in sharing data. Such a framework would be based upon existing frameworks but would also accommodate emerging technologies and practices, and it would need to be sensitive to public opinion and perception.

· Increased insight into different business models that may facilitate the sharing of data. As experimentation continues, the data community should map emerging practices and models of sharing so that successful cases can be replicated.

· Capacity to tap into the potential value of data. On the demand side, capacity refers to the ability to pose good questions, understand current data limitations, and seek new data sets responsibly. On the supply side, this means seeking shared value in collaboration, thinking creatively about public use of private data, and establishing norms of responsibility around security, privacy, and anonymity.

· Transparent stock of available data supply, including an inventory of what corporate data exist that can match multiple demands and that is shared through established networks and new collaborative institutional structures.

· Mapping emerging practices and models of sharing. Corporate data offers value not only for humanitarian action (which was a particular focus at the conference) but also for a variety of other domains, including science, agriculture, health care, urban development, environment, media and arts, and others. Gaining insight into the practices that emerge across sectors could broaden the spectrum of what is feasible and how.

In general, it was felt that understanding the business models underlying data collaboratives is of utmost importance in order to achieve win-win outcomes for both private and public sector players. Moreover, issues of public perception and trust were raised as important concerns of government organizations participating in data collaboratives….(More)”

Designing a toolkit for policy makers


From the UK’s Open Policy Making Blog: “At the end of the last parliament, the Cabinet Office Open Policy Making team launched the Open Policy Making toolkit. This was about giving policy makers the actual tools that will enable them to develop policy that is well informed, creative, tested, and works. The starting point was addressing their needs and giving them what they had told us they needed to develop policy in an ever-changing, fast-paced and digital world. In a way, it was the culmination of the open policy journey we have been on with departments for the past two years. In the first couple of months we saw thousands of unique visits….

Our first version toolkit has been used by 20,000 policy makers. This gave us a huge audience to talk to to make sure that we continue to meet the needs of policy makers and keep the toolkit relevant and useful. Although people have really enjoyed using the toolkit, user testing quickly showed us a few problems…

We knew what we needed to do. Help people understand what Open Policy Making was, how it impacted their policy making, and then to make it as simple as possible for them to know exactly what to do next.

So we came up with some quick ideas on pen and paper and tested them with people. We quickly discovered what not to do. People didn’t want a philosophy — they wanted to know exactly what to do, practical answers, and when to do it. They wanted a sort of design manual for policy….

How do we make user-centered design and open policy making as understood as agile?

We decided to organise the tools around the journey of a policy maker. What might a policy maker need to understand their users? How could they co-design ideas? How could they test policy? We looked at what tools and techniques they could use at the beginning, middle and end of a project, and organised tools accordingly.

We also added sections to remove confusion and hesitation. Our opening section ‘Getting started with Open Policy Making’ provides people with a clear understanding of what open policy making might mean to them, but also some practical considerations. Sections for limited timeframes and budgets help people realise that open policy can be done in almost any situation.

And finally we’ve created a much cleaner and simpler design that lets people show as much or little of the information as they need….

So go and check out the new toolkit and make more open policy yourselves….(More)”