Paper by A Zuiderwijk, M Janssen in the Government Information Quarterly: “In developing open data policies, governments aim to stimulate and guide the publication of government data and to gain advantages from its use. Currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research has been done on the issues that are covered by open data policies, their intent and actual impact. Furthermore, no suitable framework for comparing open data policies is available, as open data is a recent phenomenon and is thus in an early stage of development. In order to help bring about a better understanding of the common and differentiating elements in the policies and to identify the factors affecting the variation in policies, this paper develops a framework for comparing open data policies. The framework includes the factors of environment and context, policy content, performance indicators and public values. Using this framework, seven Dutch governmental policies at different government levels are compared. The comparison shows both similarities and differences among open data policies, providing opportunities to learn from each other’s policies. The findings suggest that current policies are rather inward looking, open data policies can be improved by collaborating with other organizations, focusing on the impact of the policy, stimulating the use of open data and looking at the need to create a culture in which publicizing data is incorporated in daily working processes. The findings could contribute to the development of new open data policies and the improvement of existing open data policies.”
People Powered Social Innovation: The Need for Citizen Engagement
Paper for the Lien Centre for Social Innovation (Singapore): “Citizen engagement is widely regarded as critical to the development and implementation of social innovation. What is citizen engagement? What does it mean in the context of social innovation? Julie Simon and Anna Davies discuss the importance as well as the implications of engaging the ground…”
Using Social Media in Rulemaking: Possibilities and Barriers
New paper by Michael Herz (Cardozo Legal Studies Research Paper No. 417): “‘Web 2.0’ is characterized by interaction, collaboration, non-static web sites, use of social media, and creation of user-generated content. In theory, these Web 2.0 tools can be harnessed not only in the private sphere but as tools for an e-topia of citizen engagement and participatory democracy. Notice-and-comment rulemaking is the pre-digital government process that most approached (while still falling far short of) the e-topian vision of public participation in deliberative governance. The notice-and-comment process for federal agency rulemaking has now changed from a paper process to an electronic one. Expectations for this switch were high; many anticipated a revolution that would make rulemaking not just more efficient, but also more broadly participatory, democratic, and dialogic. In the event, the move online has not produced a fundamental shift in the nature of notice-and-comment rulemaking. At the same time, the online world in general has come to be increasingly characterized by participatory and dialogic activities, with a move from static, text-based websites to dynamic, multi-media platforms with large amounts of user-generated content. This shift has not left agencies untouched. To the contrary, agencies at all levels of government have embraced social media – by late 2013 there were over 1000 registered federal agency Twitter feeds and over 1000 registered federal agency Facebook pages, for example – but these have been used much more as tools for broadcasting the agency’s message than for dialogue or obtaining input. All of which invites the question whether agencies could or should directly rely on social media in the rulemaking process.
This study reviews how federal agencies have been using social media to date and considers the practical and legal barriers to using social media in rulemaking, not just to raise the visibility of rulemakings, which is certainly happening, but to gather relevant input and help formulate the content of rules.
The study was undertaken for the Administrative Conference of the United States and is the basis for a set of recommendations adopted by ACUS in December 2013. Those recommendations overlap with but are not identical to the recommendations set out herein.”
Selected Readings on Data Visualization
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data visualization was originally published in 2013.
Data visualization is a response to the ever-increasing amount of information in the world. With big data, informatics and predictive analytics, we have an unprecedented opportunity to revolutionize policy-making. Yet data by itself can be overwhelming. New tools and techniques for visualizing information can help policymakers clearly articulate insights drawn from data. Moreover, the rise of open data is enabling those outside of government to create informative and visually arresting representations of public information that can be used to support decision-making by those inside or outside governing institutions.
Selected Reading List (in alphabetical order)
- D.J. Duke, K.W. Brodlie, D.A. Duce and I. Herman — Do You See What I Mean? [Data Visualization] — a paper arguing for a systematic ontology for data visualization.
- Michael Friendly — A Brief History of Data Visualization — a brief overview of the history of data visualization that traces the path from early cartography and statistical graphics to modern visualization practices.
- Alvaro Graves and James Hendler — Visualization Tools for Open Government Data — a paper arguing for wider use of visualization tools to help open government data achieve its full potential for impact.
- César A. Hidalgo — Graphical Statistical Methods for the Representation of the Human Development Index and Its Components — an argument for visualizing complex data to aid in human development initiatives.
- Genie Stowers — The Use of Data Visualization in Government — a report aimed at helping public sector managers make the most of data visualization tools and practices.
Annotated Selected Reading List (in alphabetical order)
Duke, D.J., K.W. Brodlie, D.A. Duce and I. Herman. “Do You See What I Mean? [Data Visualization].” IEEE Computer Graphics and Applications 25, no. 3 (2005): 6–9. http://bit.ly/1aeU6yA.
- In this paper, the authors argue that a more systematic ontology for data visualization is needed to ensure the successful communication of meaning. “Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess this against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning.”
- The authors “believe that now is the right time to consider an ontology for visualization,” and that “as visualization moves from just a private enterprise involving data and tools owned by a research team into a public activity using shared data repositories, computational grids, and distributed collaboration…[m]eaning becomes a shared responsibility and resource. Through the Semantic Web, there is both the means and motivation to develop a shared picture of what we see when we turn and look within our own field.”
Friendly, Michael. “A Brief History of Data Visualization.” In Handbook of Data Visualization, 15–56. Springer Handbooks of Computational Statistics. Springer Berlin Heidelberg, 2008. http://bit.ly/17fM1e9.
- In this paper, Friendly explores the “deep roots” of modern data visualization. “These roots reach into the histories of the earliest map making and visual depiction, and later into thematic cartography, statistics and statistical graphics, medicine and other fields. Along the way, developments in technologies (printing, reproduction), mathematical theory and practice, and empirical observation and recording enabled the wider use of graphics and new advances in form and content.”
- Just as the visualization of data in general is far from a new practice, Friendly shows that the graphical representation of government information has a similarly long history. “The collection, organization and dissemination of official government statistics on population, trade and commerce, social, moral and political issues became widespread in most of the countries of Europe from about 1825 to 1870. Reports containing data graphics were published with some regularity in France, Germany, Hungary and Finland, and with tabular displays in Sweden, Holland, Italy and elsewhere.”
Graves, Alvaro and James Hendler. “Visualization Tools for Open Government Data.” In Proceedings of the 14th Annual International Conference on Digital Government Research, 136–145. Dg.o ’13. New York, NY, USA: ACM, 2013. http://bit.ly/1eNSoXQ.
- In this paper, the authors argue that, “there is a gap between current Open Data initiatives and an important part of the stakeholders of the Open Government Data Ecosystem.” As it stands, “there is an important portion of the population who could benefit from the use of OGD but who cannot do so because they cannot perform the essential operations needed to collect, process, merge, and make sense of the data. The reasons behind these problems are multiple, the most critical one being a fundamental lack of expertise and technical knowledge. We propose the use of visualizations to alleviate this situation. Visualizations provide a simple mechanism to understand and communicate large amounts of data.”
- The authors also describe a prototype of a tool to create visualizations based on OGD with the following capabilities (a brief illustrative sketch follows the list):
- Facilitate visualization creation
- Exploratory mechanisms
- Viralization and sharing
- Repurpose of visualizations
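To make the kind of capability the authors describe concrete, here is a minimal sketch of an OGD visualization workflow in Python. It is not the authors’ prototype: the CSV file name and column names are invented placeholders standing in for a real open government dataset.

```python
# Minimal sketch: turn a (hypothetical) open government CSV into a chart.
# Assumes a file "service_requests.csv" with columns "category" and "count".
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("service_requests.csv")

# Aggregate requests by category, largest first.
totals = (
    df.groupby("category")["count"]
      .sum()
      .sort_values(ascending=False)
)

# Render a horizontal bar chart that non-experts can read at a glance.
ax = totals.plot.barh(figsize=(8, 5))
ax.set_xlabel("Number of requests")
ax.set_title("Service requests by category")
plt.tight_layout()

# Save as an image so the visualization can be shared and repurposed.
plt.savefig("service_requests.png", dpi=150)
```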
Hidalgo, César A. “Graphical Statistical Methods for the Representation of the Human Development Index and Its Components.” United Nations Development Programme Human Development Reports, September 2010. http://bit.ly/166TKur.
- In this paper for the United Nations Development Programme, Hidalgo argues that “graphical statistical methods could be used to help communicate complex data and concepts through universal cognitive channels that are heretofore underused in the development literature.”
- To support his argument, representations are provided that “show how graphical methods can be used to (i) compare changes in the level of development experienced by countries (ii) make it easier to understand how these changes are tied to each one of the components of the Human Development Index (iii) understand the evolution of the distribution of countries according to HDI and its components and (iv) teach and create awareness about human development by using iconographic representations that can be used to graphically narrate the story of countries and regions.”
Stowers, Genie. “The Use of Data Visualization in Government.” IBM Center for The Business of Government, Using Technology Series, 2013. http://bit.ly/1aame9K.
- This report seeks “to help public sector managers understand one of the more important areas of data analysis today — data visualization. Data visualizations are more sophisticated, fuller graphic designs than the traditional spreadsheet charts, usually with more than two variables and, typically, incorporating interactive features.”
- Stowers also offers numerous examples of “visualizations that include geographical and health data, or population and time data, or financial data represented in both absolute and relative terms — and each communicates more than simply the data that underpin it. In addition to these many examples of visualizations, the report discusses the history of this technique, and describes tools that can be used to create visualizations from many different kinds of data sets.”
Reinventing Participation: Civic Agency and the Web Environment
New paper by Peter Dahlgren: “Participation is a key concept in the vocabulary of democracy, and can encompass a variety of dimensions. Moreover, it can be shaped by a range of different factors; my emphasis here is on the significance of the web environment in this regard. I first situate participation against the backdrop of democracy’s contemporary developments, including the onslaught of neoliberalism. From there I offer a set of parameters that can help us grasp participation both conceptually and empirically: trajectory, visibility, voice, and sociality, and relate these to the affordances of the digital media. Thereafter I explore the cultural resources necessary for the facilitation of participation; for this I make use of a six-dimensional model of civic cultures. My discussion focuses on two of the dimensions, practices and identities; I again relate these to the web environment. I conclude with a dilemma that online democratic participation faces, namely what I call the isolation of the solo sphere, yet affirm that we are justified in maintaining a guarded optimism about the future of participation.”
Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation
New paper by Henry Sauermann and Chiara Franzoni: “Crowd-based knowledge production is attracting growing attention from scholars and practitioners. One key premise is that participants who have an intrinsic “interest” in a topic or activity are willing to expend effort at lower pay than in traditional employment relationships. However, it is not clear how strong and sustainable interest is as a source of motivation. We draw on research in psychology to discuss important static and dynamic features of interest and derive a number of research questions regarding interest-based effort in crowd-based projects. Among others, we consider the specific versus general nature of interest, highlight the potential role of matching between projects and individuals, and distinguish the intensity of interest at a point in time from the development and sustainability of interest over time. We then examine users’ participation patterns within and across 7 different crowd science projects that are hosted on a shared platform. Our results provide novel insights into contribution dynamics in crowd science projects. Moreover, given that extrinsic incentives such as pay, status, self-use, or career benefits are largely absent in these particular projects, the data also provide unique insights into the dynamics of interest-based motivation and into its potential as a driver of effort.”
Building tech-powered public services
Could tech-powered public services be an affordable, sustainable solution to some of the challenges of these times of austerity?
This report looks at 20 case studies of digital innovation in public services, using these examples to explore the impact of new and disruptive technologies. It considers how tech-powered public services can be delivered, focusing on the area of health and social care in particular.
We identify three key benefits of increasing the role of technology in public services: saving time, boosting user participation, and encouraging users to take responsibility for their own wellbeing.
In terms of how to successfully implement technological innovations in public services, five particular lessons stood out clearly and consistently:
- User-based iterative design is critical to delivering a product that solves real-world problems. It builds trust and ensures the technology works in the context in which it will be used.
- Public sector expertise is essential in order for a project to make the connections necessary for initial development and early funding.
- Access to seed and bridge funding is necessary to get projects off the ground and allow them to scale up.
- Strong leadership from within the public sector is crucial to overcoming the resistance that practitioners and managers often show initially.
- A strong business case that sets out the quality improvements and cost savings that the innovation can deliver is important to get attention and interest from public services.
The seven headline case studies in this report are:
- Patchwork creates an elegant solution to join up professionals working with troubled families, in an effort to ensure that frontline support is truly coordinated.
- Casserole Club links people who like cooking with their neighbours who are in need of a hot meal, employing the simplest possible technology to grow social connections.
- ADL Smartcare uses a facilitated assessment tool to make professional expertise accessible to staff and service users without years of training, meaning they can carry out assessments together, engaging people in their own care and freeing up occupational therapists to focus where they are needed.
- Mental Elf makes leading research in mental health freely available via social media, providing accessible summaries to practitioners and patients who would not otherwise have the time or ability to read journal articles, which are often hidden behind a paywall.
- Patient Opinion provides an online platform for people to give feedback on the care they have received and for healthcare professionals and providers to respond, disrupting the typical complaints process and empowering patients and their families.
- The Digital Pen and form system has saved the pilot hospital trust three minutes per patient by avoiding the need for manual data entry, freeing up clinical and administrative staff for other tasks.
- Woodland Wiggle allows children in hospital to enter a magical woodland world through a giant TV screen, where they can have fun, socialise, and do their physiotherapy.
Why This Company Is Crowdsourcing, Gamifying The World's Most Difficult Problems
Fast Company: “The biggest consultancy firms–the McKinseys and Janeses of the world–make many millions of dollars predicting the future and writing what-if reports for clients. This model is built on the idea that those companies know best–and that information and ideas should be handed down from on high.
But one consulting house, Wikistrat, is upending the model: Instead of using a stable of in-house analysts, the company crowdsources content and pays the crowd for its time. Wikistrat’s hundreds of analysts–primarily consultants, academics, journalists, and retired military personnel–are compensated for participating in what they call “crowdsourced simulations.” In other words, make money for brainstorming.
According to Joel Zamel, Wikistrat’s founder, approximately 850 experts in various fields rotate in and out of different simulations and project exercises for the company. While participating in a crowdsourced simulation, consultants are paid a flat fee plus performance bonuses based on a gamification engine where experts compete to win extra cash. The company declined to reveal what the fee scale is, but as of 2011 bonus money appears to be in the $10,000 range.
Zamel characterizes the company’s clients as a mix of government agencies worldwide and multinational corporations. The simulations are semi-anonymous for players; consultants don’t know who their paper is being written for or who the end consumer is, but clients know which of Wikistrat’s contestants are participating in the brainstorm exercise. Once an exercise is over, the discussions from the exercise are taken by full-time employees at Wikistrat and converted into proper reports for clients.
“We’ve developed a quite significant crowd network and a lot of functionality into the platform,” Zamel tells Fast Company. “It uses a gamification engine we created that incentivizes analysts by ranking them at different levels for the work they do on the platform. They are immediately rewarded through the engine, and we also track granular changes made in real time. This allows us to track analyst activity and encourages them to put time and energy into Wiki analysis.” Zamel says projects typically run between three and four weeks, with between 50 and 100 analysts working on a project for generally between five and 12 hours per week. Most of the analysts, he says, view this as a side income on top of their regular work at day jobs but some do much more: Zamel cited one PhD candidate in Australia working 70 hours a week on one project instead of 10 to 15 hours.
Much of Wikistrat’s output is related to current events. Although Zamel says the bulk of their reports are written for clients and not available for public consumption, Wikistrat does run frequent public simulations as a way of attracting publicity and recruiting talent for the organization. Their most recent crowdsourced project is called Myanmar Moving Forward and runs from November 25 to December 9. According to Wikistrat, they are asking their “Strategic community to map out Myanmar’s current political risk factor and possible futures (positive, negative, or mixed) for the new democracy in 2015. The simulation is designed to explore the current social, political, economic, and geopolitical threats to stability–i.e. its political risk–and to determine where the country is heading in terms of its social, political, economic, and geopolitical future.”…
Selected Readings on Crowdsourcing Data
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing data was originally published in 2013.
As institutions seek to improve decision-making through data and put public data to use to improve the lives of citizens, new tools and projects are allowing citizens to play a role in both the collection and utilization of data. Participatory sensing and other citizen data collection initiatives, notably in the realm of disaster response, are allowing citizens to crowdsource important data, often using smartphones, that would be either impossible or burdensomely time-consuming for institutions to collect themselves. Civic hacking, often performed in hackathon events, on the other hand, is a growing trend in which governments encourage citizens to transform data from government and other sources into useful tools to benefit the public good.
Selected Reading List (in alphabetical order)
- Chris Baraniuk — Power Politechs — an article on civic hacking focusing on its potential for positive impact with an eye toward debating how the practice should evolve going forward.
- Brandon Barnett, Muki Hansteen Izora and Jose Sia — Civic Hackathon Challenges Design Principles: Making Data Relevant and Useful for Individuals and Communities — a collection of hackathon design principles from researchers at Intel Labs based on their experience at Hack for Change.
- Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling — Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? — an article exploring the potential of Amazon’s Mechanical Turk platform to act as a useful data source for researchers.
- Michael F. Goodchild and J. Alan Glennon — Crowdsourcing Geographic Information for Disaster Response: a Research Frontier — an article discussing the potential and challenges in the growing trend of tapping citizens to provide geographic data for disaster response.
- Andrew Hudson-Smith, Michael Batty, Andrew Crooks and Richard Milton — Mapping for the Masses: Accessing Web 2.0 Through Crowdsourcing — a paper exploring the growing “neogeography” enabled by crowdsourced mapping tools.
- Salil S. Kanhere — Participatory Sensing: Crowdsourcing Data from Mobile Smartphones in Urban Spaces — an in-depth study of the growing participatory sensing paradigm.
Annotated Selected Reading List (in alphabetical order)
Baraniuk, Chris. “Power Politechs.” New Scientist 218, no. 2923 (June 29, 2013): 36–39. http://bit.ly/167ul3J.
- In this article, Baraniuk discusses civic hackers, “an army of volunteer coders who are challenging preconceptions about hacking and changing the way your government operates. In a time of plummeting budgets and efficiency drives, those in power have realised they needn’t always rely on slow-moving, expensive outsourcing and development to improve public services. Instead, they can consider running a hackathon, at which tech-savvy members of the public come together to create apps and other digital tools that promise to enhance the provision of healthcare, schools or policing.”
- While recognizing that “civic hacking has established a pedigree that demonstrates its potential for positive impact,” Baraniuk argues that a “more rigorous debate over how this activity should evolve, or how authorities ought to engage in it” is needed.
Barnett, Brandon, Muki Hansteen Izora, and Jose Sia. “Civic Hackathon Challenges Design Principles: Making Data Relevant and Useful for Individuals and Communities.” Hack for Change, https://bit.ly/2Ge6z09.
- In this paper, researchers from Intel Labs offer “guiding principles to support the efforts of local civic hackathon organizers and participants as they seek to design actionable challenges and build useful solutions that will positively benefit their communities.”
- The authors’ proposed design principles are:
- Focus on the specific needs and concerns of people or institutions in the local community. Solve their problems and challenges by combining different kinds of data.
- Seek out data far and wide (local, municipal, state, institutional, non-profits, companies) that is relevant to the concern or problem you are trying to solve.
- Keep it simple! This can’t be overstated. Focus [on] making data easily understood and useful to those who will use your application or service.
- Enable users to collaborate and form new communities and alliances around data.
Buhrmester, Michael, Tracy Kwang, and Samuel D. Gosling. “Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?” Perspectives on Psychological Science 6, no. 1 (January 1, 2011): 3–5. http://bit.ly/H56lER.
- This article examines the capability of Amazon’s Mechanical Turk to act as a source of data for researchers, in addition to its traditional role as a microtasking platform.
- The authors examine the demographics of MTurkers and find that “(a) MTurk participants are slightly more demographically diverse than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods.”
- The paper concludes that, just as MTurk can be a strong tool for crowdsourcing tasks, data derived from MTurk can be high quality while also being inexpensive and obtained rapidly.
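As a concrete illustration of the recruitment workflow the authors evaluate, the sketch below posts a survey HIT through Amazon’s current MTurk API via boto3. This is not drawn from the paper; the survey URL, reward, and participant counts are invented placeholders.

```python
# Rough sketch: recruiting study participants by posting a HIT via boto3.
# The survey URL, reward, and counts below are illustrative placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="10-minute research survey",
    Description="Answer a short questionnaire for a research study.",
    Keywords="survey, research, questionnaire",
    Reward="0.50",                      # compensation rate, in USD
    MaxAssignments=100,                 # number of participants sought
    AssignmentDurationInSeconds=1800,   # time allotted per worker
    LifetimeInSeconds=86400,            # how long the HIT stays listed
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```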
Goodchild, Michael F., and J. Alan Glennon. “Crowdsourcing Geographic Information for Disaster Response: a Research Frontier.” International Journal of Digital Earth 3, no. 3 (2010): 231–241. http://bit.ly/17MBFPs.
- This article examines issues of data quality in the face of the new phenomenon of geographic information being generated by citizens, in order to assess whether this data can play a role in emergency management.
- The authors argue that “[d]ata quality is a major concern, since volunteered information is asserted and carries none of the assurances that lead to trust in officially created data.”
- Due to the fact that time is crucial during emergencies, the authors argue that, “the risks associated with volunteered information are often outweighed by the benefits of its use.”
- The paper examines four wildfires in Santa Barbara in 2007-2009 to discuss current challenges with volunteered geographical data, and concludes that further research is required to answer how volunteer citizens can be used to provide effective assistance to emergency managers and responders.
Hudson-Smith, Andrew, Michael Batty, Andrew Crooks, and Richard Milton. “Mapping for the Masses: Accessing Web 2.0 Through Crowdsourcing.” Social Science Computer Review 27, no. 4 (November 1, 2009): 524–538. http://bit.ly/1c1eFQb.
- This article describes the way in which “we are harnessing the power of web 2.0 technologies to create new approaches to collecting, mapping, and sharing geocoded data.”
- The authors examine GMapCreator and MapTube, which allow users to perform a range of map-related functions such as creating new maps, archiving existing maps, and sharing or producing bottom-up maps through crowdsourcing.
- They conclude that “these tools are helping to define a neogeography that is essentially ‘mapping for the masses,’” while noting that there are many issues of quality, accuracy, copyright, and trust that will influence the impact of these tools on map-based communication.
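The excerpt does not describe how GMapCreator or MapTube work internally, but the core “mapping for the masses” idea, aggregating user-contributed geocoded points into a shared map layer, can be sketched simply. In the toy Python example below, the coordinates and the grid-cell size are invented.

```python
# Minimal sketch: aggregate crowdsourced geocoded points into grid cells,
# the kind of bottom-up layer a shared mapping service might render.
from collections import Counter

# User-contributed (latitude, longitude) reports; values are invented.
points = [(51.507, -0.128), (51.509, -0.126), (51.501, -0.141), (51.508, -0.127)]

CELL = 0.01  # grid cell size in degrees

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to the grid cell that contains it."""
    return (int(lat // CELL), int(lon // CELL))

density = Counter(cell_of(lat, lon) for lat, lon in points)

# Each cell's count could drive the color intensity of a map tile.
for cell, count in density.most_common():
    print(f"cell {cell}: {count} report(s)")
```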
Kanhere, Salil S. “Participatory Sensing: Crowdsourcing Data from Mobile Smartphones in Urban Spaces.” In Distributed Computing and Internet Technology, edited by Chittaranjan Hota and Pradip K. Srimani, 19–26. Lecture Notes in Computer Science 7753. Springer Berlin Heidelberg, 2013. https://bit.ly/2zX8Szj.
- This paper provides a comprehensive overview of participatory sensing — a “new paradigm for monitoring the urban landscape” in which “ordinary citizens can collect multi-modal data streams from the surrounding environment using their mobile devices and share the same using existing communications infrastructure.”
- In addition to examining a number of innovative applications of participatory sensing, Kanhere outlines the following key research challenges (a brief illustrative sketch follows the list):
- Dealing with incomplete samples
- Inferring user context
- Protecting user privacy
- Evaluating data trustworthiness
- Conserving energy
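A brief illustrative sketch of what a participatory-sensing contribution might look like, together with a naive answer to the data-trustworthiness challenge (a trust-weighted mean). The field names, readings, and trust scores below are invented for illustration and are not from Kanhere’s paper.

```python
# Toy sketch of participatory sensing: citizen devices submit readings,
# and a server aggregates them, weighting each by a per-user trust score.
from dataclasses import dataclass

@dataclass
class Reading:
    user_id: str
    lat: float
    lon: float
    timestamp: float   # Unix time
    noise_db: float    # e.g., ambient noise level sensed by the phone

# Invented sample readings and per-user trust scores in [0, 1].
readings = [
    Reading("alice", 51.507, -0.128, 1700000000.0, 62.0),
    Reading("bob",   51.508, -0.127, 1700000060.0, 71.0),
    Reading("carol", 51.507, -0.129, 1700000120.0, 64.0),
]
trust = {"alice": 0.9, "bob": 0.4, "carol": 0.8}

def weighted_estimate(rs: list[Reading]) -> float:
    """Trust-weighted mean: a naive hedge against unreliable contributors."""
    total_weight = sum(trust[r.user_id] for r in rs)
    return sum(trust[r.user_id] * r.noise_db for r in rs) / total_weight

print(f"Estimated noise level: {weighted_estimate(readings):.1f} dB")
```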