Crowdsourcing Parking Lot Occupancy using a Mobile Phone Application


Paper by Erfan Davami and Gita Sukthankar, available at ASE@360: “Participatory sensing is a specialized form of crowdsourcing for mobile devices in which the users act as sensors to report on local environmental conditions. This poster describes the process of prototyping a mobile phone crowdsourcing app for monitoring parking availability on a large university campus. We present a case study of how an agent-based urban model can be used to perform a sensitivity analysis of the comparative susceptibility of different data fusion paradigms to potentially troublesome user behaviors: (1) poor user enrollment, (2) infrequent usage, and (3) a preponderance of untrustworthy users.”
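The poster itself is not reproduced here, but the kind of sensitivity analysis it describes can be illustrated with a toy agent-based sketch: simulate a population of users who enroll sparsely, report infrequently, and are sometimes untrustworthy, then compare how two simple fusion rules (a plain majority vote versus a trust-weighted vote) degrade. Everything below – the parameters, the trust values, and the fusion rules themselves – is a hypothetical illustration, not the authors’ model.

```python
# Toy sketch only: crowdsourced parking reports under unreliable user behavior,
# comparing two hypothetical fusion rules. Parameters are illustrative, not from the poster.
import random

def simulate(n_users=200, enrollment=0.3, report_prob=0.2,
             untrustworthy_frac=0.1, trials=500, seed=1):
    rng = random.Random(seed)
    majority_correct = weighted_correct = 0
    for _ in range(trials):
        truth = rng.random() < 0.5            # ground truth: lot full (True) or not
        votes = []
        for _ in range(int(n_users * enrollment)):
            if rng.random() > report_prob:    # infrequent usage: most users stay silent
                continue
            honest = rng.random() > untrustworthy_frac
            report = truth if honest else (not truth)
            trust = 0.9 if honest else 0.1    # assumes some reputation signal exists
            votes.append((report, trust))
        if not votes:                         # poor enrollment: no data counts as a miss
            continue
        maj = sum(r for r, _ in votes) > len(votes) / 2
        wgt = sum(t for r, t in votes if r) > sum(t for _, t in votes) / 2
        majority_correct += (maj == truth)
        weighted_correct += (wgt == truth)
    return majority_correct / trials, weighted_correct / trials

for frac in (0.1, 0.3, 0.5):
    m, w = simulate(untrustworthy_frac=frac)
    print(f"untrustworthy={frac:.0%}  majority={m:.2f}  trust-weighted={w:.2f}")
```

Sweeping the untrustworthy-user fraction (or the enrollment and reporting rates) in this way is the essence of the sensitivity analysis the poster describes.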

The Emerging Power of Big Data


New America Foundation Report on the Chicago experience of using big data: “Big data is transforming the commercial marketplace but it also has the potential to reshape government affairs and urban development.  In a new report from the Emerging Leaders Program at the Chicago Council on Global Affairs, Lincoln S. Ellis, a founding member of the World Economic Roundtable, and other authors from the Emerging Leaders Program, explore how big data can be used by mega-cities to meet the challenges they face in an age of resource constraints and to improve the lives of their residents.
Using Chicago as a case study, the report examines how the explosion of data availability enables cities to do more with less—to improve government services, fund much needed transportation, provide better education, and guarantee public safety.  Doing more with less is exactly what many cities have had to do over the past five years, as they cut budgets and reduced the number of public employees in the post-financial crisis economy.  It is also what they will need to continue to do in the future.
“Unfortunately, resource constraints are a consistent feature of the post-crisis global landscape,” argues Ellis.  “Happily, so too is the renaissance in productivity gains garnered by our ability to leverage technology and information to achieve our most important public purposes in a smarter and more efficient way.””

Selected Readings on Crowdsourcing Tasks and Peer Production


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, looking to the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.

Annotated Selected Reading List (in alphabetical order)

Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.

  • In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
  • In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”

Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.

  • In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
  • The proposed four-part typology describes the following types of crowdsourcing in government:
    • Knowledge Discovery and Management
    • Distributed Human Intelligence Tasking
    • Broadcast Search
    • Peer-Vetted Creative Production
  • In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
    • Governments and scholars transcribing historical document scans
    • Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
    • Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
    • Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”

Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, and Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.

  • In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
  • The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
  • The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. However, collaboration among teams was driven more by the active participation of teammates than by any monetary reward.

Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.

  • In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
  • Based on the study of a number of crowd science projects – including governance-related initiatives like Patients Like Me – the authors identify a number of potential benefits in the following categories:
    • Knowledge-related benefits
    • Benefits from open participation
    • Benefits from the open disclosure of intermediate inputs
    • Motivational benefits
  • The authors also identify a number of challenges:
    • Organizational challenges
      • Matching projects and people
      • Division of labor and integration of contributions
      • Project leadership
    • Motivational challenges
      • Sustaining contributor involvement
      • Supporting a broader set of motivations
      • Reconciling conflicting motivations

Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.

  • In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
  • The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”

Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2013. http://bit.ly/1c1GJD3.

  • In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
  • The authors argue that a number of key challenges must be met to ensure that crowd work processes evolve and reach their full potential, including:
    • Designing workflows
    • Assigning tasks
    • Supporting hierarchical structure
    • Enabling real-time crowd work
    • Supporting synchronous collaboration
    • Controlling quality

Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.

  • This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [it] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
  • Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”

Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.

  • In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
  • The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
    • Who is performing the task? Why are they doing it?
    • What is being accomplished? How is it being done?
  • The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”

Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.

  • In this paper, Mulgan explores the concept of a collective intelligence, a “much talked about but…very underdeveloped” field.
  • With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
  • He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”

Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.

  • In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
  • The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” not based on a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
  • They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”

Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.

  • In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
  • Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy program and scan.” The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”

Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.

  • In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
  • Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”

Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.

  • This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “Science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
  • Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
    • “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
    • A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
    • Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”

Szkuta, Katarzyna, Roberto Pizzicannella, David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.

  • In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
  • The authors focus on the following types of collaborative governance projects:
    • Services initiated by government built on government data;
    • Services initiated by government and making use of citizens’ data;
    • Services initiated by civil society built on open government data;
    • Collaborative e-government services; and
    • Services run by civil society and based on citizen data.
  • The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”

Data Mining Reddit Posts Reveals How to Ask For a Favor–And Get it


Emerging Technology From the arXiv: “There’s a secret to asking strangers for something and getting it. Now data scientists say they’ve discovered it by studying successful requests on the web

One of the more extraordinary phenomena on the internet is the rise of altruism and of websites designed to enable it. The Random Acts of Pizza section of the Reddit website is a good example.

People leave messages asking for pizza which others fulfil if they find the story compelling. As the site says: “because… who doesn’t like helping out a stranger? The purpose is to have fun, eat pizza and help each other out. Together, we aim to restore faith in humanity, one slice at a time.”

A request might go something like this: “It’s been a long time since my mother and I have had proper food. I’ve been struggling to find any kind of work so I can supplement my mom’s social security… A real pizza would certainly lift our spirits”. Anybody can then fulfil the order, which is then marked on the site with a badge saying “got pizza’d”, often with notes of thanks.

That raises an interesting question. What kinds of requests are most successful in getting a response? Today, we get an answer thanks to the work of Tim Althoff at Stanford University and a couple of pals who lift the veil on the previously murky question of how to ask for a favour—and receive it.

They analysed how various features might be responsible for the success of a post, such as its politeness; its sentiment, whether positive or negative; and its length. The team also looked at the similarity of the requester to the benefactor, and at the status of the requester.

Finally, they examined whether the post contained evidence of need in the form of a narrative that described why the requester needed free pizza.

Althoff and co used a standard machine learning algorithm to comb through all the possible correlations in 70 per cent of the data, which they used for training. Having found various correlations, they tested to see whether this had predictive power in the remaining 30 per cent of the data. In other words, can their algorithm predict whether a previously unseen request will be successful or not?

It turns out that their algorithm makes a successful prediction about 70 per cent of the time. That’s far from perfect but much better than random guessing which is right only half the time.
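The paper’s exact features and classifier are not detailed here, but the general recipe described above – train on 70 per cent of labelled requests, evaluate on the held-out 30 per cent – looks roughly like the sketch below. The feature names, the input file, and the choice of logistic regression are illustrative assumptions, not the authors’ implementation.

```python
# Sketch of the general 70/30 train-test approach described above,
# not the authors' actual model. Feature names and the data file are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume one row per pizza request with a binary "received_pizza" label.
df = pd.read_csv("raop_requests.csv")  # hypothetical file
features = ["politeness", "sentiment", "length", "requester_karma",
            "prior_posts", "mentions_money", "mentions_job",
            "mentions_family", "mentions_student", "mentions_craving"]
X, y = df[features], df["received_pizza"]

# 70% of requests for training, 30% held out for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```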

So what kinds of factors are important? Narrative is a key part of many of the posts, so Althoff and co spent some time categorising the types of stories people use.

They divided the narratives into five types, those that mention: money; a job; being a student; family; and a final group that includes mentions of friends, being drunk, celebrating and so on, which Althoff and co call ‘craving’.

Of these, narratives about jobs, family and money increase the probability of success. Student narratives have no effect while craving narratives significantly reduce the chances of success. In other words, narratives that communicate a need are more successful than those that do not.

 “We find that clearly communicating need through the narrative is essential,” say Althoff and co. And evidence of reciprocation helps too.

(Given these narrative requirements, it is not surprising that longer requests tend to be more successful than short ones.)

So for example, the following request was successful because it clearly demonstrates both need and evidence of reciprocation.

“My gf and I have hit some hard times with her losing her job and then unemployment as well for being physically unable to perform her job due to various hand injuries as a server in a restaurant. She is currently petitioning to have unemployment reinstated due to medical reasons for being unable to perform her job, but until then things are really tight and ANYTHING would help us out right now.

I’ve been both a giver and receiver in RAOP before and would certainly return the favor again when I am able to reciprocate. It took everything we have to pay rent today and some food would go a long ways towards making our next couple of days go by much better with some food.”

By contrast, the ‘craving’ narrative below demonstrates neither and was not successful.

“My friend is coming in town for the weekend and my friends and i are so excited because we haven’t seen him since junior high. we are going to a high school football game then to the dollar theater after and it would be so nice if someone fed us before we embarked :)”

Althoff and co also say that the status of the requester is an important factor too. “We find that Reddit users with higher status overall (higher karma) or higher status within the subcommunity (previous posts) are significantly more likely to receive help,” they say.

But surprisingly, being polite does not help (except by offering thanks).

That’s interesting work. Until now, psychologists have never understood the factors that make requests successful, largely because it has always been difficult to separate the influence of the request from what is being requested.

The key here is that everybody making requests in this study wants the same thing—pizza. In one swoop, this makes the data significantly easier to tease apart.

An important line of future work will be in using this work to understand altruistic behaviour in other communities too…

Ref:  http://arxiv.org/abs/1405.3282 : How to Ask for a Favor: A Case Study on the Success of Altruistic Requests”

The Weird, Wild World of Citizen Science Is Already Here


David Lang in Wired: “Up and down the west coast of North America, countless numbers of starfish are dying. The affliction, known as Sea Star Wasting Syndrome, is already being called the biggest die-off of sea stars in recorded history, and we’re still in the dark as to what’s causing it or what it means. It remains an unsolved scientific mystery. The situation is also shaping up as a case study of an unsung scientific opportunity: the rise of citizen science and exploration.
The sea star condition was first noticed by Laura James, a diver and underwater videographer based in Seattle. As they began washing up on the shore near her home with lesions and missing limbs, she became concerned and notified scientists. Similar sightings started cropping up all along the West Coast, with gruesome descriptions of sea stars that were disintegrating in a matter of days, and populations that had been decimated. As scientists race to understand what’s happening, they’ve enlisted the help of amateurs like James, to move faster. Pete Raimondi’s lab at UC Santa Cruz has created the Sea Star Wasting Map, the baseline for monitoring the issue, to capture the diverse set of contributors and collaborators.
The map is one of many new models of citizen-powered science–a blend of amateurs and professionals, looking and learning together–that are beginning to emerge. Just this week, NASA endorsed a group of amateur astronomers to attempt to rescue a vintage U.S. spacecraft. NASA doesn’t have the money to do it, and this passionate group of citizen scientists can handle it.
Unfortunately, the term “citizen science” is terrible. It’s vague enough to be confusing, yet specific enough to seem exclusive. It’s too bad, too, because the idea of citizen science is thrilling. I love the notion that I can participate in the expanding pool of human knowledge and understanding, even though the extent of my formal science education is a high school biology class. To me, it seemed a genuine invitation to be curious. A safe haven for beginners. A license to explore.
Not everyone shares my romantic perspective, though. If you ask a university researcher, they’re likely to explain citizen science as a way for the public to contribute data points to larger, professionally run studies, like participating in the galaxy-spotting website Zooniverse or taking part in the annual Christmas Bird Count with the Audubon Society. It’s a model on the scientific fringes, using broad participation to fill the gaps in necessary data.
There’s power in this diffuse definition, though, as long as new interpretations are welcomed and encouraged. By inviting and inspiring people to ask their own questions, citizen science can become much more than a way of measuring bird populations. From the drone-wielding conservationists in South Africa to the makeshift biolabs in Brooklyn, a widening circle of participants are wearing the amateur badge with honor. And all of these groups–the makers, the scientists, the hobbyists–are converging to create a new model for discovery. In other words, the maker movement and the traditional science world are on a collision course.
To understand the intersection, it helps to know where each of those groups is coming from….”

New Research Suggests Collaborative Approaches Produce Better Plans


JPER: “In a previous blog post (see, http://goo.gl/pAjyWE), we discussed how many of the most influential articles in the Journal of Planning Education and Research (and in peer publications, like JAPA) over the last two decades have focused on communicative or collaborative planning. Proponents of these approaches, most notably Judith Innes, Patsy Healey, Larry Susskind, and John Forester, developed the idea that the collaborative and communicative structures that planners use impact the quality, legitimacy, and equity of planning outcomes. In practice, communicative theory has led to participatory initiatives, such as those observed in New Orleans (post-Katrina, http://goo.gl/A5J5wk), Chattanooga (to revitalize its downtown and riverfront, http://goo.gl/zlQfKB), and in many other smaller efforts to foment wider involvement in decision making. Collaboration has also impacted regional governance structures, leading to more consensus based forms of decision making, notably CALFED (SF Bay estuary governance, http://goo.gl/EcXx9Q) and transportation planning with Metropolitan Planning Organizations (MPOs)….
Most studies testing the implementation of collaborative planning have been case studies. Previous work by authors such as Innes and Booher has provided valuable qualitative data about collaboration in planning, but few studies have attempted to empirically test the hypothesis that consensus building and participatory practices lead to better planning outcomes.
Robert Deyle (Florida State) and Ryan Wiedenman (Atkins Global) build on previous case study research by surveying officials involved in developing long-range transportation plans in 88 U.S. MPOs about the process and outcomes of those plans. The study tests the hypothesis that collaborative processes provide better outcomes and enhanced long-term relationships in situations where “many stakeholders with different needs” have “shared interests in common resources or challenges” and where “no actor can meet his/her interests without the cooperation of many others” (Innes and Booher 2010, 7; Innes and Gruber 2005, 1985–2186). Current theory posits that consensus-based collaboration requires 1) the presence of all relevant interests, 2) mutual interdependence for goal achievement, and 3) honest and authentic dialog between participants (Innes and Booher 2010, 35–36; Deyle and Wiedenman 2014).

Figure 2: Deyle and Wiedenman (2014)
By surveying planning authorities, the authors found that most of the conditions (see Figure 2, above) posited in the collaborative planning literature had statistically significant impacts on planning outcomes. These included perceptions of plan quality, participant satisfaction with the plan, as well as intangible outcomes that benefit both the participants and their ongoing collaboration efforts. However, having a planning process in which all or most decisions were made by consensus did not improve outcomes.  ….
Deyle, Robert E., and Ryan E. Wiedenman. “Collaborative Planning by Metropolitan Planning Organizations: A Test of Causal Theory.” Journal of Planning Education and Research (2014): 0739456X14527621.
To access this article FREE until May 31 click the following links: Online, http://goo.gl/GU9inf, PDF, http://goo.gl/jehAf1.”

AU: Revitalising and revising the Innovation Showcase


From the Public Sector Innovation Toolkit unit of the Australian Government: “Do you have any case studies of innovative initiatives in the public service?
An important part of the public sector innovation agenda is sharing examples of innovation in practice. That’s why we created the Public Sector Innovation Showcase.
As noted in the APS Innovation Action Plan, “The Public Sector Innovation Showcase will enable government agencies and departments to share and celebrate case studies of innovation, and to consider how they might apply such innovative practices within their own operations to achieve better outcomes.”
The Showcase was a joint initiative with the Department of Finance and has been operating for a number of years now. We thought it was time for some changes and some new examples.
To make the showcase more useful we have incorporated it into this site – you can see the examples here. We are eager to receive more examples from the public sector – from the Commonwealth, state, territory and local governments.
Please get in contact with us if you have an example that might be suitable as a case study of innovation in the public sector. The sort of things we’re after in the case studies are spelled out in our Showcase submission guidance.
We’re seeking examples that demonstrate doing things differently, rather than doing what we do now but slightly better.”

HarassMap: Using Crowdsourced Data to Map Sexual Harassment in Egypt


Chelsea Young in Technology Innovation Management Review: “Through a case study of HarassMap, an advocacy, prevention, and response tool that uses crowdsourced data to map incidents of sexual harassment in Egypt, this article examines the application of crowdsourcing technology to drive innovation in the field of social policy. This article applies a framework that explores the potential, limitations, and future applications of crowdsourcing technology in this sector to reveal how crowdsourcing technology can be applied to overcome cultural and environmental constraints that have traditionally impeded the collection of data. Many of the lessons emerging from this case study hold relevance beyond the field of social policy. Applied to specific problems, this technology can be used to improve the efficiency and effectiveness of mitigation strategies, while facilitating rapid and informed decision making based on “good enough” data. However, this case also illustrates a number of challenges arising from the integrity of crowdsourced data and the potential for ethical conflict when using this data to inform policy formulation.”

Online tools for engaging citizens in the legislative process


Andrew Mandelbaum from OpeningParliament.org: “Around the world, parliaments, governments, civil society organizations, and even individual parliamentarians, are taking measures to make the legislative process more participatory. Some are creating their own tools — often open source, which allows others to use these tools as well — that enable citizens to mark up legislation or share ideas on targeted subjects. Others are purchasing and implementing tools developed by private companies to good effect. In several instances, these initiatives are being conducted through collaboration between public institutions and civil society, while many combine online and offline experiences to help ensure that a broader population of citizens is reached.
The list below provides examples of some of the more prominent efforts to engage citizens in the legislative process.
Brazil
Implementer: Brazilian Chamber of Deputies
Website: http://edemocracia.camara.gov.br/
Additional Information: OpeningParliament.org Case Study
Estonia
Implementer: Estonian President & Civil Society
Project Name: Rahvakogu (The People’s Assembly)
Website: http://www.rahvakogu.ee/
Additional Information: Enhancing Estonia’s Democracy Through Rahvakogu
Finland
Implementer: Finnish Parliament
Project Name: Inventing Finland again! (Keksitään Suomi uudelleen!)
Website: http://www.suomijoukkoistaa.fi/
Additional Information: Democratic Participation and Deliberation in Crowdsourced Legislative Processes: The Case of the Law on Off-Road Traffic in Finland
France
Implementer: SmartGov – Démocratie Ouverte
Website: https://www.parlement-et-citoyens.fr/
Additional Information: OpeningParliament Case Study
Italy
Implementer: Government of Italy
Project Name: Public consultation on constitutional reform
Website: http://www.partecipa.gov.it/
Spain
Implementer: Basque Parliament
Website: http://www.adi.parlamentovasco.euskolegebiltzarra.org/es/
Additional Information: Participation in Parliament
United Kingdom
Implementer: Cabinet Office
Project Name: Open Standards Consultation
Website: http://consultation.cabinetoffice.gov.uk/openstandards/
Additional Information: Open Policy Making, Open Standards Consultation; Final Consultation Documents
United States
Implementer: OpenGov Foundation
Project Name: The Madison Project
Tool: The Madison Project

Open Data is a Civil Right


Yo Yoshida, Founder & CEO, Appallicious in GovTech: “As Americans, we expect a certain standardization of basic services, infrastructure and laws — no matter where we call home. When you live in Seattle and take a business trip to New York, the electric outlet in the hotel you’re staying in is always compatible with your computer charger. When you drive from San Francisco to Los Angeles, I-5 doesn’t all-of-a-sudden turn into a dirt country road because some cities won’t cover maintenance costs. If you take a 10-minute bus ride from Boston to the city of Cambridge, you know the money in your wallet is still considered legal tender.

But what if these expectations of consistency were not always a given? What if cities, counties and states had absolutely zero coordination when it came to basic services? This is what it is like for us in the open data movement. There are so many important applications and products that have been built by civic startups and concerned citizens. However, all too often these efforts are confined to city limits, and unavailable to anyone outside of them. It’s time to start reimagining the way cities function and how local governments operate. There is a wealth of information housed in local governments that should be public by default to help fuel a new wave of civic participation.
Appallicious’ Neighborhood Score provides an overall health and sustainability score, block-by-block, for every neighborhood in the city of San Francisco. It is the first time metrics have been applied to neighborhoods in this way, so we can judge how government allocates our resources and better plan how to move forward. But if you’re thinking about moving to Oakland, just a subway stop away from San Francisco, and want to see the score for a neighborhood, our app can’t help you, because that city has yet to release the data sets we need.
In Contra Costa County, there is the lifesaving PulsePoint app, which notifies smartphone users who are trained in CPR when someone nearby may be in need of help. This is an amazing app—for residents of Contra Costa County. But if someone in neighboring Alameda County needs CPR, the app, unfortunately, is completely useless.
Buildingeye visualizes planning and building permit data to allow users to see what projects are being proposed in their area or city. However, buildingeye is only available in a handful of places, simply because most cities have yet to make permits publicly available. Think about what this could do for the construction sector — an industry that has millions of jobs for Americans. Buildingeye also gives concerned citizens access to public documents like never before, so they can see what might be built in their cities or on their streets.
Along with other open data advocates, I have been going from city-to-city, county-to-county and state-to-state, trying to get governments and departments to open up their massive amounts of valuable data. Each time one city, or one county, agrees to make their data publicly accessible, I can’t help but think it’s only a drop in the bucket. We need to think bigger.
Every government, every agency and every department in the country that has already released this information to the public is a case study that points to the success of open data — and why every public entity should follow their lead. There needs to be a national referendum that instructs that all government data should be open and accessible to the public.
Last May, President Obama issued an executive order requiring that going forward, any data generated by the federal government must be made available to the public in open, machine-readable formats. In the executive order, Obama stated that, “openness in government strengthens our democracy, promotes the delivery of efficient and effective services to the public, and contributes to economic growth.”
If this is truly the case, Washington has an obligation to compel local and state governments to release their data as well. Many have tried to spur this effort. California Lt. Gov. Gavin Newsom created the Citizenville Challenge to speed up adoption on the local level. The U.S. Conference of Mayors has also been vocal in promoting open data efforts. But none of these initiatives could have the same effect as a federal mandate.
What I am proposing is no small feat, and it won’t happen overnight. But there should be a concerted effort by those in the technology industry, specifically civic startups, to call on Congress to draft legislation that would require every city in the country to make their data open, free and machine readable. Passing federal legislation will not be an easy task — but creating a “universal open data” law is possible. It would require little to no funding, and it is completely nonpartisan. It’s actually not a political issue at all; it is, for lack of a better word, an administrative issue.
Often good legislation is blocked because lawmakers and citizens are concerned about project funding. While there should be support to help cities and towns achieve the capability of opening their data, a lot of the time, they don’t need it. In 2009, the city and county of San Francisco opened up its data with zero dollars. Many other cities have done the same. There will be cities and municipalities that will need financial assistance to accomplish this. But it is worth it, and it will not require a significant investment for a substantial return. There are free online open data portals, like ckan, dkan and a new effort from Accela, CivicData.com, to centralize open data efforts.
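To make concrete what “open and machine readable” looks like in practice, here is a minimal sketch of querying a CKAN-based portal through its standard search API. The portal URL and search term are placeholders; substitute the portal of the city or county you are interested in.

```python
# Minimal sketch: querying a CKAN-based open data portal's Action API.
# The portal URL and query below are placeholders.
import requests

PORTAL = "https://demo.ckan.org"   # placeholder CKAN instance
resp = requests.get(f"{PORTAL}/api/3/action/package_search",
                    params={"q": "building permits", "rows": 5},
                    timeout=30)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets")
for pkg in result["results"]:
    print("-", pkg["title"])
    for res in pkg.get("resources", []):      # each resource is a downloadable file
        print("   ", res.get("format"), res.get("url"))
```

Because the API is the same across every CKAN deployment, an app written against one city’s portal works unchanged against another’s – which is exactly the consistency argument the author is making.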
When the UK Government recently announced a £1.5 million investment to support open data initiatives, its Cabinet Office Minister said, “We know that it creates a more accountable, efficient and effective government. Open Data is a raw material for economic growth, supporting the creation of new markets, business and jobs and helping us compete in the global race.”
We should not fall behind these efforts. There is too much at stake for our citizens, not to mention our economy. A recent McKinsey report found that making data open has the potential to create $3 trillion in value worldwide.
Former Speaker Tip O’Neill famously said, “all politics is local.” But we in the civic startup space believe all data is local. Data is reporting potholes in your neighborhood and identifying high crime areas in your communities. It’s seeing how many farmers’ markets there are in your town compared to liquor stores. Data helps predict which areas of a city are most at risk during a heat wave and other natural disasters. A federal open data law would give the raw material needed to create tools to improve the lives of all Americans, not just those who are lucky enough to live in a city that has released this information on its own.
It’s a different way of thinking about how a government operates and the relationship it has with its citizens. Open data gives the public an amazing opportunity to be more involved with governmental decisions. We can increase accountability and transparency, but most importantly we can revolutionize the way local residents communicate and work with their government.
Access to this data is a civil right. If this is truly a government by, of and for the people, then its data needs to be available to all of us. By opening up this wealth of information, we will design a better government that takes advantage of the technology and skills of civic startups and innovative citizens….”