Op-ed by Hilary Pennington and Fay Twersky at the Chronicle of Philanthropy: “We value your feedback as a customer of our services. Would you be willing to answer a few questions at the end of this?”
Airlines, online retailers, medical offices, and restaurants all ask these kinds of questions. They recognize that getting regular customer feedback helps them continuously improve. It doesn’t mean they take every suggestion, or that businesses are handing over the reins of decisions to their customers.
Far from it.
But the consistent avenues for feedback do mean that businesses can listen and consider what they hear, and then make adjustments to respond to customer preferences, thereby improving their outcomes—the bottom line. Often, businesses publicly share the changes they make because customers appreciate responsive businesses.
What if the people meant to benefit from the programs that foundations support, as well as the nonprofits we finance, could contribute their needs, opinions, and experiences to help us improve our current grant-making programs and suggest ideas for the future? Imagine if all of us working for social and environmental change understood better what the intended beneficiaries of our work think and what we could do differently to ensure that we achieve our goals….
As foundation leaders, we believe that lack of openness and input from the people nonprofits serve prevents us from being as effective as we want and need to be. We have been asking ourselves how the foundation world can do better.
How can we learn more about the ways people experience the services and products our grantees provide? Do they find the services useful? Relevant? Are the hours of operation convenient? Is there room for improvement? If we knew the answers, might we also improve the outcomes?
It’s time to make gathering such feedback routine so that all of us, at both foundations and other nonprofits, reliably consider the perspectives and experiences of those we seek to help.
But we know such efforts are costly, in both time and money, and too few experiments have been conducted to figure out the most effective ways to get feedback that matters.
To help elevate the voices of the people our grant money is designed to help, we have joined with five other grant makers to create the Fund for Shared Insight, which will award $5-million to $6-million a year over the next three years.
In addition to Ford and Hewlett, we are joined by the David and Lucile Packard Foundation, the JPB Foundation, Liquidnet, the Rita Allen Foundation, and the W.K. Kellogg Foundation. Shared Insight will award one- to three-year grants to nonprofit organizations that seek new ways to get feedback and use the findings to improve their programs and services, and conduct research on whether those improvements—and the willingness to listen to clients—make a difference. We’ll also finance projects that take other steps to promote more openness among grant makers, nonprofits, and the public.”
Riding the Second Wave of Civic Innovation
Jeremy Goldberg at Governing: “Innovation and entrepreneurship in local government increasingly require mobilizing talent from many sectors and skill sets. Fortunately, the opportunities for nurturing cross-pollination between the public and private sectors have never been greater, thanks in large part to the growing role of organizations such as Bayes Impact, Code for America, Data Science for Social Good and Fuse Corps.
Indeed, there’s reason to believe that we might be entering an even more exciting period of public-private collaboration. As one local-government leader recently put it to me when talking about the critical mass of pro-bono civic-innovation efforts taking place across the San Francisco Bay area, “We’re now riding the second wave of civic pro-bono and civic innovation.”
As an alumnus of Fuse Corps’ executive fellows program, I’m convinced that the opportunities initiated by it and similar organizations are integral to civic innovation. Fuse Corps brings civic entrepreneurs with experience across the public, private and nonprofit sectors to work closely with government employees to help them negotiate project design, facilitation and management hurdles. The organization’s leadership training emphasizes “smallifying” — building innovation capacity by breaking big challenges down into smaller tasks in a shorter timeframe — and making “little bets” — low-risk actions aimed at developing and testing an idea.
Since 2012, I have managed programs and cross-sector networks for the Silicon Valley Talent Partnership. I’ve witnessed a groundswell of civic entrepreneurs from across the region stepping up to participate in discussions and launch rapid-prototyping labs focused on civic innovation.
Cities across the nation are creating new roles and programs to engage these civic start-ups. They’re learning that these projects, and specifically civic pro-bono programs, work best when there is a clear process for designing, building, operationalizing and scaling them. If you’re setting out to create such a program, here’s a short list of best practices:
• Assets: Explore existing internal resources and knowledge to understand the history, departmental relationships and overall functions of the relevant agencies or departments. Develop a compendium of current service/volunteer programs.
• City policies/legal framework: Determine what the city charter, city attorney’s office or employee-relations rules and policies say about procurement, collective bargaining and public-private partnerships.
• Leadership: The support of the city’s top leadership is especially important during the formative stages of a civic-innovation program, so it’s important to understand how the city’s form of government will affect the program. For example, in a “strong mayor” government, definitive decisions on a public-private collaboration may not face the same scrutiny as they would under a “council/mayor” government.
• Cross-departmental collaboration: This is essential. Without the support of city staff across departments, innovation projects are unlikely to take off. Convening a “tiger team” of individuals who are early adopters of such initiatives is an important step. Ultimately, city staffers best understand the needs and demands of their departments or agencies.
• Partners from corporations and philanthropy: Leveraging existing partnerships will help to bring together an advisory group of cross-sector leaders and executives to participate in the early stages of program development.
• Business and member associations: For the Silicon Valley Talent Partnership, the Silicon Valley Leadership Group has been instrumental in advocating for pro-bono volunteerism with the cities of Fremont, San Jose and Santa Clara….”
Bloomberg Philanthropies Announces Major New Investment In City Halls' Capacity To Innovate
Press Release: “Bloomberg Philanthropies today announced a new $45 million investment to boost the capacity of city halls to use innovation to tackle major challenges and improve urban life. The foundation will direct significant funding and other assistance to help dozens of cities adopt the Innovation Delivery model, an approach to generating and implementing new ideas that has been tested and refined over the past three years in partnership with city leaders in Atlanta, Chicago, Louisville, Memphis, and New Orleans. …
Innovation Delivery Teams use best-in-class idea generation techniques with a structured, data-driven approach to delivering results. Operating as an in-house innovation consultancy, they have enabled mayors in the original five cities to produce clear results, such as:
- New Orleans reduced murder in 2013 by 19% compared to the previous year, resulting in the lowest number of murders in New Orleans since 1985.
- Memphis reduced retail vacancy rates by 30% along key commercial corridors.
- Louisville redirected 26% of low-severity 911 medical calls to a doctor’s office or immediate care center instead of requiring an ambulance trip to the emergency room.
- Chicago cut the licensing time for new restaurants by 33%; more than 1,000 new restaurants have opened since the Team began its work.
- Atlanta moved 1,022 chronically homeless individuals into permanent housing, quickly establishing itself as a national leader.
“Innovation Delivery has been an essential part of our effort to bring innovation, efficiency and improved services to our customers,” said Louisville Mayor Greg Fischer. “Philanthropy can play an important role in expanding the capacity of cities to deliver better, bolder results. Bloomberg Philanthropies is one of few foundations investing in this area, and it has truly been a game changer for our city.”
In addition to direct investments in cities, Bloomberg Philanthropies will fund technical assistance, research and evaluation, and partnerships with organizations to further spread the Innovation Delivery approach. The Innovation Delivery Playbook, which details the approach and some experiences of the original cities with which Bloomberg Philanthropies partnered, is available at: www.bloomberg.org …”
How technology is beating corruption
Jim Yong Kim at World Economic Forum: “Good governance is critical for all countries around the world today. When it doesn’t exist, many governments fail to deliver public services effectively, health and education services are often substandard and corruption persists in rich and poor countries alike, choking opportunity and growth. It will be difficult to reduce extreme poverty — let alone end it — without addressing the importance of good governance.
But this is not a hopeless situation. In fact, a new wave of progress on governance suggests we may be on the threshold of a transformational era. Countries are tapping into some of the most powerful forces in the world today to improve services and transparency. These forces include the spread of information technology and its convergence with grassroots movements for transparency, accountability and citizen empowerment. In some places, this convergence is easing the path to better-performing and more accountable governments.
The Philippines is a good example of a country embracing good governance. During a recent visit, I spoke with President Benigno Aquino about his plans to reduce poverty, create jobs, and ensure that economic growth is inclusive. He talked in great detail about how improving governance is a fundamentally important part of the government’s strategy. The government has opened its data and contract information so citizens can see how their tax money is spent. The Foreign Aid Transparency Hub, launched after Typhoon Yolanda, offers a real-time look at pledges made and money delivered for typhoon recovery. Geo-tagging tools monitor assistance for people affected by the typhoon.
Opening budgets to scrutiny
This type of openness is spreading. Now many countries that once withheld information are opening their data and budgets to public scrutiny.
Late last year, my organization, the World Bank Group, established the Open Budgets Portal, a repository for budget data worldwide. So far, 13 countries have posted their entire public spending datasets online — including Togo, the first fragile state to do so.
In 2011, we helped Moldova become the first country in central Europe to launch an open data portal and put its expenditures online. Now the public and media can access more than 700 datasets, and are asking for more.
The original epicenter of the Arab Spring, Tunisia, recently passed a new constitution and is developing the first open budget data portal in the Middle East and North Africa. Tunisia has taken steps towards citizen engagement by developing a citizens’ budget and supporting civil society-led platforms such as Marsoum41, which facilitates freedom-of-information requests, including via mobile.
Using technology to improve services
Countries also are tapping into technology to improve public and private services. Estonia is famous for building an information technology infrastructure that has permitted widespread use of electronic services — everything from filing taxes online to filling doctors’ drug prescriptions.
In La Paz, Bolivia, a citizen feedback system known as OnTrack allows residents of one of the city’s marginalized neighbourhoods to send a text message on their mobile phones to provide feedback, make suggestions or report a problem related to public services.
In Pakistan, government departments in Punjab are using smart phones to collect real-time data on the activities of government field staff — including photos and geo-tags — to help reduce absenteeism and lax performance….”
Sharing Data Is a Form of Corporate Philanthropy
Matt Stempeck in HBR Blog: “Ever since the International Charter on Space and Major Disasters was signed in 1999, satellite companies like DMC International Imaging have had a clear protocol with which to provide valuable imagery to public actors in times of crisis. In a single week this February, DMCii tasked its fleet of satellites on flooding in the United Kingdom, fires in India, floods in Zimbabwe, and snow in South Korea. Official crisis response departments and relevant UN departments can request on-demand access to the visuals captured by these “eyes in the sky” to better assess damage and coordinate relief efforts.
Back on Earth, companies create, collect, and mine data in their day-to-day business. This data has quickly emerged as one of this century’s most vital assets. Public sector and social good organizations may not have access to the same amount, quality, or frequency of data. This imbalance has inspired a new category of corporate giving foreshadowed by the 1999 Space Charter: data philanthropy.
The satellite imagery example is an area of obvious societal value, but data philanthropy holds even stronger potential closer to home, where a wide range of private companies could give back in meaningful ways by contributing data to public actors. Consider two promising contexts for data philanthropy: responsive cities and academic research.
The centralized institutions of the 20th century allowed for the most sophisticated economic and urban planning to date. But in recent decades, the information revolution has helped the private sector speed ahead in data aggregation, analysis, and applications. It’s well known that there’s enormous value in real-time usage of data in the private sector, but there are similarly huge gains to be won in the application of real-time data to mitigate common challenges.
What if sharing economy companies shared their real-time housing, transit, and economic data with city governments or public interest groups? For example, Uber maintains a “God’s Eye view” of every driver on the road in a city.
Imagine combining this single data feed with an entire portfolio of real-time information. An early leader in this space is the City of Chicago’s urban data dashboard, WindyGrid. The dashboard aggregates an ever-growing variety of public datasets to allow for more intelligent urban management.
Over time, we could design responsive cities that react to this data. A responsive city is one where services, infrastructure, and even policies can flexibly respond to the rhythms of its denizens in real-time. Private sector data contributions could greatly accelerate these nascent efforts.
Data philanthropy could similarly benefit academia. Access to data remains an unfortunate barrier to entry for many researchers. The result is that only researchers with access to certain data, such as full-volume social media streams, can analyze and produce knowledge from this compelling information. Twitter, for example, sells access to a range of real-time APIs to marketing platforms, but the price point often exceeds researchers’ budgets. To accelerate the pursuit of knowledge, Twitter has piloted a program called Data Grants offering access to segments of their real-time global trove to select groups of researchers. With this program, academics and other researchers can apply to receive access to relevant bulk data downloads, such as a period of time before and after an election, or a certain geographic area.
Humanitarian response, urban planning, and academia are just three sectors within which private data can be donated to improve the public condition. There are many more possible applications, but few examples to date. For companies looking to expand their corporate social responsibility initiatives, sharing data should be part of the conversation…
Companies considering data philanthropy can take the following steps:
- Inventory the information your company produces, collects, and analyzes. Consider which data would be easy to share and which data will require long-term effort.
- Think who could benefit from this information. Who in your community doesn’t have access to this information?
- Who could be harmed by the release of this data? If the datasets are about people, have those people consented to the release? (i.e. don’t pull a Facebook emotional manipulation experiment).
- Begin conversations with relevant public agencies and nonprofit partners to get a sense of the sort of information they might find valuable and their capacity to work with the formats you might eventually make available.
- If you expect an onslaught of interest, an application process can help qualify partnership opportunities to maximize positive impact relative to time invested in the program.
- Consider how you’ll handle distribution of the data to partners. Even if you don’t have the resources to set up an API, regular releases of bulk data could still provide enormous value to organizations used to relying on less-frequently updated government indices.
- Consider your needs regarding privacy and anonymization. Strip the data of anything remotely resembling personally identifiable information (here are some guidelines).
- If you’re making data available to researchers, plan to allow researchers to publish their results without obstruction. You might also require them to share the findings with the world under Open Access terms….”
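The anonymization step in the checklist above can be sketched in a few lines. This is an illustrative fragment only: the field names (`name`, `email`, `user_id`, and so on) are invented for the example, and a real data release needs far more care than this, including attention to quasi-identifiers and re-identification risk.

```python
import hashlib

# Hypothetical direct identifiers to drop before release (assumed names).
PII_FIELDS = {"name", "email", "phone", "address"}

def pseudonymize(record, salt="rotate-this-salt"):
    """Drop direct identifiers and replace the raw id with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode())
        cleaned["user_id"] = digest.hexdigest()[:12]  # truncated pseudonym
    return cleaned

record = {"user_id": 42, "email": "rider@example.com", "trips": 7}
print(pseudonymize(record))  # email dropped, user_id replaced by a hash
```

Even a simple transformation like this lets partners link records within the released dataset without exposing who the records describe, which is often enough for the research and city-planning uses discussed above.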
Government, Foundations Turn to Cash Prizes to Generate Solutions
Megan O’Neil at the Chronicle of Philanthropy: “Government agencies and philanthropic organizations are increasingly staging competitions as a way to generate interest in solving difficult technological, social, and environmental problems, according to a new report.
“The Craft of Prize Design: Lessons From the Public Sector” found that well-designed competitions backed by cash incentives can help organizations attract new ideas, mobilize action, and stimulate markets.
“Incentive prizes have transformed from an exotic open innovation to a proven innovation strategy for the public, private, and philanthropic sectors,” the report says.
Produced by Deloitte Consulting’s innovation practice, the report was financially supported by Bloomberg Philanthropies and the Case, Joyce, John S. and James L. Knight, Kresge, and Rockefeller foundations.
The federal government has staged more than 350 prize competitions during the past five years to stimulate innovation and crowdsource solutions, according to the report. And philanthropic organizations are also fronting prizes for competitions promoting innovative responses to questions such as how to strengthen communities and encourage sustainable energy consumption.
One example cited by the report is the Talent Dividend Prize, sponsored by CEOs for Cities and the Kresge Foundation, which awards $1-million to the city that most increases its college graduation rate during a four-year period. A second example is the MIT Clean Energy Prize, co-sponsored by the U.S. Department of Energy, which offered a total of $1 million in prize money. Submissions generated $85 million in capital and research grants, according to the report.
A prize-based project should not be adopted when an established approach to solve a problem already exists or if potential participants don’t have the interest or time to work on solving a problem, the report concludes. Instead, prize designers must gauge the capacity of potential participants before announcing a prize, and make sure that it will spur the discovery of new solutions.”
Lawsuit Would Force IRS to Release Nonprofit Tax Forms Digitally
Suzanne Perry at the Chronicle of Philanthropy on how “Open Data Could Shine a Light on Pay and Lobbying”: “Nonprofits that want to find out what their peers are doing can find a wealth of information in the forms the groups must file each year with the Internal Revenue Service—how much they pay their chief executives, how much they spend on fundraising, who is on their boards, where they offer services.
But the way the IRS makes those data available harks back to the digital dark ages, and critics who want to overhaul the system have been shaking up the generally polite nonprofit world with legal challenges, charges of monopoly, and talk of “disrupting” the status quo.
The issue will take center stage in a courtroom this week when a federal district judge in San Francisco is scheduled to consider arguments about whether to approve the IRS’s move to dismiss a lawsuit filed by an open-records group.
The group wants to obtain some specific Form 990s, the informational tax documents filed by nonprofits, in a format that can be read by computers.
In theory, that shouldn’t be difficult since the nine nonprofits involved— including the American National Standards Institute, the New Horizons Foundation, and the International Code Council—submitted the forms electronically. But the IRS converts all 990s, no matter how they were filed, into images, rendering them useless for digital operations like searching multiple forms for information.
That means watchdog groups and those that provide information on charities, like Charity Navigator, GuideStar, and the Urban Institute, have to spend money to manually enter the data they get from the IRS before making it available to the public, even if it has previously been digitized.
The lawsuit against the IRS, filed by Public.Resource.Org, aims to end that practice.
Carl Malamud, who heads the group, is a longtime activist who successfully pushed the Securities and Exchange Commission to post corporate filings free online in the 1990s, among other projects.
He wants to do the same with the IRS, arguing that data should be readily available at no cost about a sector that represents more than 1.5 million tax-exempt organizations and more than $1.5-trillion in revenue.
Why Statistically Significant Studies Aren’t Necessarily Significant
Michael White in PSMagazine on how modern statistics have made it easier than ever for us to fool ourselves: “Scientific results often defy common sense. Sometimes this is because science deals with phenomena that occur on scales we don’t experience directly, like evolution over billions of years or molecules that span billionths of meters. Even when it comes to things that happen on scales we’re familiar with, scientists often draw counter-intuitive conclusions from subtle patterns in the data. Because these patterns are not obvious, researchers rely on statistics to distinguish the signal from the noise. Without the aid of statistics, it would be difficult to convincingly show that smoking causes cancer, that drugged bees can still find their way home, that hurricanes with female names are deadlier than ones with male names, or that some people have a precognitive sense for porn.
OK, very few scientists accept the existence of precognition. But Cornell psychologist Daryl Bem’s widely reported porn precognition study illustrates the thorny relationship between science, statistics, and common sense. While many criticisms were leveled against Bem’s study, in the end it became clear that the study did not suffer from an obvious killer flaw. If it hadn’t dealt with the paranormal, it’s unlikely that Bem’s work would have drawn much criticism. As one psychologist put it after explaining how the study went wrong, “I think Bem’s actually been relatively careful. The thing to remember is that this type of fudging isn’t unusual; to the contrary, it’s rampant–everyone does it. And that’s because it’s very difficult, and often outright impossible, to avoid.”…
That you can lie with statistics is well known; what is less commonly noted is how much scientists still struggle to define proper statistical procedures for handling the noisy data we collect in the real world. In an exchange published last month in the Proceedings of the National Academy of Sciences, statisticians argued over how to address the problem of false positive results, statistically significant findings that on further investigation don’t hold up. Non-reproducible results in science are a growing concern; so do researchers need to change their approach to statistics?
Valen Johnson, at Texas A&M University, argued that the commonly used threshold for statistical significance isn’t as stringent as scientists think it is, and therefore researchers should adopt a tighter threshold to better filter out spurious results. In reply, statisticians Andrew Gelman and Christian Robert argued that tighter thresholds won’t solve the problem; they simply “dodge the essential nature of any such rule, which is that it expresses a tradeoff between the risks of publishing misleading results and of important results being left unpublished.” The acceptable level of statistical significance should vary with the nature of the study. Another team of statisticians raised a similar point, arguing that a more stringent significance threshold would exacerbate the worrying publishing bias against negative results. Ultimately, good statistical decision making “depends on the magnitude of effects, the plausibility of scientific explanations of the mechanism, and the reproducibility of the findings by others.”
However, arguments over statistics usually occur because it is not always obvious how to make good statistical decisions. Some bad decisions are clear. As xkcd’s Randall Munroe illustrated in his comic on the spurious link between green jelly beans and acne, most people understand that if you keep testing slightly different versions of a hypothesis on the same set of data, sooner or later you’re likely to get a statistically significant result just by chance. This kind of statistical malpractice is called fishing or p-hacking, and most scientists know how to avoid it.
But there are more subtle forms of the problem that pervade the scientific literature. In an unpublished paper (PDF), statisticians Andrew Gelman, at Columbia University, and Eric Loken, at Penn State, argue that researchers who deliberately avoid p-hacking still unknowingly engage in a similar practice. The problem is that one scientific hypothesis can be translated into many different statistical hypotheses, with many chances for a spuriously significant result. After looking at their data, researchers decide which statistical hypothesis to test, but that decision is skewed by the data itself.
To see how this might happen, imagine a study designed to test the idea that green jellybeans cause acne. There are many ways the results could come out statistically significant in favor of the researchers’ hypothesis. Green jellybeans could cause acne in men, but not in women, or in women but not men. The results may be statistically significant if the jellybeans you call “green” include Lemon Lime, Kiwi, and Margarita but not Sour Apple. Gelman and Loken write that “researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.” In the end, the researchers may explicitly test only one or a few statistical hypotheses, but their decision-making process has already biased them toward the hypotheses most likely to be supported by their data. The result is “a sort of machine for producing and publicizing random patterns.”
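The arithmetic behind the multiple-comparisons problem is easy to verify. The sketch below (an illustration, not taken from either paper) computes the family-wise error rate: the probability of at least one spuriously “significant” result when testing k independent hypotheses at the conventional alpha = 0.05 threshold, even when no real effect exists.

```python
# Family-wise error rate: chance of at least one false positive when
# testing k independent true-null hypotheses at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {fwer:.2f}")
```

With 20 tests, one per jelly-bean color in Munroe’s comic, the chance of a spurious hit is roughly 64 percent, which is why a single “significant” subgroup result means little on its own.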
Gelman and Loken are not alone in their concern. Last year Daniele Fanelli, at the University of Edinburgh, and John Ioannidis, at Stanford University, reported that many U.S. studies, particularly in the social sciences, may overestimate the effect sizes of their results. “All scientists have to make choices throughout a research project, from formulating the question to submitting results for publication.” These choices can be swayed “consciously or unconsciously, by scientists’ own beliefs, expectations, and wishes, and the most basic scientific desire is that of producing an important research finding.”
What is the solution? Part of the answer is to not let measures of statistical significance override our common sense—not our naïve common sense, but our scientifically-informed common sense…”
Selected Readings on Crowdsourcing Tasks and Peer Production
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, looking to the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.
Selected Reading List (in alphabetical order)
- Yochai Benkler — The Wealth of Networks: How Social Production Transforms Markets and Freedom — a book on the ways commons-based peer-production is transforming modern society.
- Daren C. Brabham — Using Crowdsourcing in Government — a report describing the diverse methods by which crowdsourcing could be better utilized by governments, including through the leveraging of micro-tasking platforms.
- Kevin J. Boudreau, Patrick Gaule, Karim Lakhani, Christoph Reidl, Anita Williams Woolley – From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers – a working paper exploring the conditions, including incentives, that affect online collaboration.
- Chiara Franzoni and Henry Sauermann — Crowd Science: The Organization of Scientific Research in Open Collaborative Projects — a paper describing the potential advantages of deploying crowd science in a variety of contexts.
- Aniket Kittur, Ed H. Chi and Bongwon Suh — Crowdsourcing User Studies with Mechanical Turk — a paper proposing potential benefits beyond simple task completion for microtasking platforms like Mechanical Turk.
- Aniket Kittur, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton — The Future of Crowd Work — a paper describing the promise of crowd work and its evolving effects on the global economy.
- Michael J. Madison — Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo — an in-depth case study of the Galaxy Zoo containing insights regarding the importance of clear objectives and institutional and/or professional collaboration in citizen science initiatives.
- Thomas W. Malone, Robert Laubacher and Chrysanthos Dellarocas — Harnessing Crowds: Mapping the Genome of Collective Intelligence — an article proposing a framework for understanding collective intelligence efforts.
- Geoff Mulgan — True Collective Intelligence? A Sketch of a Possible New Field — a paper proposing theoretical building blocks and an experimental and research agenda for the field of collective intelligence.
- Henry Sauermann and Chiara Franzoni — Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation — a paper exploring the role of interest-based motivation in collaborative knowledge production.
- Catherine E. Schmitt-Sands and Richard J. Smith — Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk — an article describing an experiment using Mechanical Turk to crowdsource public policy research microtasks.
- Clay Shirky — Here Comes Everybody: The Power of Organizing Without Organizations — a book exploring the ways largely unstructured collaboration is remaking practically all sectors of modern life.
- Jonathan Silvertown — A New Dawn for Citizen Science — a paper examining the diverse factors influencing the emerging paradigm of “science by the people.”
- Katarzyna Szkuta, Roberto Pizzicannella and David Osimo — Collaborative approaches to public sector innovation: A scoping study — an article studying success factors and incentives around the collaborative delivery of online public services.
Annotated Selected Reading List (in alphabetical order)
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.
- In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
- In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”
Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.
- In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
- The proposed four-part typology describes the following types of crowdsourcing in government:
- Knowledge Discovery and Management
- Distributed Human Intelligence Tasking
- Broadcast Search
- Peer-Vetted Creative Production
- In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
- Governments and scholars transcribing historical document scans
- Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
- Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
- Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”
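As a concrete illustration of the Distributed Human Intelligence Tasking scenarios above, the sketch below builds the request for a hypothetical translation microtask in the shape expected by Mechanical Turk’s CreateHIT API. The task content (a public-health brochure paragraph) and all parameter values are illustrative assumptions, not drawn from Brabham’s report; the final submission call is shown in a comment because it requires AWS credentials.

```python
# Sketch: posting a document-translation microtask to Amazon Mechanical Turk.
# Parameter names follow the MTurk CreateHIT API; the specific task
# (translating a health brochure into Spanish) is a hypothetical example.

# MTurk expects the task interface as XML; HTMLQuestion embeds a simple form.
QUESTION_XML = """<?xml version="1.0" encoding="UTF-8"?>
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <p>Translate the following brochure paragraph into Spanish:</p>
    <textarea name="translation" rows="6" cols="60"></textarea>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

hit_params = {
    "Title": "Translate a public-health brochure paragraph (EN -> ES)",
    "Description": "Short translation task for a city health department.",
    "Keywords": "translation, spanish, government",
    "Reward": "0.25",                    # USD per completed assignment
    "MaxAssignments": 3,                 # redundant answers enable quality checks
    "AssignmentDurationInSeconds": 600,  # 10 minutes per worker
    "LifetimeInSeconds": 86400,          # HIT stays listed for one day
    "Question": QUESTION_XML,
}

# With AWS credentials configured, this request would be submitted via:
#   import boto3
#   client = boto3.client("mturk", region_name="us-east-1")
#   response = client.create_hit(**hit_params)

print(sorted(k for k in hit_params if k != "Question"))
```

Requesting several assignments per task (`MaxAssignments`) is a common quality-control pattern for subjective microtasks like translation, echoing the caution in Kittur, Chi and Suh’s paper below.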
Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, and Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.
- In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
- The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
- The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. Collaboration within teams, however, was driven more by the active participation of teammates than by any monetary reward.
Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.
- In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
- Based on the study of a number of crowd science projects – including governance-related initiatives like Patients Like Me – the authors identify a number of potential benefits in the following categories:
- Knowledge-related benefits
- Benefits from open participation
- Benefits from the open disclosure of intermediate inputs
- Motivational benefits
- The authors also identify a number of challenges:
- Organizational challenges
- Matching projects and people
- Division of labor and integration of contributions
- Project leadership
- Motivational challenges
- Sustaining contributor involvement
- Supporting a broader set of motivations
- Reconciling conflicting motivations
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.
- In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
- The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”
Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2012. http://bit.ly/1c1GJD3.
- In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
- The authors argue that several key challenges must be met to ensure that crowd work processes evolve and reach their full potential, including:
- Designing workflows
- Assigning tasks
- Supporting hierarchical structure
- Enabling real-time crowd work
- Supporting synchronous collaboration
- Controlling quality
Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.
- This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [Galaxy Zoo] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
- Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”
Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.
- In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
- The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
- Who is performing the task? Why are they doing it?
- What is being accomplished? How is it being done?
- The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
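Malone, Laubacher and Dellarocas’s framework lends itself to a simple data-structure sketch: each system gets an answer to Who, Why, What, and How. The toy classification below is this digest’s own paraphrase; the gene names follow the paper, but the specific example mappings are illustrative approximations rather than quotations from it.

```python
# Toy sketch of the "genome" framework from Harnessing Crowds: each
# collective-intelligence system is characterized by four "genes".
# Example mappings below are illustrative, not taken verbatim from the paper.
from dataclasses import dataclass


@dataclass(frozen=True)
class CIGenome:
    who: str   # e.g. "Crowd" or "Hierarchy"
    why: str   # e.g. "Money", "Love", or "Glory"
    what: str  # e.g. "Create" or "Decide"
    how: str   # e.g. "Collaboration", "Collection", "Voting"


# Hypothetical classifications of a few well-known systems.
examples = {
    "Wikipedia article writing": CIGenome("Crowd", "Love", "Create", "Collaboration"),
    "T-shirt design contest": CIGenome("Crowd", "Money", "Create", "Collection"),
    "Community story ranking": CIGenome("Crowd", "Love", "Decide", "Voting"),
}

for name, genome in examples.items():
    print(f"{name}: who={genome.who}, why={genome.why}, "
          f"what={genome.what}, how={genome.how}")
```

The point of the exercise is the one the authors make: managers can enumerate combinations of gene values to systematically consider which collective-intelligence designs might fit a given problem.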
Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.
- In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
- With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
- He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”
Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.
- In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
- The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” not based on a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
- They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”
Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.
- In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
- Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy program and scan.” The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”
Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.
- In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
- Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”
Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.
- This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
- Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
- “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
- A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
- Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”
Szkuta, Katarzyna, Roberto Pizzicannella, and David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.
- In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
- The authors focus on several types of collaborative governance projects, including:
- Services initiated by government built on government data;
- Services initiated by government and making use of citizens’ data;
- Services initiated by civil society built on open government data;
- Collaborative e-government services; and
- Services run by civil society and based on citizen data.
- The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”
OSTP’s Own Open Government Plan
“The White House Office of Science and Technology Policy (OSTP) today released its 2014 Open Government Plan. The OSTP plan highlights three flagship efforts as well as the team’s ongoing work to embed the open government principles of transparency, participation, and collaboration into its activities.
OSTP advises the President on the effects of science and technology on domestic and international affairs. The work of the office includes policy efforts encompassing science, environment, energy, national security, technology, and innovation. This plan builds off of the 2010 and 2012 Open Government Plans, updating progress on past initiatives and adding new subject areas based on 2014 guidance.
Agencies began releasing biennial Open Government Plans in 2010, with direction from the 2009 Open Government Directive. These plans serve as a roadmap for agency openness efforts, explaining existing practices and announcing new endeavors to be completed over the coming two years. Agencies build these plans in consultation with civil society stakeholders and the general public. Open government is a vital component of the President’s Management Agenda and our overall effort to ensure the government is expanding economic growth and opportunity for all Americans.
OSTP’s 2014 flagship efforts include:
- Access to Scientific Collections: OSTP is leading agencies in developing policies that will improve the management of and access to scientific collections that agencies own or support. Scientific collections are assemblies of physical objects that are valuable for research and education—including drilling cores from the ocean floor and glaciers, seeds, space rocks, cells, mineral samples, fossils, and more. Agency policies will help make scientific collections and information about scientific collections more transparent and accessible in the coming years.
- We the Geeks: We the Geeks Google+ Hangouts feature informal conversations with experts to highlight the future of science, technology, and innovation in the United States. Participants can join the conversation on Twitter by using the hashtag #WeTheGeeks and asking questions of the presenters throughout the hangout.
- “All Hands on Deck” on STEM Education: OSTP is helping lead President Obama’s commitment to an “all-hands-on-deck approach” to providing students with skills they need to excel in science, technology, engineering, and math (STEM). In support of this goal, OSTP is bringing together government, industry, non-profits, philanthropy, and others to expand STEM education engagement and awareness through events like the annual White House Science Fair and the upcoming White House Maker Faire.
OSTP looks forward to implementing the 2014 Open Government Plan over the coming two years to continue building on its strong tradition of transparency, participation, and collaboration—with and for the American people.”