Selected Readings on Crowdsourcing Expertise


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.

Annotated Selected Reading List (in alphabetical order)

Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.

  • This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
  • VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
  • Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own. To find experts, one needs rich data regarding one’s own work and the work of potential related experts.” The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”

Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.

  • This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
  • The authors come to the following conclusions:
    • “profile information is generally less effective than information about resources that they directly create, own or annotate;
    • resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
    • Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
    • Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
    • surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”

Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.

  • Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
  • Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”

Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.

  • In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
  • Dutton argues that expert-based crowdsourcing can be used successfully by government for a number of reasons:
    • Direct communication with a diversity of independent experts
    • The convening power of government
    • Compatibility with open government and open innovation
    • Synergy with citizen consultation
    • Building on experience with paid consultants
    • Speed and urgency
    • Centrality of documents to policy and practice.
  • He also proposes a nine-step process for government to foster bottom-up collaboration networks:
    • Do not reinvent the technology
    • Focus on activities, not the tools
    • Start small, but capable of scaling up
    • Modularize
    • Be open and flexible in finding and going to communities of experts
    • Do not concentrate on one approach to all problems
    • Cultivate the bottom-up development of multiple projects
    • Experience networking and collaborating — be a networked individual
    • Capture, reward, and publicize success.

Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW), 2014. http://bit.ly/1qHBkdf.

  • Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
  • The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
  • Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”

Gubanov, D., N. Korgin, D. Novikov and A. Raikov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7.

  • In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
  • The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
    • Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
    • Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
    • Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
    • Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
    • Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.

Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4.

  • This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
    • Expert-rule and democratic legitimacy
      • Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
    • The role of knowledge and expertise in EU governance
      • Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
    • The European Commission’s use of expertise
      • Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizens’ Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.

King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.

  • In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
  • The three types of open innovation discussed are: opening the idea-creation process, competitions where prizes are offered and designers bid with possible solutions; opening the idea-selection process, ‘approval contests’ in which outsiders vote to determine which entries should be pursued; and opening both idea generation and selection, an option used especially by organizations focused on quickly changing needs.

Long, Chengjiang, Gang Hua and Ashish Kapoor. Active Visual Recognition with Expertise Estimation in Crowdsourcing. 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.

  • This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
  • Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
  • The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.

Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.

  • This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
  • Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”

Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 01 (2013): 104–122. http://bit.ly/1cgf857.

  • In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
  • Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”

Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7 (July 2014). http://bit.ly/1kAKvq7.

  • In this article, the authors discuss DocCHIRP, a mobile application developed in response to the fact that, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
  • The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
  • Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative matters, testing and referrals.
  • The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”

Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” In Foundations of EU Food Law and Policy, eds. A. Alemanno and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.

  • In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
  • Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
  • He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”

Meet the UK start-ups changing the world with open data


Sophie Curtis in The Telegraph: “Data is more accessible today than anyone could have imagined 10 or 20 years ago. From corporate databases to social media and embedded sensors, data is exploding, with total worldwide volume expected to reach 6.6 zettabytes by 2020.
Open data is information that is available for anyone to use, for any purpose, at no cost. For example, the Department for Education publishes open data about the performance of schools in England, so that companies can create league tables and citizens can find the best-performing schools in their catchment area.
Governments worldwide are working to open up more of their data. Since January 2010, more than 18,500 UK government data sets have been released via the data.gov.uk web portal, creating new opportunities for organisations to build innovative digital services.
Businesses are also starting to realise the value of making their non-personal data freely available, with open innovation leading to the creation of products and services that they can benefit from….

Now a range of UK start-ups are working with the Open Data Institute (ODI) to build businesses using open data, and have already unlocked a total of £2.5 million worth of investments and contracts.
Mastodon C joined the ODI start-up programme at its inception in December 2012. Shortly after joining, the company teamed up with Ben Goldacre and Open Healthcare UK, and embarked on a project investigating the use of branded statins over the far cheaper generic versions.
The data analysis identified potential efficiency savings to the NHS of £200 million. The company is now also working with the Technology Strategy Board and Nesta to help them gain better insight into their data.
Another start-up, CarbonCulture, is a community platform designed to help people use resources more efficiently. The company uses high-tech metering to monitor carbon use in the workplace and help clients save money.
Organisations such as 10 Downing Street, Tate, Cardiff Council, the GLA and the UK Parliament are using the company’s digital tools to monitor and improve their energy consumption. CarbonCulture has also helped the Department of Energy and Climate Change reduce its gas use by 10 per cent.
Spend Network’s business is built on collecting the spend statements and tender documents published by government in the UK and Europe and then publishing this data openly so that anyone can use it. The company currently hosts over £1.2 trillion of transactions from the UK and over 1.8 million tenders from across Europe.
One of the company’s major breakthroughs was creating the first national, open spend analysis for central and local government. This was used to uncover a 45 per cent delay in the UK’s tendering process, holding up £22 billion of government funds to the economy.
Meanwhile, TransportAPI uses open data feeds from Traveline, Network Rail and Transport for London to provide nationwide timetables, departure and infrastructure information across all modes of public transport.
TransportAPI currently has 700 developers and organisations signed up to its platform, including individual taxpayers and public sector organisations like universities and local authorities. Travel portals, hyperlocal sites and business analytics are also integrating features, such as the ‘nearest transport’ widget, into their websites.
These are just four examples of how start-ups are using open data to create new digital services. The ODI this week announced seven new open data start-ups joining the programme, covering 3D printed learning materials, helping disabled communities, renewable energy markets, and smart cities….”

Urban Analytics (Updated and Expanded)


As part of an ongoing effort to build a knowledge base for the field of opening governance by organizing and disseminating its learnings, the GovLab Selected Readings series provides an annotated and curated collection of recommended works on key opening governance topics. In this edition, we explore the literature on Urban Analytics. To suggest additional readings on this or any other topic, please email biblio@thegovlab.org.

Data and its uses for Governance

Urban Analytics places better information in the hands of citizens as well as government officials to empower people to make more informed choices. Today, we are able to gather real-time information about traffic, pollution, noise, and environmental and safety conditions by culling data from a range of tools: from the low-cost sensors in mobile phones to more robust monitoring tools installed in our environment. With data collected and combined from the built, natural and human environments, we can develop more robust predictive models and use those models to make policy smarter.

With the computing power to transmit and store the data from these sensors, and the tools to translate raw data into meaningful visualizations, we can identify problems as they happen, design new strategies for city management, and target the application of scarce resources where they are most needed.

Annotated Selected Reading List (in alphabetical order)
Amini, L., E. Bouillet, F. Calabrese, L. Gasparini, and O. Verscheure. “Challenges and Results in City-scale Sensing.” In IEEE Sensors, 59–61, 2011. http://bit.ly/1doodZm.

  • This paper examines “how city requirements map to research challenges in machine learning, optimization, control, visualization, and semantic analysis.”
  • The authors raise several research challenges, including how to extract accurate information when the data is noisy and sparse; how to represent findings from digital pervasive technologies; and how to model the ways people interact with one another and their environment.

Batty, M., K. W. Axhausen, F. Giannotti, A. Pozdnoukhov, A. Bazzani, M. Wachowicz, G. Ouzounis, and Y. Portugali. “Smart Cities of the Future.” The European Physical Journal Special Topics 214, no. 1 (November 1, 2012): 481–518. http://bit.ly/HefbjZ.

  • This paper explores the goals and research challenges involved in the development of smart cities that merge ICT with traditional infrastructures through digital technologies.
  • The authors put forth several research objectives, including: 1) to explore the notion of the city as a laboratory for innovation; 2) to develop technologies that ensure equity, fairness and realize a better quality of city life; and 3) to develop technologies that ensure informed participation and create shared knowledge for democratic city governance.
  • The paper also examines several contemporary smart city initiatives, expected paradigm shifts in the field, benefits, risks and impacts.

Budde, Paul. “Smart Cities of Tomorrow.” In Cities for Smart Environmental and Energy Futures, edited by Stamatina Th Rassia and Panos M. Pardalos, 9–20. Energy Systems. Springer Berlin Heidelberg, 2014. http://bit.ly/17MqPZW.

  • This paper examines the components and strategies involved in the creation of smart cities featuring “cohesive and open telecommunication and software architecture.”
  • In their study of smart cities, the authors examine smart and renewable energy; next-generation networks; smart buildings; smart transport; and smart government.
  • They conclude that for the development of smart cities, information and communication technology (ICT) is needed to build more horizontal collaborative structures, useful data must be analyzed in real time and people and/or machines must be able to make instant decisions related to social and urban life.

Cardone, G., L. Foschini, P. Bellavista, A. Corradi, C. Borcea, M. Talasila, and R. Curtmola. “Fostering Participaction in Smart Cities: a Geo-social Crowdsensing Platform.” IEEE Communications Magazine 51, no. 6 (2013): 112–119. http://bit.ly/17iJ0vZ.

  • This article examines “how and to what extent the power of collective although imprecise intelligence can be employed in smart cities.”
  • To tackle problems of managing the crowdsensing process, this article proposes a “crowdsensing platform with three main original technical aspects: an innovative geo-social model to profile users along different variables, such as time, location, social interaction, service usage, and human activities; a matching algorithm to autonomously choose people to involve in participActions and to quantify the performance of their sensing; and a new Android-based platform to collect sensing data from smart phones, automatically or with user help, and to deliver sensing/actuation tasks to users.”

Chen, Chien-Chu. “The Trend towards ‘Smart Cities.’” International Journal of Automation and Smart Technology. June 1, 2014. http://bit.ly/1jOOaAg.

  • In this study, Chen explores the ambitions, prevalence and outcomes of a variety of smart cities, organized into five categories:
    • Transportation-focused smart cities
    • Energy-focused smart cities
    • Building-focused smart cities
    • Water-resources-focused smart cities
    • Governance-focused smart cities
  • The study finds that the “Asia Pacific region accounts for the largest share of all smart city development plans worldwide, with 51% of the global total. Smart city development plans in the Asia Pacific region tend to be energy-focused smart city initiatives, aimed at easing the pressure on energy resources that will be caused by continuing rapid urbanization in the future.”
  • North America, on the other hand, is generally more geared toward energy-focused smart city development plans. “In North America, there has been a major drive to introduce smart meters and smart electric power grids, integrating the electric power sector with information and communications technology (ICT) and replacing obsolete electric power infrastructure, so as to make cities’ electric power systems more reliable (which in turn can help to boost private-sector investment, stimulate the growth of the ‘green energy’ industry, and create more job opportunities).”
  • Looking to Taiwan as an example, Chen argues that, “Cities in different parts of the world face different problems and challenges when it comes to urban development, making it necessary to utilize technology applications from different fields to solve the unique problems that each individual city has to overcome; the emphasis here is on the development of customized solutions for smart city development.”

Domingo, A., B. Bellalta, M. Palacin, M. Oliver and E. Almirall. “Public Open Sensor Data: Revolutionizing Smart Cities.” Technology and Society Magazine, IEEE 32, No. 4. Winter 2013. http://bit.ly/1iH6ekU.

  • In this article, the authors explore the “enormous amount of information collected by sensor devices” that allows for “the automation of several real-time services to improve city management by using intelligent traffic-light patterns during rush hour, reducing water consumption in parks, or efficiently routing garbage collection trucks throughout the city.”
  • They argue that, “To achieve the goal of sharing and open data to the public, some technical expertise on the part of citizens will be required. A real environment – or platform – will be needed to achieve this goal.” They go on to introduce a variety of “technical challenges and considerations involved in building an Open Sensor Data platform,” including:
    • Scalability
    • Reliability
    • Low latency
    • Standardized formats
    • Standardized connectivity
  • The authors conclude that, despite incredible advancements in urban analytics and open sensing in recent years, “Today, we can only imagine the revolution in Open Data as an introduction to a real-time world mashup with temperature, humidity, CO2 emission, transport, tourism attractions, events, water and gas consumption, politics decisions, emergencies, etc., and all of this interacting with us to help improve the future decisions we make in our public and private lives.”

Harrison, C., B. Eckman, R. Hamilton, P. Hartswick, J. Kalagnanam, J. Paraszczak, and P. Williams. “Foundations for Smarter Cities.” IBM Journal of Research and Development 54, no. 4 (2010): 1–16. http://bit.ly/1iha6CR.

  • This paper describes the information technology (IT) foundation and principles for Smarter Cities.
  • The authors introduce three foundational concepts of smarter cities: instrumented, interconnected and intelligent.
  • They also describe some of the major needs of contemporary cities, and conclude that creating the Smarter City implies capturing and accelerating flows of information both vertically and horizontally.

Hernández-Muñoz, José M., Jesús Bernat Vercher, Luis Muñoz, José A. Galache, Mirko Presser, Luis A. Hernández Gómez, and Jan Pettersson. “Smart Cities at the Forefront of the Future Internet.” In The Future Internet, edited by John Domingue, Alex Galis, Anastasius Gavras, Theodore Zahariadis, Dave Lambert, Frances Cleary, Petros Daras, et al., 447–462. Lecture Notes in Computer Science 6656. Springer Berlin Heidelberg, 2011. http://bit.ly/HhNbMX.

  • This paper explores how the “Internet of Things (IoT) and Internet of Services (IoS), can become building blocks to progress towards a unified urban-scale ICT platform transforming a Smart City into an open innovation platform.”
  • The authors examine the SmartSantander project to argue that, “the [number of] different stakeholders involved in the smart city business is so big that many non-technical constraints must be considered (users, public administrations, vendors, etc.).”
  • The authors also discuss the need for infrastructures at, for instance, the European level to support realistic large-scale experimentally-driven research.

Hoon-Lee, Jung, Marguerite Gong Hancock, Mei-Chih Hu. “Towards an effective framework for building smart cities: Lessons from Seoul and San Francisco.” Technological Forecasting and Social Change. October 3, 2013. http://bit.ly/1rzID5v.

  • In this study, the authors aim to “shed light on the process of building an effective smart city by integrating various practical perspectives with a consideration of smart city characteristics taken from the literature.”
  • They propose a conceptual framework based on case studies from Seoul and San Francisco built around the following dimensions:
    • Urban openness
    • Service innovation
    • Partnerships formation
    • Urban proactiveness
    • Smart city infrastructure integration
    • Smart city governance
  • The authors conclude with a summary of research findings featuring “8 stylized facts”:
    • Movement towards more interactive services engaging citizens;
    • Open data movement facilitates open innovation;
    • Diversifying service development: exploit or explore?
    • How to accelerate adoption: top-down public driven vs. bottom-up market driven partnerships;
    • Advanced intelligent technology supports new value-added smart city services;
    • Smart city services combined with robust incentive systems empower engagement;
    • Multiple device & network accessibility can create network effects for smart city services;
    • Centralized leadership implementing a comprehensive strategy boosts smart initiatives.

Kamel Boulos, Maged N. and Najeeb M. Al-Shorbaji. “On the Internet of Things, smart cities and the WHO Healthy Cities.” International Journal of Health Geographics 13, No. 10. 2014. http://bit.ly/Tkt9GA.

  • In this article, the authors give a “brief overview of the Internet of Things (IoT) for cities, offering examples of IoT-powered 21st century smart cities, including the experience of the Spanish city of Barcelona in implementing its own IoT-driven services to improve the quality of life of its people through measures that promote an eco-friendly, sustainable environment.”
  • The authors argue that one of the central needs for harnessing the power of the IoT and urban analytics is for cities to “involve and engage its stakeholders from a very early stage (city officials at all levels, as well as citizens), and to secure their support by raising awareness and educating them about smart city technologies, the associated benefits, and the likely challenges that will need to be overcome (such as privacy issues).”
  • They conclude that, “The Internet of Things is rapidly gaining a central place as key enabler of the smarter cities of today and the future. Such cities also stand better chances of becoming healthier cities.”

Keller, Sallie Ann, Steven E. Koonin, and Stephanie Shipp. “Big Data and City Living – What Can It Do for Us?” Significance 9, no. 4 (2012): 4–7. http://bit.ly/166W3NP.

  • This article provides a short introduction to Big Data, its importance, and the ways in which it is transforming cities. After an overview of the social benefits of big data in an urban context, the article examines its challenges, such as privacy concerns and institutional barriers.
  • The authors recommend that new approaches to making data available for research are needed that do not violate the privacy of entities included in the datasets. They believe that balancing privacy and accessibility issues will require new government regulations and incentives.

Kitchin, Rob. “The Real-Time City? Big Data and Smart Urbanism.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, July 3, 2013. http://bit.ly/1aamZj2.

  • This paper focuses on “how cities are being instrumented with digital devices and infrastructure that produce ‘big data’ which enable real-time analysis of city life, new modes of technocratic urban governance, and a re-imagining of cities.”
  • The author examines a number of projects that seek to produce a real-time analysis of the city, and provides a critical reflection on the implications of big data and smart urbanism.

Mostashari, A., F. Arnold, M. Maurer, and J. Wade. “Citizens as Sensors: The Cognitive City Paradigm.” In 2011 8th International Conference Expo on Emerging Technologies for a Smarter World (CEWIT), 1–5, 2011. http://bit.ly/1fYe9an.

  • This paper argues that implementing sensor networks is a necessary but not sufficient approach to improving urban living.
  • The authors introduce the concept of the “Cognitive City” – a city that can not only operate more efficiently due to networked architecture, but can also learn to improve its service conditions, by planning, deciding and acting on perceived conditions.
  • Based on this conceptualization of a smart city as a cognitive city, the authors propose “an architectural process approach that allows city decision-makers and service providers to integrate cognition into urban processes.”

Oliver, M., M. Palacin, A. Domingo, and V. Valls. “Sensor Information Fueling Open Data.” In Computer Software and Applications Conference Workshops (COMPSACW), 2012 IEEE 36th Annual, 116–121, 2012. http://bit.ly/HjV4jS.

  • This paper introduces the concept of sensor networks as a key component in the smart cities framework, and shows how real-time data provided by different city network sensors enrich Open Data portals and require a new architecture to deal with massive amounts of continuously flowing information.
  • The authors’ main conclusion is that by providing a framework to build new applications and services using public static and dynamic data that promote innovation, a real-time open sensor network data platform can have several positive effects for citizens.

Perera, Charith, Arkady Zaslavsky, Peter Christen and Dimitrios Georgakopoulos. “Sensing as a service model for smart cities supported by Internet of Things.” Transactions on Emerging Telecommunications Technologies 25, Issue 1. January 2014. http://bit.ly/1qJLDP9.

  • This paper looks into the “enormous pressure towards efficient city management” that has “triggered various Smart City initiatives by both government and private sector businesses to invest in information and communication technologies to find sustainable solutions to the growing issues.”
  • The authors explore the parallel advancement of the Internet of Things (IoT), which “envisions to connect billions of sensors to the Internet and expects to use them for efficient and effective resource management in Smart Cities.”
  • The paper proposes the sensing as a service model “as a solution based on IoT infrastructure.” The sensing as a service model consists of four conceptual layers: “(i) sensors and sensor owners; (ii) sensor publishers (SPs); (iii) extended service providers (ESPs); and (iv) sensor data consumers.” They go on to describe how this model would work in the areas of waste management, smart agriculture and environmental management.
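The four conceptual layers above can be pictured as a toy object model. This is purely an illustrative sketch: the class and method names below are assumptions of ours, since the paper describes an architectural model, not a programming API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Sensor:
    """Layer (i): a sensor together with its owner."""
    owner: str
    read: Callable[[], float]  # returns the current measurement


@dataclass
class SensorPublisher:
    """Layer (ii): an SP registers sensors and makes them discoverable."""
    sensors: List[Sensor] = field(default_factory=list)

    def register(self, sensor: Sensor) -> None:
        self.sensors.append(sensor)


@dataclass
class ExtendedServiceProvider:
    """Layer (iii): an ESP aggregates data from one or more SPs
    into a value-added service (here, a simple average)."""
    publishers: List[SensorPublisher]

    def average(self) -> float:
        readings = [s.read() for p in self.publishers for s in p.sensors]
        return sum(readings) / len(readings)


# Layer (iv): a sensor data consumer queries the ESP.
sp = SensorPublisher()
sp.register(Sensor(owner="city council", read=lambda: 18.0))
sp.register(Sensor(owner="resident", read=lambda: 22.0))
esp = ExtendedServiceProvider(publishers=[sp])
print(esp.average())  # 20.0
```

The point of the sketch is the separation of roles: sensor owners keep their devices, publishers handle discovery, and service providers add value without owning any hardware.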

Privacy, Big Data, and the Public Good: Frameworks for Engagement. Edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum. Cambridge University Press, 2014. http://bit.ly/UoGRca.

  • This book focuses on the legal, practical, and statistical approaches for maximizing the use of massive datasets while minimizing information risk.
  • “Big data” is more than a straightforward change in technology. It poses deep challenges to our traditions of notice and consent as tools for managing privacy. Because our new tools of data science can make it all but impossible to guarantee anonymity in the future, the authors question whether it is possible to truly give informed consent when we cannot, by definition, know what the risks of revealing personal data are, either for individuals or for society as a whole.
  • Based on their experience building large data collections, the authors discuss some of the best practical ways to provide access while protecting confidentiality. What have we learned about effective engineered controls? About effective access policies? About designing data systems that reinforce, rather than counter, access policies? They also explore the business, legal, and technical standards necessary for a new deal on data.
  • Since the data-generating or data-collection process is not necessarily well understood for big data streams, the authors discuss what statistics can tell us about how to make the greatest scientific use of this data. They also explore the shortcomings of current disclosure limitation approaches and whether the extent of privacy loss can be quantified.

Schaffers, Hans, Nicos Komninos, Marc Pallot, Brigitte Trousse, Michael Nilsson, and Alvaro Oliveira. “Smart Cities and the Future Internet: Towards Cooperation Frameworks for Open Innovation.” In The Future Internet, edited by John Domingue, Alex Galis, Anastasius Gavras, Theodore Zahariadis, Dave Lambert, Frances Cleary, Petros Daras, et al., 431–446. Lecture Notes in Computer Science 6656. Springer Berlin Heidelberg, 2011. http://bit.ly/16ytKoT.

  • This paper “explores ‘smart cities’ as environments of open and user-driven innovation for experimenting and validating Future Internet-enabled services.”
  • The authors examine several smart city projects to illustrate the central role of users in defining smart services and the importance of participation. They argue that, “Two different layers of collaboration can be distinguished. The first layer is collaboration within the innovation process. The second layer concerns collaboration at the territorial level, driven by urban and regional development policies aiming at strengthening the urban innovation systems through creating effective conditions for sustainable innovation.”

Suciu, G., A. Vulpe, S. Halunga, O. Fratu, G. Todoran, and V. Suciu. “Smart Cities Built on Resilient Cloud Computing and Secure Internet of Things.” In 2013 19th International Conference on Control Systems and Computer Science (CSCS), 513–518, 2013. http://bit.ly/16wfNgv.

  • This paper proposes “a new platform for using cloud computing capacities for provision and support of ubiquitous connectivity and real-time applications and services for smart cities’ needs.”
  • The authors present a “framework for data procured from highly distributed, heterogeneous, decentralized, real and virtual devices (sensors, actuators, smart devices) that can be automatically managed, analyzed and controlled by distributed cloud-based services.”

Townsend, Anthony. Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. W. W. Norton & Company, 2013.

  • In this book, Townsend illustrates how “cities worldwide are deploying technology to address both the timeless challenges of government and the mounting problems posed by human settlements of previously unimaginable size and complexity.”
  • He also considers “the motivations, aspirations, and shortcomings” of the many stakeholders involved in the development of smart cities, and poses a new civics to guide these efforts.
  • He argues that cities are not made smart by the various, soon-to-be-obsolete technologies built into their infrastructure, but by how citizens use these ever-changing technologies to be “human-centered, inclusive and resilient.”


How Crowdsourced Astrophotographs on the Web Are Revolutionizing Astronomy


Emerging Technology From the arXiv: “Astrophotography is currently undergoing a revolution thanks to the increased availability of high quality digital cameras and the software available to process the pictures after they have been taken.
Since photographs of the night sky are almost always better with long exposures that capture more light, this processing usually involves combining several images of the same part of the sky to produce one with a much longer effective exposure.
That’s all straightforward if you’ve taken the pictures yourself with the same gear under the same circumstances. But astronomers want to do better.
“The astrophotography group on Flickr alone has over 68,000 images,” say Dustin Lang at Carnegie Mellon University in Pittsburgh and a couple of pals. These and other images represent a vast source of untapped data for astronomers.
The problem is that it’s hard to combine images accurately when little is known about how they were taken. Astronomers take great care to use imaging equipment in which the pixels produce a signal that is proportional to the number of photons that hit.
But the same cannot be said of the digital cameras widely used by amateurs. All kinds of processes can end up influencing the final image.
So any algorithm that combines them has to cope with these variations. “We want to do this without having to infer the (possibly highly nonlinear) processing that has been applied to each individual image, each of which has been wrecked in its own loving way by its creator,” say Lang and co.
Now, these guys say they’ve cracked it. They’ve developed a system that automatically combines images from the same part of the sky to increase the effective exposure time of the resulting picture. And they say the combined images can rival those from much larger professional telescopes.
They’ve tested this approach by downloading images of two well-known astrophysical objects: the NGC 5907 Galaxy and the colliding pair of galaxies—Messier 51a and 51b.
For NGC 5907, they ended up with 4,000 images from Flickr, 1,000 from Bing and 100 from Google. They used an online system called astrometry.net that automatically aligns and registers images of the night sky and then combined the images using their new algorithm, which they call Enhance.
The results are impressive. They say that the combined images of NGC 5907 (bottom three images) show some of the same faint features revealed by a single image taken over 11 hours of exposure using a 50 cm telescope (the top left image). All the images reveal the same kind of fine detail, such as a faint stellar stream around the galaxy.
The combined image for the M51 galaxies is just as impressive, taking only 40 minutes to produce on a single processor. It reveals extended structures around both galaxies, which astronomers know to be debris from their gravitational interaction as they collide.
Lang and co say these faint features are hugely important because they allow astronomers to measure the age, mass ratios, and orbital configurations of the galaxies involved. Interestingly, many of these faint features are not visible in any of the input images taken from the Web. They emerge only once images have been combined.
One potential problem with algorithms like this is that they need to perform well as the number of images they combine increases. It’s no good if they grind to a halt as soon as a substantial amount of data becomes available.
On this score, Lang and co say astronomers can rest easy. The performance of their new Enhance algorithm scales linearly with the number of images it has to combine. That means it should perform well on large datasets.
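The monotone-invariance idea behind this kind of combination can be sketched in a few lines. To be clear, this is not Lang and co’s actual Enhance algorithm; it is a minimal illustration of why the unknown, possibly highly nonlinear per-image processing need not be inverted, and why the cost can stay linear in the number of images: any strictly monotonic processing leaves the rank order of pixel values unchanged, so per-image rank maps can be folded into a running sum in a single pass.

```python
import numpy as np


def normalize_ranks(img):
    """Map each pixel to its normalized rank in [0, 1].

    Any strictly monotonic per-image processing preserves the rank
    order of pixel values, so ranks are invariant to that processing.
    """
    flat = img.ravel()
    ranks = np.argsort(np.argsort(flat)).astype(float)
    return (ranks / (flat.size - 1)).reshape(img.shape)


def stack_images(images):
    """Combine pre-aligned images by averaging their rank maps.

    Each image is visited exactly once and folded into a running
    sum, so the cost is linear in the number of images.
    """
    total = None
    count = 0
    for img in images:
        r = normalize_ranks(np.asarray(img, dtype=float))
        total = r if total is None else total + r
        count += 1
    return total / count


base = np.linspace(0.0, 1.0, 100).reshape(10, 10)  # "true" scene, no ties
# Three "amateur" versions, each warped by a different monotone curve:
amateurs = [base ** 2, np.sqrt(base), 1 - np.exp(-3 * base)]
stacked = stack_images(amateurs)
# The stack recovers the rank structure of the unprocessed scene:
assert np.allclose(stacked, normalize_ranks(base))
```

In practice the real system also has to handle alignment (done by astrometry.net), noise, and non-monotone artifacts, which is where the published algorithm goes well beyond this toy version.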
The bottom line is that this kind of crowd-sourced astronomy has the potential to make a big impact, given that the resulting images rival those from large telescopes.
And it could also be used for historical images, say Lang and co. The Harvard Plate Archives, for example, contain half a million images dating back to the 1880s. These were all taken using different emulsions, with different exposures and developed using different processes. So the plates all have different responses to light, making them hard to compare.
That’s exactly the problem that Lang and co have solved for digital images on the Web. So it’s not hard to imagine how they could easily combine the data from the Harvard archives as well….”
Ref: arxiv.org/abs/1406.1528 : Towards building a Crowd-Sourced Sky Map

Government, Foundations Turn to Cash Prizes to Generate Solutions


Megan O’Neil at the Chronicle of Philanthropy: “Government agencies and philanthropic organizations are increasingly staging competitions as a way to generate interest in solving difficult technological, social, and environmental problems, according to a new report.
“The Craft of Prize Design: Lessons From the Public Sector” found that well-designed competitions backed by cash incentives can help organizations attract new ideas, mobilize action, and stimulate markets.
“Incentive prizes have transformed from an exotic open innovation tool to a proven innovation strategy for the public, private, and philanthropic sectors,” the report says.
Produced by Deloitte Consulting’s innovation practice, the report was financially supported by Bloomberg Philanthropies and the Case; Joyce; John S. and James L. Knight; Kresge; and Rockefeller foundations.
The federal government has staged more than 350 prize competitions during the past five years to stimulate innovation and crowdsource solutions, according to the report. And philanthropic organizations are also fronting prizes for competitions promoting innovative responses to questions such as how to strengthen communities and encourage sustainable energy consumption.
One example cited by the report is the Talent Dividend Prize, sponsored by CEOs for Cities and the Kresge Foundation, which awards $1 million to the city that most increases its college graduation rate during a four-year period. A second example is the MIT Clean Energy Prize, co-sponsored by the U.S. Department of Energy, which offered a total of $1 million in prize money. Submissions generated $85 million in capital and research grants, according to the report.
A prize-based project should not be adopted when an established approach to solve a problem already exists or if potential participants don’t have the interest or time to work on solving a problem, the report concludes. Instead, prize designers must gauge the capacity of potential participants before announcing a prize, and make sure that it will spur the discovery of new solutions.”

Lessons in Mass Collaboration


Elizabeth Walker, Ryan Siegel, Todd Khozein, Nick Skytland, Ali Llewellyn, Thea Aldrich, and Michael Brennan in the Stanford Social Innovation Review: “Significant advances in technology in the last two decades have opened possibilities to engage the masses in ways impossible to imagine centuries ago. Beyond coordination, today’s technological capability permits organizations to leverage and focus public interest, talent, and energy through mass collaborative engagement to better understand and solve today’s challenges. And given the rising public awareness of a variety of social, economic, and environmental problems, organizations have seized the opportunity to leverage and lead mass collaborations in the form of hackathons.
Hackathons emerged in the mid-2000s as a popular approach to leverage the expertise of large numbers of individuals to address social issues, often through the creation of online technological solutions. Having led hundreds of mass collaboration initiatives for organizations around the world in diverse cultural contexts, we at SecondMuse offer the following lessons as a starting point for others interested in engaging the masses, as well as challenges others may face.

What Mass Collaboration Looks Like

An early example of a mass collaborative endeavor was Random Hacks of Kindness (RHoK), which formed in 2009. RHoK was initially developed in collaboration with Google, Microsoft, Yahoo!, NASA, the World Bank, and later, HP as a volunteer mobilization effort; it aimed to build technology that would enable communities to respond better to crises such as natural disasters. In 2012, nearly 1,000 participants attended 30 events around the world to address 176 well-defined problems.
In 2013, NASA and SecondMuse led the International Space Apps Challenge, which engaged six US federal agencies, 400 partner institutions, and 9,000 global citizens through a variety of local and global team configurations; it aimed to address 58 different challenges to improve life on Earth and in space. In Athens, Greece, for example, in direct response to the challenge of creating a space-deployable greenhouse, a team developed a modular spinach greenhouse designed to survive the harsh Martian climate. Two months later, 11,000 citizens across 95 events participated in the National Day of Civic Hacking in 83 different US cities, ultimately contributing about 150,000 person-hours and addressing 31 federal and several state and local challenges over a single weekend. One result was Keep Austin Fed from Austin, Texas, which leveraged local data to coordinate food donations for those in need.
Strong interest on the part of institutions and an enthusiastic international community has paved the way for follow-up events in 2014.

Benefits of Mass Collaboration

The benefits of this approach to problem-solving are many, including:

  • Incentivizing the use of government data. As institutions push to make data available to the public, mass collaboration can increase the usefulness of that data by creating products from it, as well as inform and streamline future data collection processes.
  • Increasing transparency. Engaging citizens in the process of addressing public concerns educates them about the work that institutions do and advances efforts to meet public expectations of transparency.
  • Increasing outcome ownership. When people engage in a collaborative process of problem solving, they naturally have a greater stake in the outcome. Put simply, the more people who participate in the process, the greater the sense of community ownership. Also, when spearheading new policies or initiatives, the support of a knowledgeable community can be important to long-term success.
  • Increasing awareness. Engaging the populace in addressing challenges of public concern increases awareness of issues and helps develop an active citizenry. As a result, improved public perception and license to operate bolster governmental and non-governmental efforts to address challenges.
  • Saving money. By providing data and structures to the public, and allowing them to build and iterate on plans and prototypes, mass collaboration gives agencies a chance to harness the power of open innovation with minimal time and funds.
  • Harnessing cognitive surplus. The advent of online tools allowing for distributed collaboration enables citizens to use their free time incrementally toward collective endeavors that benefit local communities and the nation.

Challenges of Mass Collaboration

Although the benefits can be significant, agencies planning to lead mass collaborations should be aware of several challenges:

  • Investing time and effort. A mass collaboration is most effective when it is not a one-time event. The up-front investment in building a collaboration of supporting partner organizations, creating a robust framework for action, developing the necessary tools and defining the challenges, and investing in implementation and scaling of the most promising results all require substantial time to secure long-term commitment and strong relationships.
  • Forging an institution-community relationship. Throughout the course of most engagements, the power dynamic between the organization providing the frameworks and challenges and the groupings of individuals responding to the call to action can shift dramatically as the community incorporates the endeavor into their collective identity. Everyone involved should embrace this as they lay the foundation for self-sustaining mass collaboration communities. Once participants develop a firmly entrenched collective identity and sense of ownership, the convening organization can fully tap into its collective genius, as they can work together based on trust and shared vision. Without community ownership, organizers need to allot more time, energy, and resources to keep their initiative moving forward, and to battle against volunteer fatigue, diminished productivity, and substandard output.
  • Focusing follow-up. Turning a massive infusion of creative ideas, concepts, and prototypes into concrete solutions requires a process of focused follow-up. Identifying and nurturing the most promising seeds to fruition requires time, discrete skills, insight, and—depending on the solutions you scale—support from a variety of external organizations.
  • Understanding ROI. Any resource-intensive endeavor where only a few of numerous resulting products ever see the light of day demands deep consideration of what constitutes a reasonable return on investment. For mass collaborations, this means having an initial understanding of the potential tangible and intangible outcomes, and making a frank assessment of whether those outcomes meet the needs of the collaborators.

Technological developments in the last century have enabled relationships between individuals and institutions to blossom into a rich and complex tapestry…”

User motivation and knowledge sharing in idea crowdsourcing


Miia Kosonen et al., International Journal of Innovation Management: “We investigate how the propensity to trust, intrinsic motivation, and extrinsic motivation drive the intentions of individuals to share knowledge in idea crowdsourcing. Building on motivation theories and Uses & Gratifications (U&G) approach, we conducted a web-based survey within IdeasProject, an open innovation and brainstorming community dedicated to harvesting ideas. Based on a sample of 244 users, our research shows that the key driver of knowledge-sharing intentions is made up of two intrinsic motivations — social benefits and learning benefits. We also found that recognition from the host company affects intention to share knowledge. From the management point of view, the relative importance of social integrative benefits calls for better facilities available for users to be able to help each other in formulating and developing their ideas. Learning and creativity could be inspired by feedback from professionals and experts, while providing insight into technological advances and features dealing with the current tasks.”

Working Together in a Networked Economy


Yochai Benkler at MIT Technology Review on Distributed Innovation and Creativity, Peer Production, and Commons in a Networked Economy: “A decade ago, Wikipedia and open-source software were treated as mere curiosities in business circles. Today, these innovations represent a core challenge to how we have thought about property and contract, organization theory and management, over the past 150 years.
For the first time since before the Industrial Revolution, the most important inputs into some of the most important economic sectors are radically distributed in the population, and the core capital resources necessary for these economic activities have become widely available in wealthy countries and among the wealthier populations of emerging economies. This technological feasibility of social production generally, and peer production — the kind of network collaboration of which Wikipedia is the most prominent example — more specifically, is interacting with the high rate of change and the escalating complexity of global innovation and production systems.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation and models that allow more fluid flows of information, talent, and projects across organizations.
Peer production, the most significant organizational innovation that has emerged from Internet-mediated social practice, is large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have produced on their own. Organizationally, it combines three core characteristics: decentralization of conception and execution of problems and solutions; harnessing of diverse motivations; and separation of governance and management from property and contract.
These characteristics make peer production highly adept at experimentation, innovation, and adaptation in changing and complex environments. If the Web was innovation on a commons-based model — allocating access and use rights in resources without giving anyone exclusive rights to exclude anyone else — Wikipedia’s organizational innovation is in problem-solving.
Wikipedia’s user-generated content model incorporates knowledge that simply cannot be managed well, either because it is tacit knowledge (possessed by individuals but difficult to communicate to others) or because it is spread among too many people to contract for. The user-generated content model also permits organizations to explore a space of highly diverse interests and tastes that was too costly for traditional organizations to explore.
Peer production allows a diverse range of people, regardless of affiliation, to dynamically assess and reassess available resources, projects, and potential collaborators and to self-assign to projects and collaborations. By leaving these elements to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, and its benefits are sufficient that the practice has been widely adopted by firms and even governments.
In a networked information economy, commons-based practices and open innovation provide an evolutionary model typified by repeated experimentation and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom of these two approaches and the manageability and appropriability that many more-traditional organizations seek to preserve. Some firms are using competitions and prizes to diversify the range of people who work on their problems, without ceding contractual control over the project. Many corporations are participating in networks of firms engaging in a range of open collaborative innovation practices with a more manageable set of people, resources, and projects to work with than a fully open-to-the-world project. And the innovation clusters anchored around universities represent an entrepreneurial model at the edge of academia and business, in which academia allows for investment in highly uncertain innovation, and the firms allow for high-risk, high-reward investment models.

To read the full article, click here.

United States federal government use of crowdsourcing grows six-fold since 2011


at E Pluribus Unum: “Citizensourcing and open innovation can work in the public sector, just as crowdsourcing can in the private sector. Around the world, the use of prizes to spur innovation has been booming for years. The United States of America has been significantly scaling up its use of prizes and challenges to solve grand national challenges since January 2011, when President Obama signed an updated version of the America COMPETES Act into law.
According to the third congressionally mandated report released by the Obama administration today (PDF/Text), the number of prizes and challenges conducted under the America COMPETES Act has increased by 50% since 2012, 85% since 2012, and nearly six-fold overall since 2011. 25 different federal agencies offered prizes under COMPETES in fiscal year 2013, with 87 prize competitions in total. The size of the prize purses has grown as well, with 11 challenges over $100,000 in 2013. Nearly half of the prizes conducted in FY 2013 were focused on software, including applications, data visualization tools, and predictive algorithms. Challenge.gov, the award-winning online platform for crowdsourcing national challenges, now has tens of thousands of users who have participated in more than 300 public-sector prize competitions. Beyond the growth in prize numbers and amounts, the Obama administration highlighted four trends in public-sector prize competitions:

  • New models for public engagement and community building during competitions
  • Growth in software and information technology challenges, with nearly 50% of the total prizes in this category
  • More emphasis on sustainability and “creating a post-competition path to success”
  • Increased focus on identifying novel approaches to solving problems

The growth of open innovation in and by the public sector was directly enabled by Congress and the White House, working together for the common good. Congress reauthorized COMPETES in 2010 with an amendment to Section 105 of the act that added a Section 24 on “Prize Competitions,” providing all agencies with the authority to conduct prizes and challenges that only NASA and DARPA had previously enjoyed. The White House Office of Science and Technology Policy (OSTP) has since been guiding implementation and providing guidance on the use of challenges and prizes to promote open government.
“This progress is due to important steps that the Obama Administration has taken to make prizes a standard tool in every agency’s toolbox,” wrote Cristin Dorgelo, assistant director for grand challenges in OSTP, in a WhiteHouse.gov blog post on engaging citizen solvers with prizes:

In his September 2009 Strategy for American Innovation, President Obama called on all Federal agencies to increase their use of prizes to address some of our Nation’s most pressing challenges. Those efforts have expanded since the signing of the America COMPETES Reauthorization Act of 2010, which provided all agencies with expanded authority to pursue ambitious prizes with robust incentives.
To support these ongoing efforts, OSTP and the General Services Administration have trained over 1,200 agency staff through workshops, online resources, and an active community of practice. And NASA’s Center of Excellence for Collaborative Innovation (COECI) provides a full suite of prize implementation services, allowing agencies to experiment with these new methods before standing up their own capabilities.

Sun Microsystems co-founder Bill Joy famously once said that “No matter who you are, most of the smartest people work for someone else.” This rings true, in and outside of government. The idea of governments using prizes like this to inspire technological innovation, however, is not new, nor does it rely on Web services and social media born from the fertile mind of a Silicon Valley entrepreneur. As the introduction to the third White House prize report notes:

“One of the most famous scientific achievements in nautical history was spurred by a grand challenge issued in the 18th Century. The issue of safe, long distance sea travel in the Age of Sail was of such great importance that the British government offered a cash award of £20,000 to anyone who could invent a way of precisely determining a ship’s longitude. The Longitude Prize, enacted by the British Parliament in 1714, would be worth some £30 million today, but even by that measure the value of the marine chronometer invented by British clockmaker John Harrison might be considered a bargain.”

Centuries later, the Internet, World Wide Web, mobile devices and social media offer the best platforms in history for this kind of approach to solving grand challenges and catalyzing civic innovation, helping public officials and businesses find new ways to solve old problems. When a new idea, technology or methodology challenges and improves upon existing processes and systems, it can improve the lives of citizens or the function of the society that they live within….”

Potholes and Big Data: Crowdsourcing Our Way to Better Government


Phil Simon in Wired: “Big Data is transforming many industries and functions within organizations with relatively limited budgets.
Consider Thomas M. Menino, up until recently Boston’s longest-serving mayor. At some point in the past few years, Menino realized that it was no longer 1950. Perhaps he was hobnobbing with some techies from MIT at dinner one night. Whatever his motivation, he decided that there just had to be a better, more cost-effective way to maintain and fix the city’s roads. Maybe smartphones could help the city take a more proactive approach to road maintenance.
To that end, in July 2012, the Mayor’s Office of New Urban Mechanics launched a new project called Street Bump, an app that allows drivers to automatically report road hazards to the city as soon as they hear that unfortunate “thud,” with their smartphones doing all the work.
The app’s developers say their work has already sparked interest from other cities in the U.S., Europe, Africa and elsewhere that are imagining other ways to harness the technology.
Before they even start their trip, drivers using Street Bump fire up the app, then set their smartphones either on the dashboard or in a cup holder. The app takes care of the rest, using the phone’s accelerometer — a motion detector — to sense when a bump is hit. GPS records the location, and the phone transmits it to a remote AWS server.
But that’s not the end of the story. It turned out that the first version of the app reported far too many false positives (i.e., phantom potholes). This finding no doubt gave ammunition to the many naysayers who believe that technology will never be able to do what people can and that things are just fine as they are, thank you. Street Bump 1.0 “collected lots of data but couldn’t differentiate between potholes and other bumps.” After all, your smartphone or cell phone isn’t inert; it moves in the car naturally because the car is moving. And what about the scores of people whose phones “move” because they check their messages at a stoplight?
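The false-positive problem described above is easy to see in a toy model. The sketch below is purely illustrative and assumes a naive threshold-based detector of the kind Street Bump 1.0’s behavior suggests; none of the names, values, or logic come from the actual app.

```python
# Hypothetical sketch of naive accelerometer-based bump detection.
# A "bump" is flagged whenever vertical acceleration spikes past a
# fixed threshold -- which is exactly why ordinary phone handling
# (e.g., picking the phone up at a stoplight) produces false positives.

BUMP_THRESHOLD_G = 3.0  # assumed spike threshold, in units of g


def detect_bumps(samples, threshold=BUMP_THRESHOLD_G):
    """Return timestamps where vertical acceleration exceeds the threshold.

    samples: list of (timestamp, z_accel) pairs, z_accel in g.
    """
    return [t for t, z in samples if abs(z) > threshold]


# A real pothole and a driver grabbing the phone can look alike to
# this detector: both traces spike past the threshold once.
pothole_trace = [(0.0, 1.0), (0.1, 4.2), (0.2, 1.1)]   # hitting a pothole
handling_trace = [(5.0, 1.0), (5.1, 3.5), (5.2, 1.0)]  # phone picked up

print(detect_bumps(pothole_trace))   # flags the pothole spike
print(detect_bumps(handling_trace))  # also flags the handling spike
```

Distinguishing the two signals requires more than a threshold — for example, looking at the shape of the acceleration curve or correlating with GPS speed — which is the kind of refinement the InnoCentive challenge described below was meant to crowdsource.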
To their credit, Menino and his motley crew weren’t entirely discouraged by this initial setback. In their gut, they knew that they were on to something. The idea and potential of the Street Bump app were worth pursuing and refining, even if the first version was a bit lacking. Plus, they have plenty of examples from which to learn. It’s not like the iPad, iPod, and iPhone haven’t evolved considerably over time.
Enter InnoCentive, a Massachusetts-based firm specializing in open innovation and crowdsourcing. The City of Boston contracted InnoCentive to improve Street Bump and reduce the amount of tail chasing. The company accepted the challenge and essentially turned it into a contest, a process sometimes called gamification. InnoCentive offered a network of 400,000 experts a share of $25,000 in prize money donated by Liberty Mutual.
Almost immediately, the ideas to improve Street Bump poured in from unexpected places. This crowd had wisdom. Ultimately, the best suggestions came from:

  • A group of hackers in Somerville, Massachusetts that promotes community education and research
  • The head of the mathematics department at Grand Valley State University in Allendale, Michigan
  • An anonymous software engineer

…Crowdsourcing roadside maintenance isn’t just cool. Increasingly, projects like Street Bump are resulting in substantial savings — and better government.”