The Power to Give


Press Release: “HTC, a global leader in mobile innovation and design, today unveiled HTC Power To Give™, an initiative that aims to create a supercomputer by harnessing the collective processing power of Android smartphones.
Currently in beta, HTC Power To Give aims to galvanize smartphone owners to unlock their unused processing power in order to help answer some of society’s biggest questions. Today, the fight against cancer, AIDS and Alzheimer’s; the drive to ensure every child has clean water to drink; and even the search for extra-terrestrial life are all being tackled by volunteer computing platforms.
Empowering people to use their Android smartphones to offer tangible support for vital fields of research, including medicine, science and ecology, HTC Power To Give has been developed in partnership with Dr. David Anderson of the University of California, Berkeley. The project will support the world’s largest volunteer computing initiative and tap into the powerful processing capabilities of a global network of smartphones.
Strength in numbers
One million HTC One smartphones, working towards a project via HTC Power To Give, could provide similar processing power to that of one of the world’s top 30 supercomputers (one PetaFLOP). This could drastically shorten the research cycles for organizations that would otherwise have to spend years analyzing the same volume of data, potentially bringing forward important discoveries in vital subjects by weeks, months, years or even decades. For example, one of the programs available at launch is IBM’s World Community Grid, which gives anyone an opportunity to advance science by donating their computer, smartphone or tablet’s unused computing power to humanitarian research. To date, the World Community Grid volunteers have contributed almost 900,000 years’ worth of processing time to cutting-edge research.
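A quick back-of-the-envelope check makes the claim concrete: if one million phones together deliver one PetaFLOP, each handset must contribute roughly one GFLOPS of sustained compute on average (the per-phone figure below is implied by the press release’s numbers, not stated by HTC):

```python
# Back-of-the-envelope check of the "1 million phones ~ 1 PetaFLOP" claim.
phones = 1_000_000
total_flops = 1e15  # 1 PetaFLOP, as claimed

# Implied average sustained throughput per handset, in GFLOPS.
per_phone_gflops = total_flops / phones / 1e9
print(f"Implied sustained throughput per phone: {per_phone_gflops:.1f} GFLOPS")
```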
Limitless future potential
Cher Wang, Chairwoman, HTC commented, “We’ve often used innovation to bring about change in the mobile industry, but this programme takes our vision one step further. With HTC Power To Give, we want to make it possible for anyone to dedicate their unused smartphone processing power to contribute to projects that have the potential to change the world.”
“HTC Power To Give will support the world’s largest volunteer computing initiative, and the impact that this project will have on the world over the years to come is huge. This changes everything,” noted Dr. David Anderson, Inventor of the Shared Computing Initiative BOINC, University of California, Berkeley.
Cher Wang added, “We’ve been discussing the impact that just one million HTC Power To Give-enabled smartphones could make; however, analysts estimate that over 780 million Android phones were shipped in 2013 alone. Imagine the difference we could make to our children’s future if just a fraction of these Android users were able to divert some of their unused processing power to help find answers to the questions that concern us all.”
Opt-in with ease
After downloading the HTC Power To Give app from the Google Play™ store, smartphone owners can select the research programme to which they will divert a proportion of their phone’s processing power. HTC Power To Give will then run while the phone is charging and connected to a WiFi network, enabling people to change the world whilst sitting at their desk or relaxing at home.
The beta version of HTC Power To Give will be available to download from the Google Play store and will initially be compatible with the HTC One family, HTC Butterfly and HTC Butterfly s. HTC plans to make the app more widely available to other Android smartphone owners in the coming six months as the beta trial progresses.”

Crowdsourcing voices to study Parkinson’s disease


TEDMED: “Mathematician Max Little is launching a project that aims to literally give Parkinson’s disease (PD) patients a voice in their own diagnosis and help them monitor their disease progression.
Patients Voice Analysis (PVA) is an open science project that uses phone-based voice recordings and self-reported symptoms, along with software Little designed, to track disease progression. Little, a TEDMED 2013 speaker and TED Fellow, is partnering with the online community PatientsLikeMe, co-founded by TEDMED 2009 speaker James Heywood, and Sage Bionetworks, a non-profit research organization, to conduct the research.
The new project is an extension of Little’s Parkinson’s Voice Initiative, which used speech analysis algorithms to diagnose Parkinson’s from voice recordings with the help of 17,000 volunteers. This time, he seeks not only to detect markers of PD, but also to add information reported by patients using PatientsLikeMe’s Parkinson’s Disease Rating Scale (PDRS), a tool that documents patients’ answers to questions that measure treatment effectiveness and disease progression….
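Voice-based PD research of this kind typically analyzes dysphonia features such as jitter, the cycle-to-cycle variation in pitch period. A simplified illustration follows; the feature definition is a textbook approximation and the data is invented, not the project’s actual pipeline:

```python
# Simplified sketch of one dysphonia feature used in voice-based PD
# research: jitter, the average cycle-to-cycle variation in pitch period.
# Values are invented; real pipelines extract periods from recorded audio.
def jitter_percent(periods_ms):
    """Mean absolute difference between consecutive pitch periods,
    as a percentage of the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods_ms, periods_ms[1:])]
    mean_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods_ms) / len(periods_ms)
    return 100 * mean_diff / mean_period

# A steadier voice (lower jitter) vs. a more variable one:
steady = [8.0, 8.01, 7.99, 8.0, 8.02]
variable = [8.0, 8.4, 7.7, 8.3, 7.8]
print(jitter_percent(steady) < jitter_percent(variable))  # True
```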
As openly shared information, the collected data has potential to help vast numbers of individuals by tapping into collective ingenuity. Little has long argued that for science to progress, researchers need to democratize research and move past jostling for credit. Sage Bionetworks has designed a platform called Synapse to allow data sharing with collaborative version control, an effort led by open data advocate John Wilbanks.
“If you can’t share your data, how can you reproduce your science? One of the big problems we’re facing with this kind of medical research is the data is not open and getting access to it is a nightmare,” Little says.
With the PVA project, “Basically anyone can log on, download the anonymized data and play around with data mining techniques. We don’t really care what people are able to come up with. We just want the most accurate prediction we can get.
“In research, you’re almost always constrained by what you think is the best way to do things. Unless you open it to the community at large, you’ll never know,” he says.”

A city’s intelligence: its citizens


Michel Dumais: “Tick tock! we said. Soon the hundredth. And with the hundred-and-first, new challenges. A smart city, you say? I suspect the usual nod to the three-letter acronym and to an archaic administrative logic. What if we instead appealed to the intelligence of those who know their city best, its citizens?

To solve a problem (and even, on occasion, a non-problem), administrations look to those mammoth software packages that, on paper, are supposed to do everything, that swallow hundreds of millions of dollars, but that end up in the media headlines because even more money has to be poured into them. And that allow IT departments to tighten their grip on an administration even further.

In short, when people talk about smart cities, many see a jackpot. Ah! Still, what was ‘acceptable’ yesterday no longer is today. And building a smart city is, above all, not a technological challenge, far from it.

THE WIRELESS QUESTION
Years ago, simple logic would have had the City stop thinking ‘big telcos’ and quickly strike an alliance with the community organization Île sans fil, thereby encouraging the rapid deployment of wireless technology across the island.

Such an alliance, a model of its kind, does exist.

But not in Montreal. Rather in Quebec City, where the City and the community organization Zap Québec work hand in hand for the greater benefit of Quebec City’s residents and tourists. And in Montreal? We talk, and we talk.

So, a smart city: it is a city that knows, with the help of technology, how to harness its infrastructure and put it at the service of its citizens, while saving money and promoting sustainable development.

It is also a city that knows how to listen to and mobilize its citizens, its activists and its entrepreneurs, while giving them tools (such as usable data) so that they too can create services for their own organizations and for all of the city’s residents. Not to mention that all these tools make decision-making easier for borough mayors and the executive committee.

In short, according to Professor Rudolf Giffinger, a smart city is this: ‘a smart economy, smart mobility, a smart environment, smart inhabitants, a smart way of life and, finally, a smart administration.’

I invite readers to watch LifeApps, an extraordinary TV series available on the AlJazeera website. Its subject: activists and tinkerers, young and not so young, who get involved and create services for their community.”

NatureNet: a model for crowdsourcing the design of citizen science systems


Paper in CSCW Companion ’14, the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing: “NatureNet is a citizen science system designed for collecting bio-diversity data in nature park settings. Park visitors are encouraged to participate in the design of the system in addition to collecting bio-diversity data. Our goal is to increase the motivation to participate in citizen science via crowdsourcing: the hypothesis is that when the crowd plays a role in the design and development of the system, they become stakeholders in the project and work to ensure its success. This paper presents a model for crowdsourcing design and citizen science data collection, and the results from early trials with users that illustrate the potential of this approach.”

Open Data (Updated and Expanded)


As part of an ongoing effort to build a knowledge base for the field of opening governance by organizing and disseminating its learnings, the GovLab Selected Readings series provides an annotated and curated collection of recommended works on key opening governance topics. We start our series with a focus on Open Data. To suggest additional readings on this or any other topic, please email biblio@thegovlab.org.

Data and its uses for Governance

Open data refers to data that is publicly available for anyone to use and which is licensed in a way that allows for its re-use. The common requirement that open data be machine-readable means not only that the data is distributed via the Internet in digitized form, but also that it can be processed by computers through automation, ensuring both wide dissemination and ease of re-use. Much of the focus of the open data advocacy community is on government data and government-supported research data. For example, in May 2013, the US Open Data Policy defined open data as publicly available data structured in a way that enables the data to be fully discoverable and usable by end users, and consistent with a number of principles focused on availability, accessibility and reusability.
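The machine-readability requirement can be made concrete with a minimal sketch: a dataset published as CSV can be consumed directly by a program rather than only read by a human (the dataset contents below are invented for illustration):

```python
import csv
import io

# A hypothetical open dataset in machine-readable CSV form -- structured
# so that a program, not just a human reader, can process it. The figures
# are invented for illustration.
raw = """city,year,potholes_filled
Chicago,2013,12500
Boston,2013,8300
"""

# Automation: parse the rows and aggregate without any manual steps.
rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(int(r["potholes_filled"]) for r in rows)
print(f"Total potholes filled across cities: {total}")
```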

Annotated Selected Reading List (in alphabetical order)
Fox, Mark S. “City Data: Big, Open and Linked.” Working Paper, Enterprise Integration Laboratory (2013). http://bit.ly/1bFr7oL.

  • This paper examines concepts that underlie Big City Data using data from multiple cities as examples. It begins by explaining the concepts of Open, Unified, Linked, and Grounded data, which are central to the Semantic Web. Fox then explores Big Data as an extension of Data Analytics, and provides case examples of good data analytics in cities.
  • Fox concludes that we can develop the tools that will enable anyone to analyze data, both big and small, by adopting the principles of the Semantic Web:
    • Data being openly available over the internet,
    • Data being unifiable using common vocabularies,
    • Data being linkable using International Resource Identifiers,
    • Data being accessible using a common data structure, namely triples,
    • Data being semantically grounded using Ontologies.
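Fox’s principles can be illustrated with a minimal sketch of data expressed as subject–predicate–object triples whose terms are IRIs (all identifiers and values below are invented for illustration, not drawn from any real dataset):

```python
# Minimal sketch of Linked Data as (subject, predicate, object) triples.
# All IRIs and values are invented for illustration.
triples = [
    ("http://example.org/city/Toronto",
     "http://example.org/vocab/population",
     2615060),
    ("http://example.org/city/Toronto",
     "http://example.org/vocab/partOf",
     "http://example.org/region/Ontario"),
]

# Because every triple shares one common structure, generic queries work
# across any datasets that use the same vocabulary.
def objects(subject, predicate):
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/city/Toronto",
              "http://example.org/vocab/population"))
```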

Foulonneau, Muriel, Sébastien Martin, and Slim Turki. “How Open Data Are Turned into Services?” In Exploring Services Science, edited by Mehdi Snene and Michel Leonard, 31–39. Lecture Notes in Business Information Processing 169. Springer International Publishing, 2014. http://bit.ly/1fltUmR.

  • In this chapter, the authors argue that, considering the important role the development of new services plays as a motivation for open data policies, the impact of new services created through open data should play a more central role in evaluating the success of open data initiatives.
  • Foulonneau, Martin and Turki argue that the following metrics should be considered when evaluating the success of open data initiatives: “the usage, audience, and uniqueness of the services, according to the changes it has entailed in the public institutions that have open their data…the business opportunity it has created, the citizen perception of the city…the modification to particular markets it has entailed…the sustainability of the services created, or even the new dialog created with citizens.”

Goldstein, Brett, and Lauren Dyson, eds. Beyond Transparency: Open Data and the Future of Civic Innovation. 1st ed. (Code for America Press, 2013). http://bit.ly/15OAxgF.

  • This “cross-disciplinary survey of the open data landscape” features stories from practitioners in the open data space — including Michael Flowers, Brett Goldstein, Emer Coleman and many others — discussing what they’ve accomplished with open civic data. The book “seeks to move beyond the rhetoric of transparency for transparency’s sake and towards action and problem solving.”
  • The book’s editors seek to accomplish the following objectives:
    • Help local governments learn how to start an open data program
    • Spark discussion on where open data will go next
    • Help community members outside of government better engage with the process of governance
    • Lend a voice to many aspects of the open data community.
  • The book is broken into five sections: Opening Government Data, Building on Open Data, Understanding Open Data, Driving Decisions with Data and Looking Ahead.

Granickas, Karolis. “Understanding the Impact of Releasing and Re-using Open Government Data.” European Public Sector Information Platform, ePSIplatform Topic Report No. 2013/08, (2013). http://bit.ly/GU0Nx4.

  • This paper examines the impact of open government data by exploring the latest research in the field, with an eye toward creating an enabling environment for open data, as well as identifying the benefits of open government data and its political, social, and economic impacts.
  • Granickas concludes that to maximize the benefits of open government data: a) further research is required that structures and measures the potential benefits of open government data; b) “government should pay more attention to creating feedback mechanisms between policy implementers, data providers and data-re-users”; c) “finding a balance between demand and supply requires mechanisms of shaping demand from data re-users and also demonstration of data inventory that governments possess”; and lastly, d) “open data policies require regular monitoring.”

Gurin, Joel. Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, (New York: McGraw-Hill, 2014). http://amzn.to/1flubWR.

  • In this book, GovLab Senior Advisor and Open Data 500 director Joel Gurin explores the broad realized and potential benefit of Open Data, and how, “unlike Big Data, Open Data is transparent, accessible, and reusable in ways that give it the power to transform business, government, and society.”
  • The book provides “an essential guide to understanding all kinds of open databases – business, government, science, technology, retail, social media, and more – and using those resources to your best advantage.”
  • In particular, Gurin discusses a number of applications of Open Data with very real potential benefits:
    • “Hot Startups: turn government data into profitable ventures;
    • Savvy Marketing: understanding how reputational data drives your brand;
    • Data-Driven Investing: apply new tools for business analysis;
    • Consumer Information: connect with your customers using smart disclosure;
    • Green Business: use data to bet on sustainable companies;
    • Fast R&D: turn the online world into your research lab;
    • New Opportunities: explore open fields for new businesses.”

Jetzek, Thorhildur, Michel Avital, and Niels Bjørn-Andersen. “Generating Value from Open Government Data.” Thirty Fourth International Conference on Information Systems, 5. General IS Topics 2013. http://bit.ly/1gCbQqL.

  • In this paper, the authors “developed a conceptual model portraying how data as a resource can be transformed to value.”
  • Jetzek, Avital and Bjørn-Andersen propose a conceptual model featuring four Enabling Factors (openness, resource governance, capabilities and technical connectivity) acting on four Value Generating Mechanisms (efficiency, innovation, transparency and participation) leading to the impacts of Economic and Social Value.
  • The authors argue that their research supports that “all four of the identified mechanisms positively influence value, reflected in the level of education, health and wellbeing, as well as the monetary value of GDP and environmental factors.”

Kassen, Maxat. “A promising phenomenon of open data: A case study of the Chicago open data project.” Government Information Quarterly (2013). http://bit.ly/1ewIZnk.

  • This paper uses the Chicago open data project to explore the “empowering potential of an open data phenomenon at the local level as a platform useful for promotion of civic engagement projects and provide a framework for future research and hypothesis testing.”
  • Kassen argues that “open data-driven projects offer a new platform for proactive civic engagement”: by harnessing “the collective wisdom of the local communities, their knowledge and visions of the local challenges, governments could react and meet citizens’ needs in a more productive and cost-efficient manner.”
  • The paper highlights the need for independent IT developers to network in order for this trend to continue, as well as the importance of the private sector in “overall diffusion of the open data concept.”

Keen, Justin, Radu Calinescu, Richard Paige, and John Rooksby. “Big data + politics = open data: The case of health care data in England.” Policy and Internet 5, no. 2 (2013): 228–243. http://bit.ly/1i231WS.

  • This paper examines the assumptions regarding open datasets, technological infrastructure and access, using healthcare systems as a case study.
  • The authors specifically address two assumptions surrounding enthusiasm about Big Data in healthcare: the assumption that healthcare datasets and technological infrastructure are up to task, and the assumption of access to this data from outside the healthcare system.
  • By using the National Health Service in England as an example, the authors identify data, technology, and information governance challenges. They argue that “public acceptability of third party access to detailed health care datasets is, at best, unclear,” and that the prospects of Open Data depend on Open Data policies, which are inherently political, and the government’s assertion of property rights over large datasets. Thus, they argue that the “success or failure of Open Data in the NHS may turn on the question of trust in institutions.”

Kulk, Stefan and Bastiaan Van Loenen. “Brave New Open Data World?” International Journal of Spatial Data Infrastructures Research, May 14, 2012. http://bit.ly/15OAUYR.

  • This paper examines the evolving tension between the open data movement and the European Union’s privacy regulations, especially the Data Protection Directive.
  • The authors argue, “Technological developments and the increasing amount of publicly available data are…blurring the lines between non-personal and personal data. Open data may not seem to be personal data on first glance especially when it is anonymised or aggregated. However, it may become personal by combining it with other publicly available data or when it is de-anonymised.”

Kundra, Vivek. “Digital Fuel of the 21st Century: Innovation through Open Data and the Network Effect.” Joan Shorenstein Center on the Press, Politics and Public Policy, Harvard College: Discussion Paper Series, January 2012, http://hvrd.me/1fIwsjR.

  • In this paper, Vivek Kundra, the first Chief Information Officer of the United States, explores the growing impact of open data, and argues that, “In the information economy, data is power and we face a choice between democratizing it and holding on to it for an asymmetrical advantage.”
  • Kundra offers four specific recommendations to maximize the impact of open data: Citizens and NGOs must demand open data in order to fight government corruption, improve accountability and government services; Governments must enact legislation to change the default setting of government to open, transparent and participatory; The press must harness the power of the network effect through strategic partnerships and crowdsourcing to cut costs and provide better insights; and Venture capitalists should invest in startups focused on building companies based on public sector data.

Noveck, Beth Simone and Daniel L. Goroff. “Information for Impact: Liberating Nonprofit Sector Data.” The Aspen Institute Philanthropy & Social Innovation Publication Number 13-004. 2013. http://bit.ly/WDxd7p.

  • This report is focused on “obtaining better, more usable data about the nonprofit sector,” which encompasses, as of 2010, “1.5 million tax-exempt organizations in the United States with $1.51 trillion in revenues.”
  • Toward that goal, the authors propose liberating data from the Form 990, an Internal Revenue Service form that “gathers and publishes a large amount of information about tax-exempt organizations,” including information related to “governance, investments, and other factors not directly related to an organization’s tax calculations or qualifications for tax exemption.”
  • The authors recommend a two-track strategy: “Pursuing the longer-term goal of legislation that would mandate electronic filing to create open 990 data, and pursuing a shorter-term strategy of developing a third party platform that can demonstrate benefits more immediately.”

Robinson, David G., Harlan Yu, William P. Zeller, and Edward W. Felten, “Government Data and the Invisible Hand.” Yale Journal of Law & Technology 11 (2009), http://bit.ly/1c2aDLr.

  • This paper proposes a new approach to online government data that “leverages both the American tradition of entrepreneurial self-reliance and the remarkable low-cost flexibility of contemporary digital technology.”
  • “In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that ‘exposes’ the underlying data.”

Ubaldi, Barbara. “Open Government Data: Towards Empirical Analysis of Open Government Data Initiatives.” OECD Working Papers on Public Governance. Paris: Organisation for Economic Co-operation and Development, May 27, 2013. http://bit.ly/15OB6qP.

  • This working paper from the OECD seeks to provide an all-encompassing look at the principles, concepts and criteria framing open government data (OGD) initiatives.
  • Ubaldi also analyzes a variety of challenges to implementing OGD initiatives, including policy, technical, economic and financial, organizational, cultural and legal impediments.
  • The paper also proposes a methodological framework for evaluating OGD Initiatives in OECD countries, with the intention of eventually “developing a common set of metrics to consistently assess impact and value creation within and across countries.”

Worthy, Ben. “David Cameron’s Transparency Revolution? The Impact of Open Data in the UK.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, November 29, 2013. http://bit.ly/NIrN6y.

  • In this article, Worthy “examines the impact of the UK Government’s Transparency agenda, focusing on the publication of spending data at local government level. It measures the democratic impact in terms of creating transparency and accountability, public participation and everyday information.”
  • Worthy’s findings, based on surveys of local authorities, interviews and FOI requests, are disappointing. He finds that:
    • Open spending data has led to some government accountability, but largely from those already monitoring government, not regular citizens.
    • Open Data has not led to increased participation, “as it lacks the narrative or accountability instruments to fully bring such effects.”
    • It has also not “created a new stream of information to underpin citizen choice, though new innovations offer this possibility. The evidence points to third party innovations as the key.”
  • Despite these initial findings, “Interviewees pointed out that Open Data holds tremendous opportunities for policy-making. Joined up data could significantly alter how policy is made and resources targeted. From small scale issues e.g. saving money through prescriptions to targeting homelessness or health resources, it can have a transformative impact.”

Zuiderwijk, Anneke, Marijn Janssen, Sunil Choenni, Ronald Meijer and Roexsana Sheikh Alibaks. “Socio-technical Impediments of Open Data.” Electronic Journal of e-Government 10, no. 2 (2012). http://bit.ly/17yf4pM.

  • This paper seeks to identify the socio-technical impediments to open data impact, based on a review of the open data literature as well as workshops and interviews.
  • The authors discovered 118 impediments across ten categories: 1) availability and access; 2) find-ability; 3) usability; 4) understandability; 5) quality; 6) linking and combining data; 7) comparability and compatibility; 8) metadata; 9) interaction with the data provider; and 10) opening and uploading.

Zuiderwijk, Anneke and Marijn Janssen. “Open Data Policies, Their Implementation and Impact: A Framework for Comparison.” Government Information Quarterly 31, no. 1 (January 2014): 17–29. http://bit.ly/1bQVmYT.

  • In this article, Zuiderwijk and Janssen argue that “currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research [has been] done on the issues that are covered by open data policies, their intent and actual impact.”
  • With this evaluation deficit in mind, the authors propose a new framework for comparing open data policies at different government levels using the following elements for comparison:
    • Policy environment and context, such as level of government organization and policy objectives;
    • Policy content (input), such as types of data not publicized and technical standards;
    • Performance indicators (output), such as benefits and risks of publicized data; and
    • Public values (impact).

To stay current on recent writings and developments on Open Data, please subscribe to the GovLab Digest.
Did we miss anything? Please submit reading recommendations to biblio@thegovlab.org or in the comments below.

House Bill Raises Questions about Crowdsourcing


Anne Bowser for Commons Lab (Wilson Center): “A new bill in the House is raising some key questions about how crowdsourcing is understood by scientists, government agencies, policymakers and the public at large.
Robin Bravender’s recent article in Environment & Energy Daily, “House Republicans Push Crowdsourcing on Agency Science” (subscription required), neatly summarizes the debate around H.R. 4012, a bill introduced in the House of Representatives earlier this month. The House Science, Space and Technology Committee held a hearing on the bill earlier this week, and the bill could see a committee vote as early as next month.
Dubbed the “Secret Science Reform Act of 2014,” the bill prohibits the Environmental Protection Agency (EPA) from “proposing, finalizing, or disseminating regulations or assessments based upon science that is not transparent or reproducible.” If the bill is passed, EPA would be unable to base assessments or regulations on any information not “publicly available in a manner that is sufficient for independent analysis.” This would include all information published in scholarly journals based on data that is not available as open source.
The bill is based on the premise that forcing EPA to use public data will inspire greater transparency by allowing “the crowd” to conduct independent analysis and interpretation. While the premise of involving the public in scientific research is sound, this characterization of crowdsourcing as a process separate from traditional scientific research is deeply problematic.
This division contrasts with the current practices of many researchers, who use crowdsourcing to directly involve the public in scientific processes. Galaxy Zoo, for example, enlists digital volunteers (called “citizen scientists”) to help classify more than 40 million photographs of galaxies taken by the Hubble Telescope. These crowdsourced morphological classifications are a powerful form of data analysis, a key aspect of the scientific process. Galaxy Zoo then publishes a catalogue of these classifications as an open-source data set, and the data reduction techniques and measures of confidence and bias for the data catalogue are documented in MNRAS, a peer-reviewed journal. A recent Google Scholar search shows that the data set published in MNRAS has been cited a remarkable 121 times.
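The idea of turning many volunteer classifications into a single label with a confidence measure can be sketched with a simple majority-vote rule; this is a deliberately simplified assumption for illustration, not Galaxy Zoo’s published pipeline, which also weights volunteers and corrects for bias:

```python
from collections import Counter

# Toy votes from five volunteers on one galaxy image (invented data).
votes = ["spiral", "spiral", "elliptical", "spiral", "spiral"]

def aggregate(votes):
    """Return the majority label and the fraction of volunteers who agreed,
    a crude stand-in for the confidence measures real pipelines compute."""
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)

label, confidence = aggregate(votes)
print(label, confidence)  # spiral 0.8
```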
As this example illustrates, crowdsourcing is often embedded in the process of formal scientific research. But prior to being published in a scientific journal, the crowdsourced contributions of non-professional volunteers are subject to the scrutiny of professional scientists through the rigorous process of peer review. Because peer review was designed as an institution to ensure objective and unbiased research, peer-reviewed scientific work is widely accepted as the best source of information for any science-based decision.
Separating crowdsourcing from the peer review process, as this legislation intends, means that there will be no formal filters in place to ensure that open data will not be abused by special interests. Ellen Silbergeld, a professor at Johns Hopkins University who testified at the hearing this week, made exactly this point when she pointed to data manipulation commonly practiced by tobacco lobbyists in the United States.
Contributing to scientific research is one goal of crowdsourcing for science. Involving the public in scientific research also increases volunteer understanding of research topics and the scientific process and inspires heightened community engagement. These goals are supported by President Obama’s Second Open Government National Action Plan, which calls for “increased crowdsourcing and citizen science programs” to support “an informed and active citizenry.” But H.R. 4012 does not support these goals. Rather, this legislation could further degrade the public’s understanding of science by encouraging the public to distrust professional scientists rather than collaborate with them.
Crowdsourcing benefits organizations by bringing in the unique expertise held by external volunteers, which can augment and enhance the traditional scientific process. In return, these volunteers benefit from exposure to new and exciting processes, such as scientific research. This mutually beneficial relationship depends on collaboration, not opposition. Supporting an antagonistic relationship between science-based organizations like the EPA and members of “the crowd” will benefit neither institutions, nor volunteers, nor the country as a whole.
 

Innovating for the Global South: New book offers practical insights


Press Release: “Despite the vast wealth generated in the last half century, in today’s world inequality is worsening and poverty is becoming increasingly chronic. Hundreds of millions of people continue to live on less than $2 per day and lack basic human necessities such as nutritious food, shelter, clean water, primary health care, and education.
Innovating for the Global South: Towards an Inclusive Innovation Agenda, the latest book from Rotman-UTP Publishing and the first volume in the Munk Series on Global Affairs, offers fresh solutions for reducing poverty in the developing world. Highlighting the multidisciplinary expertise of the University of Toronto’s Global Innovation Group, leading experts from the fields of engineering, public health, medicine, management, and public policy examine the causes and consequences of endemic poverty and the challenges of mitigating its effects from the perspective of the world’s poorest of the poor.
Can we imagine ways to generate solar energy to run essential medical equipment in the countryside? Can we adapt information and communication technologies to provide up-to-the-minute agricultural market prices for remote farming villages? How do we create more inclusive innovation processes to hear the voices of those living in urban slums? Is it possible to reinvent a low-cost toilet that operates beyond the water and electricity grids?
Motivated by the imperatives of developing, delivering, and harnessing innovation in the developing world, Innovating for the Global South is essential reading for managers, practitioners, and scholars of development, business, and policy.
“As we see it, Innovating for the Global South is fundamentally about innovating scalable solutions that mitigate the effects of poverty and underdevelopment in the Global South. It is not about inventing some new gizmo for some untapped market in the developing world,” say Profs. Dilip Soman and Joseph Wong of UofT, two of the editors of the volume.
The book is edited by, and features contributions from, three leading UofT thinkers, each tackling innovation in the Global South from a different academic perspective.

  • Dilip Soman is Corus Chair in Communication Strategy and a professor of Marketing at the Rotman School of Management.
  • Janice Gross Stein is the Belzberg Professor of Conflict Management in the Department of Political Science and Director of the Munk School of Global Affairs.
  • Joseph Wong is Ralph and Roz Halbert Professor of Innovation at the Munk School of Global Affairs and Canada Research Chair in Democratization, Health, and Development in the Department of Political Science.

The chapters in the book address the process of innovation from a number of vantage points.
Introduction: Rethinking Innovation – Joseph Wong and Dilip Soman
Chapter 1: Poverty, Invisibility, and Innovation – Joseph Wong
Chapter 2: Behaviourally Informed Innovation – Dilip Soman
Chapter 3: Appropriate Technologies for the Global South – Yu-Ling Cheng (University of Toronto, Chemical Engineering and Applied Chemistry) and Beverly Bradley (University of Toronto, Centre for Global Engineering)
Chapter 4: Globalization of Biopharmaceutical Innovation: Implications for Poor-Market Diseases – Rahim Rezaie (University of Toronto, Munk School of Global Affairs, Research Fellow)
Chapter 5: Embedded Innovation in Health – Anita M. McGahan (University of Toronto, Rotman School of Management, Associate Dean of Research), Rahim Rezaie and Donald C. Cole (University of Toronto, Dalla Lana School of Public Health)
Chapter 6: Scaling Up: The Case of Nutritional Interventions in the Global South – Ashley Aimone Phillips (Registered Dietitian), Nandita Perumal (University of Toronto, Doctoral Fellow, Epidemiology), Carmen Ho (University of Toronto, Doctoral Fellow, Political Science), and Stanley Zlotkin (University of Toronto and the Hospital for Sick Children, Paediatrics, Public Health Sciences and Nutritional Sciences)
Chapter 7: New Models for Financing Innovative Technologies and Entrepreneurial Organizations in the Global South – Murray R. Metcalfe (University of Toronto, Centre for Global Engineering, Globalization)
Chapter 8: Innovation and Foreign Policy – Janice Gross Stein
Conclusion: Inclusive Innovation – Will Mitchell (University of Toronto, Rotman School of Management, Strategic Management), Anita M. McGahan”

Selected Readings on Behavioral Economics: Nudges


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of behavioral economics was originally published in 2014.

The 2008 publication of Richard Thaler and Cass Sunstein’s Nudge ushered in a new era of behavioral economics, and since then, policymakers in the United States and elsewhere have been applying behavioral economics to the field of public policy. Like Smart Disclosure, behavioral economics can be used in the public sector to improve the decisionmaking ability of citizens without relying on regulatory interventions. In the six years since Nudge was published, the United Kingdom has created the Behavioural Insights Team (also known as the Nudge Unit), a cross-ministerial organization that uses behavioral economics to inform public policy, and the White House has recently followed suit by convening its own team of behavioral economists to apply behavioral insights in the United States. Policymakers have been using behavioral insights to design more effective interventions in fields such as long-term unemployment, roadway safety, retirement plan enrollment, and organ donation registration, to name some noteworthy examples. The literature of this nascent field provides a look at the growing optimism in the potential of applying behavioral insights in the public sector to improve people’s lives.

Selected Reading List (in alphabetical order)

  • John Beshears, James Choi, David Laibson and Brigitte C. Madrian – The Importance of Default Options for Retirement Savings Outcomes: Evidence from the United States – a paper examining the role default options play in encouraging intelligent retirement savings decisionmaking.
  • Cabinet Office and Behavioural Insights Team, United Kingdom – Applying Behavioural Insights to Health – a paper outlining some examples of behavioral economics being applied to the healthcare landscape using cost-efficient interventions.
  • Matthew Darling, Saugato Datta and Sendhil Mullainathan – The Nature of the BEast: What Behavioral Economics Is Not – a paper discussing why control and behavioral economics are not as closely aligned as some think, reiterating the fact that the field is politically agnostic.
  • Antoinette Schoar and Saugato Datta – The Power of Heuristics – a paper exploring the concept of “heuristics,” or rules of thumb, which can provide helpful guidelines for pushing people toward making “reasonably good” decisions without a full understanding of the complexity of a situation.
  • Richard H. Thaler and Cass R. Sunstein – Nudge: Improving Decisions About Health, Wealth, and Happiness – an influential book describing the many ways in which the principles of behavioral economics can be and have been used to influence choices and behavior through the development of new “choice architectures.” 
  • U.K. Parliament Science and Technology Committee – Behaviour Change – an exploration of the government’s attempts to influence the behaviour of its citizens through nudges, with a focus on comparing the effectiveness of nudges to that of regulatory interventions.

Annotated Selected Reading List (in alphabetical order)

Beshears, John, James Choi, David Laibson and Brigitte C. Madrian. “The Importance of Default Options for Retirement Savings Outcomes: Evidence from the United States.” In Jeffrey R. Brown, Jeffrey B. Liebman and David A. Wise, editors, Social Security Policy in a Changing Environment, Cambridge: National Bureau of Economic Research, 2009. http://bit.ly/LFmC5s.

  • This paper examines the role default options play in pushing people toward making intelligent decisions regarding long-term savings and retirement planning.
  • Importantly, the authors provide evidence that a strategically oriented default setting from the outset is likely not enough to fully nudge people toward the best possible decisions in retirement savings. They find that the default settings in every major dimension of the savings process (from deciding whether to participate in a 401(k) to how to withdraw money at retirement) have real and distinct effects on behavior.

Cabinet Office and Behavioural Insights Team, United Kingdom. “Applying Behavioural Insights to Health.” December 2010. http://bit.ly/1eFP16J.

  • In this report, the United Kingdom’s Behavioural Insights Team does not attempt to “suggest that behaviour change techniques are the silver bullet that can solve every problem.” Rather, they explore a variety of examples where local authorities, charities, government, and the private sector are using behavioural interventions to encourage healthier behaviors.
  • The report features case studies on the ability of behavioral insights to affect the following public health issues:
    • Smoking
    • Organ donation
    • Teenage pregnancy
    • Alcohol
    • Diet and weight
    • Diabetes
    • Food hygiene
    • Physical activity
    • Social care
  • The report concludes with a call for more experimentation and knowledge gathering to determine when, where and how behavioural interventions can be most effective in helping the public become healthier.

Darling, Matthew, Saugato Datta and Sendhil Mullainathan. “The Nature of the BEast: What Behavioral Economics Is Not.” The Center for Global Development. October 2013. https://bit.ly/2QytRmf.

  • In this paper, Darling, Datta and Mullainathan outline the three most pervasive myths that abound within the literature about behavioral economics:
    • First, they dispel the notion that behavioral economics is about control. Although tools used within behavioral economics can convince people to make certain choices, the goal is to nudge people toward the choices they already want to make. For example, studies find that when retirement savings plans enroll workers by default (so that workers must opt out rather than opt in), more workers end up saving in 401(k) plans. This is an example of a nudge that guides people toward a choice they already intend to make.
    • Second, they reiterate that the field is politically agnostic. Both liberals and conservatives have adopted behavioral economics, and its approach is neither inherently liberal nor conservative. President Obama has embraced behavioral economics, but so has the United Kingdom’s Conservative Party.
    • And thirdly, the article highlights that irrationality actually has little to do with behavioral economics. Context is an important consideration when one considers what behavior is rational and what behavior is not. Rather than use the term “irrational” to describe human beings, the authors assert that humans are “infinitely complex” and behavior that is often considered irrational is entirely situational.

Schoar, Antoinette and Saugato Datta. “The Power of Heuristics.” Ideas42. January 2014. https://bit.ly/2UDC5YK.

  • This paper explores the notion that being presented with a bevy of options can be desirable in many situations, but when making an intelligent decision requires a high-level understanding of the nuances of vastly different financial aid packages, for example, options can overwhelm. Heuristics (rules of thumb) provide helpful guidelines that “enable people to make ‘reasonably good’ decisions without needing to understand all the complex nuances of the situation.”
  • The underlying goal of heuristics in the policy space is to give people the kind of rules of thumb that enable good decisionmaking on complex topics such as finance, healthcare and education. The authors point to the benefit of asking individuals to remember smaller pieces of knowledge by referencing a series of studies conducted by psychologists Beatty and Kahneman that showed people were better able to remember long strings of numbers when they were broken into smaller segments.
  • Schoar and Datta recommend these four rules when implementing heuristics:
    • Use heuristics where possible, particularly in complex situations;
    • Leverage new technology (such as text messages and Internet-based tools) to implement heuristics;
    • Determine where heuristics can be used in adult training programs and replace in-depth training programs with heuristics where possible; and
    • Consider how to apply heuristics in situations where the exception is the rule. The authors point to the example of savings and credit card debt. In most instances, saving a portion of one’s income is a good rule of thumb. However, when one has high credit card debt, paying off debt could be preferable to building one’s savings.

Thaler, Richard H. and Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008. https://bit.ly/2kNXroe.

  • This book, likely the single piece of scholarship most responsible for bringing the concept of nudges into the public consciousness, explores how a strategic “choice architecture” can help people make the best decisions.
  • Thaler and Sunstein, while advocating for the wider and more targeted use of nudges to help improve people’s lives without resorting to overly paternalistic regulation, look to five common nudges for lessons and inspiration:
    • The design of menus gets you to eat (and spend) more;
    • “Flies” in urinals improve, well, aim;
    • Credit card minimum payments affect repayment schedules;
    • Automatic savings programs increase savings rate; and
    • “Defaults” can improve rates of organ donation.
  • In the simplest terms, the authors propose the wider deployment of choice architectures that follow “the golden rule of libertarian paternalism: offer nudges that are most likely to help and least likely to inflict harm.”

U.K. Parliament Science and Technology Committee. “Behaviour Change.” July 2011. http://bit.ly/1cbYv5j.

  • This report from the U.K.’s Science and Technology Committee explores the government’s attempts to influence the behavior of its citizens through nudges, with a focus on comparing the effectiveness of nudges to that of regulatory interventions.
  • The report’s central conclusion is that, “non-regulatory measures used in isolation, including ‘nudges,’ are less likely to be effective. Effective policies often use a range of interventions.”
  • The report’s other major findings and recommendations are:
    • Government must invest in gathering more evidence about what measures work to influence population behaviour change;
    • The Government should appoint an independent Chief Social Scientist to provide robust and independent scientific advice;
    • The Government should take steps to implement a traffic light system of nutritional labelling on all food packaging; and
    • Current voluntary agreements with businesses in relation to public health have major failings. They are not a proportionate response to the scale of the problem of obesity and do not reflect the evidence about what will work to reduce obesity. If effective agreements cannot be reached, or if they show minimal benefit, the Government should pursue regulation.

DARPA Open Catalog Makes Agency-Sponsored Software and Publications Available to All


Press Release: “Public website aims to encourage communities interested in DARPA research to build off the agency’s work, starting with big data…
DARPA has invested in many programs that sponsor fundamental and applied research in areas of computer science, which have led to new advances in theory as well as practical software. The R&D community has asked about the availability of results, and now DARPA has responded by creating the DARPA Open Catalog, a place for organizing and sharing those results in the form of software, publications, data and experimental details. The Catalog can be found at http://go.usa.gov/BDhY.
Many DoD and government research efforts and software procurements contain publicly releasable elements, including open source software. The nature of open source software lends itself to collaboration where communities of developers augment initial products, build on each other’s expertise, enable transparency for performance evaluation, and identify software vulnerabilities. DARPA has an open source strategy for areas of work including big data to help increase the impact of government investments in building a flexible technology base.
“Making our open source catalog available increases the number of experts who can help quickly develop relevant software for the government,” said Chris White, DARPA program manager. “Our hope is that the computer science community will test and evaluate elements of our software and afterward adopt them as either standalone offerings or as components of their products.”

"Natural Cities" Emerge from Social Media Location Data


Emerging Technology From the arXiv: “Nobody agrees on how to define a city. But the emergence of “natural cities” from social media data sets may change that, say computational geographers…
A city is a large, permanent human settlement. But try and define it more carefully and you’ll soon run into trouble. A settlement that qualifies as a city in Sweden may not qualify in China, for example. And the reasons why one settlement is classified as a town while another as a city can sometimes seem almost arbitrary.
City planners know this problem well. They tend to define cities by administrative, legal or even historical boundaries that have little logic to them. Indeed, the same city can sometimes be defined in several different ways.
That causes all kinds of problems from counting the total population to working out who pays for the upkeep of the place.  Which definition do you use?
Now help may be at hand thanks to the work of Bin Jiang and Yufan Miao at the University of Gävle in Sweden. These guys have found a way to use people’s locations, recorded by social media, to define the boundaries of so-called natural cities, which closely resemble real cities in the US.
Jiang and Miao began with a dataset from the Brightkite social network, which was active between 2008 and 2010. The site encouraged users to log in with their location details so that they could see other users nearby. So the dataset consists of almost 3 million locations in the US and the dates on which they were logged.
To start off, Jiang and Miao simply placed a dot on a map at the location of each login. They then connected these dots to their neighbours to form triangles that end up covering the entire mainland US.
Next, they calculated the size of each triangle on the map and plotted the size distribution, which turns out to follow a power law. So there are lots of tiny triangles but only a few large ones.
Finally, they calculated the average size of the triangles and then coloured in all those that were smaller than average. The coloured areas are “natural cities”, say Jiang and Miao.
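The procedure described above is simple enough to sketch in code. The version below is an illustrative approximation, not the authors’ published implementation: it assumes NumPy and SciPy are available, uses a Delaunay triangulation to connect each login point to its neighbours, and keeps the triangles whose area falls below the average.

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_cities(points):
    """Triangulate login locations and keep the smaller-than-average
    triangles; contiguous patches of kept triangles approximate the
    'natural cities' described by Jiang and Miao."""
    tri = Delaunay(points)
    p = points[tri.simplices]              # (n_triangles, 3, 2) vertex coords
    a, b, c = p[:, 0], p[:, 1], p[:, 2]
    # Triangle area via the shoelace formula.
    areas = 0.5 * np.abs(
        (b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
        - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1])
    )
    # "Colour in" (keep) the triangles smaller than the average size.
    return tri.simplices[areas < areas.mean()], areas

# Toy data: one dense cluster (a "city") amid sparse rural check-ins.
rng = np.random.default_rng(0)
cluster = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(300, 2))
rural = rng.uniform(low=-5.0, high=5.0, size=(30, 2))
points = np.vstack([cluster, rural])

city_triangles, areas = natural_cities(points)
```

Even in this toy data the heavy-tailed size distribution the article mentions shows up: a handful of huge triangles spanning the sparse region pulls the mean area well above the median, so the below-average triangles concentrate inside the dense cluster.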
It’s easy to imagine that the resulting map of triangles is of little value. But to the evident surprise of the researchers, it produces a pretty good approximation of the cities in the US. “We know little about why the procedure works so well but the resulting patterns suggest that the natural cities effectively capture the evolution of real cities,” they say.
That’s handy because it suddenly gives city planners a way to study and compare cities on a level playing field. It allows them to see how cities evolve and change over time too. And it gives them a way to analyse how cities in different parts of the world differ.
Of course, Jiang and Miao will want to find out why this approach reveals city structures in this way. That’s still something of a puzzle but the answer itself may provide an important insight into the nature of cities (or at least into the nature of this dataset).
A few days ago, this blog wrote about how a new science of cities is emerging from the analysis of big data. This is another example, and we can expect to see more.
Ref:  http://arxiv.org/abs/1401.6756 : The Evolution of Natural Cities from the Perspective of Location-Based Social Media”