Jane Sarasohn-Kahn at iHealthBeat: “The routine operation of modern health care systems produces an abundance of electronically stored data on an ongoing basis,” Sebastian Schneeweiss writes in a recent New England Journal of Medicine Perspective.
Is this abundance of data a treasure trove for improving patient care and growing knowledge about effective treatments? Or is that data trove a Pandora’s box that can be mined by obscure third parties to benefit for-profit companies without rewarding those whose data are said to be the new currency of the economy: the patients themselves?
In this emerging world of data analytics in health care, there’s Big Data and there’s My Data (“small data”). Who most benefits from the use of My Data may not actually be the consumer.
Big focus on Big Data. Several reports published in the first half of 2014 talk about the promise and perils of Big Data in health care. The Federal Trade Commission’s study, titled “Data Brokers: A Call for Transparency and Accountability,” analyzed the business practices of nine “data brokers,” companies that buy and sell consumers’ personal information from a broad array of sources. Data brokers sell consumers’ information to buyers looking to use those data for marketing, managing financial risk or identifying people. There are health implications in all of these activities, and the use of such data generally is not covered by HIPAA. The report discusses the example of a data segment called “Smoker in Household,” which a company selling a new air filter for the home could use to target-market to an individual who might seek such a product. On the downside, without the consumers’ knowledge, the information could be used by a financial services company to identify the consumer as a bad health insurance risk.
“Big Data and Privacy: A Technological Perspective,” a report from the President’s Council of Advisors on Science and Technology, considers the growth of Big Data’s role in helping inform new ways to treat diseases and presents two scenarios of the “near future” of health care. The first, on personalized medicine, recognizes that not all patients are alike or respond identically to treatments. Data collected from a large number of similar patients (such as digital images, genomic information and granular responses to clinical trials) can be mined to develop a treatment with an optimal outcome for the patients. In this case, patients may have provided their data based on the promise of anonymity but would like to be informed if a useful treatment has been found. In the second scenario, detecting symptoms via mobile devices, people wishing to detect early signs of Alzheimer’s disease in themselves use a mobile device connecting to a personal coach in the Internet cloud that supports and records activities of daily living: say, gait when walking, notes on conversations and physical navigation instructions. For both of these scenarios, the authors ask, “Can the information about individuals’ health be sold, without additional consent, to third parties? What if this is a stated condition of use of the app? Should information go to the individual’s personal physicians with their initial consent but not a subsequent confirmation?”
The World Privacy Forum’s report, titled “The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future,” describes the growing market for developing indices on consumer behavior, identifying over a dozen health-related scores. Health scores include the Affordable Care Act Individual Health Risk Score, the FICO Medication Adherence Score, various frailty scores, personal health scores (from WebMD and OneHealth, whose default sharing setting is based on the user’s sharing setting with the RunKeeper mobile health app), Medicaid Resource Utilization Group Scores, the SF-36 survey on physical and mental health and complexity scores (such as the Aristotle score for congenital heart surgery). WPF presents a history of consumer scoring beginning with the FICO score for personal creditworthiness and recommends regulatory scrutiny on the new consumer scores for fairness, transparency and accessibility to consumers.
At the same time these three reports went to press, scores of news stories emerged discussing the Big Opportunities Big Data present. The June issue of CFO Magazine published a piece called “Big Data: Where the Money Is.” InformationWeek published “Health Care Dives Into Big Data,” Motley Fool wrote about “Big Data’s Big Future in Health Care” and WIRED called “Cloud Computing, Big Data and Health Care” the “trifecta.”
Well-timed on June 5, the Office of the National Coordinator for Health IT’s Roadmap for Interoperability was detailed in a white paper, titled “Connecting Health and Care for the Nation: A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure.” The document envisions the long view for the U.S. health IT ecosystem enabling people to share and access health information, ensuring quality and safety in care delivery, managing population health, and leveraging Big Data and analytics. Notably, “Building Block #3” in this vision is ensuring privacy and security protections for health information. ONC will “support developers creating health tools for consumers to encourage responsible privacy and security practices and greater transparency about how they use personal health information.” Looking forward, ONC notes the need for “scaling trust across communities.”
Consumer trust: going, going, gone? In the stakeholder community of U.S. consumers, there is declining trust between people and the companies and government agencies with whom people deal. Only 47% of U.S. adults trust companies with whom they regularly do business to keep their personal information secure, according to a June 6 Gallup poll. Furthermore, 37% of people say this trust has decreased in the past year. Who’s most trusted to keep information secure? Banks and credit card companies come in first place, trusted by 39% of people, and health insurance companies come in second, trusted by 26% of people.
Trust is a basic requirement for health engagement. Health researchers need patients to share personal data to drive insights, knowledge and treatments back to the people who need them. PatientsLikeMe, the online social network, launched the Data for Good project to inspire people to share personal health information, imploring them to “Donate your data for You. For Others. For Good.” For 10 years, patients have been sharing personal health information on the PatientsLikeMe site, which has developed trusted relationships with more than 250,000 community members…”
The Art and Science of Data-driven Journalism
Alex Howard for the Tow Center for Digital Journalism: “Journalists have been using data in their stories for as long as the profession has existed. A revolution in computing in the 20th century created opportunities for data integration into investigations, as journalists began to bring technology into their work. In the 21st century, a revolution in connectivity is leading the media toward new horizons. The Internet, cloud computing, agile development, mobile devices, and open source software have transformed the practice of journalism, leading to the emergence of a new term: data journalism. Although journalists have been using data in their stories for as long as they have been engaged in reporting, data journalism is more than traditional journalism with more data. Decades after early pioneers successfully applied computer-assisted reporting and social science to investigative journalism, journalists are creating news apps and interactive features that help people understand data, explore it, and act upon the insights derived from it. New business models are emerging in which data is a raw material for profit, impact, and insight, co-created with an audience that was formerly reduced to passive consumption. Journalists around the world are grappling with the excitement and the challenge of telling compelling stories by harnessing the vast quantity of data that our increasingly networked lives, devices, businesses, and governments produce every day. While the potential of data journalism is immense, the pitfalls and challenges to its adoption throughout the media are similarly significant, from digital literacy to competition for scarce resources in newsrooms. Global threats to press freedom, digital security, and limited access to data create difficult working conditions for journalists in many countries.
A combination of peer-to-peer learning, mentorship, online training, open data initiatives, and new programs at journalism schools rising to the challenge, however, offers reasons to be optimistic about more journalists learning to treat data as a source.”
Crowdsourcing and social search
Lyndsey Gilpin at TechCrunch: “When we think of the sharing economy, what often comes to mind are sites like Airbnb, Lyft, or Feastly — the platforms that allow us to meet people for a specific reason, whether that’s a place to stay, a ride, or a meal.
But what about sharing something much simpler than that, like answers to our questions about the world around us? Sharing knowledge with strangers can offer us insight into a place we are curious about or trying to navigate, and in a more personal, efficient way than using traditional web searches.
“Sharing an answer or response to [a] question, that is true sharing. There’s no financial or monetary exchange based on that. It’s the true meaning of [the word],” said Maxime Leroy, co-founder and CEO of a new app called Enquire.
Enquire is a new question-and-answer app, but it is unlike others in the space. You don’t have to log in via Facebook or Twitter, use SMS messaging like on Quest, or upload an image like you do on Jelly. None of these apps have taken off yet, which could be good or bad for Enquire just entering the space.
With Enquire, simply log in with a username and password and it will unlock the neighborhood you are in (the app only works in San Francisco, New York, and Paris right now). There are lists of answers to other questions, or you can post your own. If 200 people in a city sign up, the app will become available to them, which is an effort to make sure there is a strong community to gather answers from.
Leroy, who recently made a documentary about the sharing economy, realized there was “one tool missing for local communities” in the space, and decided to create this app.
“We want to build a more local-based network, and empower and increase trust without having people share all their identity,” he said.
Different social channels look at search in different ways, but the trend is definitely moving to more social searching or location-based searching, according to Altimeter social media analyst Rebecca Lieb. Arguably, she said, Yelp, Groupon, and even Google Maps are vertical search engines. If you want to find a nearby restaurant, pharmacy, or deal, you look to these platforms.
However, she credits Aardvark, a social search engine founded in 2007 that used instant messaging and email to get answers from your existing contacts, as one of the first in the space. Google bought the company in 2010. It shows the idea of crowdsourcing answers isn’t new, but the engines have become “appified,” she said.
“Now it’s geo-local specific,” she said. “We’re asking a lot more of those geo-local questions because of location-based immediacy [that we want].”
Think Seamless, with which you find the food nearby that most satisfies your appetite. Even Tinder and Grindr are social search engines, Lieb said. You want to meet up with the people that are closest to you, geographically….
His challenge is to offer rewards to entice people to sign up for the app. Eventually, Leroy would like to strengthen the networks and scale Enquire to cities and neighborhoods all over the world. Once that’s in place, people can start creating their own neighborhoods — around a school or workplace, where they hang out regularly — instead of using the existing constraints.
“I may be an expert in one area, and a newbie in another. I want to emphasize the activity and content from users to give them credit to other users and build that trust,” he said.
Usually, our first instinct is to open Yelp to find the best sushi restaurant or Google to search the closest concert venue, and it will probably stay that way for some time. But the idea that the opinions and insights of other human beings, even strangers, are becoming much more valuable because of the internet is not far-fetched.
Admit it: haven’t you had a fleeting thought of starting a Kickstarter campaign for an idea? Looked for a cheaper place to stay on Airbnb than that hotel you normally book in New York? Or considered financing someone’s business idea across the world using Kiva? If so, then you’ve engaged in social search.
Suddenly, crowdsourcing answers for the things that pique your interest on your morning walk may not seem so strange after all.”
Selected Readings on Crowdsourcing Tasks and Peer Production
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.
Selected Reading List (in alphabetical order)
- Yochai Benkler — The Wealth of Networks: How Social Production Transforms Markets and Freedom — a book on the ways commons-based peer-production is transforming modern society.
- Daren C. Brabham — Using Crowdsourcing in Government — a report describing the diverse methods by which crowdsourcing could be better utilized by governments, including through the leveraging of microtasking platforms.
- Kevin J. Boudreau, Patrick Gaule, Karim Lakhani, Christoph Riedl, Anita Williams Woolley — From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers — a working paper exploring the conditions, including incentives, that affect online collaboration.
- Chiara Franzoni and Henry Sauermann — Crowd Science: The Organization of Scientific Research in Open Collaborative Projects — a paper describing the potential advantages of deploying crowd science in a variety of contexts.
- Aniket Kittur, Ed H. Chi and Bongwon Suh — Crowdsourcing User Studies with Mechanical Turk — a paper proposing potential benefits beyond simple task completion for microtasking platforms like Mechanical Turk.
- Aniket Kittur, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton — The Future of Crowd Work — a paper describing how crowd work might evolve and the promise it holds for workers and the global economy.
- Michael J. Madison — Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo — an in-depth case study of Galaxy Zoo containing insights regarding the importance of clear objectives and institutional and/or professional collaboration in citizen science initiatives.
- Thomas W. Malone, Robert Laubacher and Chrysanthos Dellarocas — Harnessing Crowds: Mapping the Genome of Collective Intelligence — an article proposing a framework for understanding collective intelligence efforts.
- Geoff Mulgan — True Collective Intelligence? A Sketch of a Possible New Field — a paper proposing theoretical building blocks and an experimental and research agenda around the field of collective intelligence.
- Henry Sauermann and Chiara Franzoni — Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation — a paper exploring the role of interest-based motivation in collaborative knowledge production.
- Catherine E. Schmitt-Sands and Richard J. Smith — Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk — an article describing an experiment using Mechanical Turk to crowdsource public policy research microtasks.
- Clay Shirky — Here Comes Everybody: The Power of Organizing Without Organizations — a book exploring the ways largely unstructured collaboration is remaking practically all sectors of modern life.
- Jonathan Silvertown — A New Dawn for Citizen Science — a paper examining the diverse factors influencing the emerging paradigm of “science by the people.”
- Katarzyna Szkuta, Roberto Pizzicannella, David Osimo — Collaborative approaches to public sector innovation: A scoping study — an article studying success factors and incentives around the collaborative delivery of online public services.
Annotated Selected Reading List (in alphabetical order)
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.
- In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
- In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”
Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.
- In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
- The proposed four-part typology describes the following types of crowdsourcing in government:
- Knowledge Discovery and Management
- Distributed Human Intelligence Tasking
- Broadcast Search
- Peer-Vetted Creative Production
- In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
- Governments and scholars transcribing historical document scans
- Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
- Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
- Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”
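Scenarios like the brochure translation above map naturally onto Mechanical Turk’s HIT (Human Intelligence Task) model. As a purely hypothetical illustration (not drawn from Brabham’s report), the sketch below assembles the request parameters for a single translation microtask; the field names follow the AWS Mechanical Turk `CreateHIT` API, while the reward, durations and text are invented values:

```python
# Hypothetical sketch: defining a Mechanical Turk microtask for translating
# one paragraph of a public-health brochure. Field names follow the AWS
# MTurk CreateHIT API; concrete values are illustrative assumptions.

def build_translation_hit(paragraph: str, target_language: str) -> dict:
    """Assemble the request parameters for a single translation HIT."""
    question_xml = f"""<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>translation</QuestionIdentifier>
    <QuestionContent>
      <Text>Translate into {target_language}: {paragraph}</Text>
    </QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""
    return {
        "Title": f"Translate a short health notice into {target_language}",
        "Description": "Translate one paragraph of a public-health brochure.",
        "Reward": "0.10",              # USD per assignment (illustrative)
        "MaxAssignments": 3,           # redundant answers allow cross-checking
        "AssignmentDurationInSeconds": 600,
        "LifetimeInSeconds": 86400,
        "Question": question_xml,
    }

params = build_translation_hit("Wash your hands frequently.", "Spanish")
```

With AWS credentials configured, a dict like this could be passed to `boto3.client("mturk").create_hit(**params)`; requesting several assignments per paragraph is one simple way to cross-check quality, a concern the Kittur et al. paper below also raises.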
Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.
- In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
- The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
- The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. However, collaboration within teams was driven more by the active participation of teammates than by any monetary reward.
Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.
- In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
- Based on the study of a number of crowd science projects – including governance-related initiatives like PatientsLikeMe – the authors identify a number of potential benefits in the following categories:
- Knowledge-related benefits
- Benefits from open participation
- Benefits from the open disclosure of intermediate inputs
- Motivational benefits
- The authors also identify a number of challenges:
- Organizational challenges
- Matching projects and people
- Division of labor and integration of contributions
- Project leadership
- Motivational challenges
- Sustaining contributor involvement
- Supporting a broader set of motivations
- Reconciling conflicting motivations
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.
- In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
- The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”
Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2012. http://bit.ly/1c1GJD3.
- In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
- The authors argue that several key challenges must be met to ensure that crowd work processes evolve and reach their full potential, including:
- Designing workflows
- Assigning tasks
- Supporting hierarchical structure
- Enabling real-time crowd work
- Supporting synchronous collaboration
- Controlling quality
Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.
- This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [Galaxy Zoo] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
- Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”
Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.
- In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
- The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
- Who is performing the task? Why are they doing it?
- What is being accomplished? How is it being done?
- The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
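As a rough illustration of how the framework can be put to work (the class and the example classifications below are my own, not the authors’), the two question pairs can be treated as a small schema along which any collective-intelligence effort is profiled and then filtered or compared:

```python
# Illustrative sketch: representing Malone et al.'s four "genes" of
# collective intelligence (Who, Why, What, How) as a simple data structure.
# The example classifications are hypothetical, for demonstration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Genome:
    who: str    # Who performs the task? (e.g., "crowd" or "hierarchy")
    why: str    # Why do they do it? (e.g., "money", "love", "glory")
    what: str   # What is being accomplished? (e.g., "create", "decide")
    how: str    # How is it done? (e.g., "collaboration", "voting")

# Hypothetical profiles of two well-known collective-intelligence efforts:
examples = {
    "Wikipedia article writing": Genome("crowd", "love", "create", "collaboration"),
    "Threadless t-shirt voting": Genome("crowd", "glory", "decide", "voting"),
}

# Systematically query the map, e.g. for efforts with non-monetary motivation:
non_monetary = [name for name, g in examples.items() if g.why != "money"]
```

Profiling efforts this way makes the authors’ point concrete: a manager can enumerate combinations of answers to Who, Why, What, and How and ask which combinations fit a given problem.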
Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.
- In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
- With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
- He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”
Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.
- In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
- The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” rather than being a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
- They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”
Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.
- In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
- Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy program and scan.” The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”
Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.
- In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
- Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”
Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.
- This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “Science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
- Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
- “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
- A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
- Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”
Szkuta, Katarzyna, Roberto Pizzicannella, David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.
- In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
- The authors focus on six types of collaborative governance projects, including:
- Services initiated by government built on government data;
- Services initiated by government and making use of citizens’ data;
- Services initiated by civil society built on open government data;
- Collaborative e-government services; and
- Services run by civil society and based on citizen data.
- The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”
Cluster mapping
“The U.S. Cluster Mapping Project is a national economic initiative that provides open, interactive data to understand regional clusters and support business, innovation and policy in the United States. It is based at the Institute for Strategy and Competitiveness at Harvard Business School, with support from a number of partners and a federal grant from the U.S. Department of Commerce’s Economic Development Administration.
Research
The project provides a robust cluster mapping database grounded in the leading academic research. Professor Michael Porter pioneered the comprehensive mapping of clusters in the U.S. economy in the early 2000s. The research team from Harvard, MIT, and Temple used the latest Census and industry data to develop a new algorithm to define cluster categories that cover the entire U.S. economy. These categories enable comparative analyses of clusters across any region in the United States….
Impact
Research on the presence of regional clusters has recently oriented economic policy toward addressing the needs of clusters and mobilizing their potential. Four regional partners in Massachusetts, Minnesota, Oregon, and South Carolina produced a set of case studies that discuss how regions have organized economic policy around clusters. These cases form the core of a resource library that aims to disseminate insights and strengthen the community of practice in cluster-based economic development. The project will also take an international scope to benefit cross-border industries in North America and inform collective global dialogue around cluster-based economic development.”
A brief history of open data
Article by Luke Fretwell in FCW: “In December 2007, 30 open-data pioneers gathered in Sebastopol, Calif., and penned a set of eight open-government data principles that inaugurated a new era of democratic innovation and economic opportunity.
“The objective…was to find a simple way to express values that a bunch of us think are pretty common, and these are values about how the government could make its data available in a way that enables a wider range of people to help make the government function better,” Harvard Law School Professor Larry Lessig said. “That means more transparency in what the government is doing and more opportunity for people to leverage government data to produce insights or other great business models.”
The eight simple principles — that data should be complete, primary, timely, accessible, machine-processable, nondiscriminatory, nonproprietary and license-free — still serve as the foundation for what has become a burgeoning open-data movement.
In the seven years since those principles were released, governments around the world have adopted open-data initiatives and launched platforms that empower researchers, journalists and entrepreneurs to mine this new raw material and its potential to uncover new discoveries and opportunities. Open data has drawn civic hacker enthusiasts around the world, fueling hackathons, challenges, apps contests, barcamps and “datapaloozas” focused on issues as varied as health, energy, finance, transportation and municipal innovation.
In the United States, the federal government initiated the beginnings of a wide-scale open-data agenda on President Barack Obama’s first day in office in January 2009, when he issued his memorandum on transparency and open government, which declared that “openness will strengthen our democracy and promote efficiency and effectiveness in government.” The president gave federal agencies three months to provide input into an open-government directive that would eventually outline what each agency planned to do with respect to civic transparency, collaboration and participation, including specific objectives related to releasing data to the public.
In May of that year, Data.gov launched with just 47 datasets and a vision to “increase public access to high-value, machine-readable datasets generated by the executive branch of the federal government.”
When the White House issued the final draft of its federal Open Government Directive later that year, the U.S. open-government data movement got its first tangible marching orders, including a 45-day deadline to open previously unreleased data to the public.
Now five years after its launch, Data.gov boasts more than 100,000 datasets from 227 local, state and federal agencies and organizations….”
Open Government Will Reshape Latin America
Alejandro Guerrero at Medium: “When people think of where innovation happens, they typically think of innovation spurred by large firms and small startups based in the US, and particularly in that narrow stretch of land and water called Silicon Valley.
However, the flux of innovation taking place in the intersection between technology and government is phenomenal and emerging everywhere. From the marble hallways of parliaments everywhere —including Latin America’s legislative houses— to office hubs of tech-savvy non-profits full of enthusiastic social changers —also including Latin American startups— a driving force is starting to challenge our conception of how government and citizens can and should interact. And few people are discussing or analyzing these developments.
Open Government in Latin America
The potential for Open Government to improve government’s decision-making and performance is huge. And it is particularly immense in middle income countries such as the ones in Latin America, where the combination of growing incomes, more sophisticated citizens’ demands, and broken public services is generating a large bottom-up pressure and requesting more creative solutions from governments to meet the enormous social needs, while cutting down corruption and improving governance.
It is unsurprising that citizens from all over Latin America are increasingly taking the streets and demanding better public services and more transparent institutions.
While these protests are necessarily short-lived and unarticulated — a product of growing frustration with government — they are a symptom with deeper causes that won’t easily go away. These protests will most likely return with increasing frequency, and the unresolved frustration may eventually transmute into political platforms with more radical ideas to challenge the status quo.
Behind the scene, governments across the region still face enormous weaknesses in public management, ill-prepared and underpaid public officials carry on with their duties as the platonic idea of a demotivated workforce, and the opportunities for corruption, waste, and nepotism are plenty. The growing segment of more affluent citizens simply opt out from government and resort to private alternatives, thus exacerbating inequalities in the already most unequal region in the world. The crumbling middle classes and the poor can just resort to voicing their complaints. And they are increasingly doing so.
And here is where open government initiatives might play a transformative role, disrupting the way governments make decisions and work while empowering citizens in the process.
The preconditions for OpenGov are almost here
In Latin America, connectivity rates are growing fast (reaching 61% in 2013 for the Americas as a whole), close to 90% of the population owns a cellphone, and access to higher levels of education keeps growing (as an example, the latest PISA report indicates that Mexico’s high-school enrollment went from 58% in 2003 to 70% in 2012). The social conditions for a stronger role of citizens in government are increasingly there.
Moreover, most Latin American countries passed transparency laws during the 2000s, creating the enabling environment for open government initiatives to flourish. It is thus unsurprising that the next generation of young government bureaucrats, on average more internet-savvy and better educated than its predecessors, is taking over and embracing innovations in government. And they are finding echo (and suppliers of ideas and apps!) among local startups and civil society groups, while also being courted by large tech corporations (think of Google or Microsoft) behind succulent government contracts associated with this form of “doing good”.
This is an emerging galaxy of social innovators, technologically-savvy bureaucrats, and engaged citizens providing a large crowd-sourcing community and an opportunity to test different approaches. And the underlying tectonic shifts are pushing governments towards that direction. For a sampler, check out the latest developments for Brazil, Argentina, Peru, Mexico, Colombia, Paraguay, Chile, Panama, Costa Rica, Guatemala, Honduras, Dominican Republic, Uruguay and (why not?) my own country, which I will include in the review often for the surprisingly limited progress of open government in this OECD member, which shares similar institutions and challenges with Latin America.
A Road Full of Promise…and Obstacles
Most of the progress in Latin America is quite recent, and the real impact is often more limited once you abandon the halls of the Digital Government directorates and secretarías, or look beyond the typical government data portal. The resistance to change is as human as laughing, but it is particularly intense among the public sector side of human beings. Politics also typically plays an enormous role in resisting transparency and open government, and in a context of weak institutions and pervasive corruption, the temptation to politically block or water down open data/open government projects is just too high. Selective release of data (if any) is too frequent, government agencies often act as silos by not sharing information with other government departments, and irrational fears by policy-makers combined with adoption barriers (well explained here) all contribute to deter the progress of the open government promise in Latin America…”
US Secret Service seeks Twitter sarcasm detector
BBC: “The agency has put out a work tender looking for a software system to analyse social media data.
The software should have, among other things, the “ability to detect sarcasm and false positives”.
A spokesman for the service said it currently used the Federal Emergency Management Agency’s Twitter analytics and needed its own, adding: “We aren’t looking solely to detect sarcasm.”
The Washington Post quoted Ed Donovan as saying: “Our objective is to automate our social media monitoring process. Twitter is what we analyse.
“This is real-time stream analysis. The ability to detect sarcasm and false positives is just one of 16 or 18 things we are looking at.”…
The tender was put out earlier this week on the US government’s Federal Business Opportunities website.
It sets out the objectives of automating social media monitoring and “synthesising large sets of social media data”.
Specific requirements include “audience and geographic segmentation” and analysing “sentiment and trend”.
The software also has to have “compatibility with Internet Explorer 8”. The browser was released more than five years ago.
The agency does not detail the purpose of the analysis but does set out its mission, which includes “preserving the integrity of the economy and protecting national leaders and visiting heads of state and government”.
Open Data Is Open for Business
Jeffrey Stinson at Stateline: ” Last month, web designer Sean Wittmeyer and colleague Wojciech Magda walked away with a $25,000 prize from the state of Colorado for designing an online tool to help businesses decide where to locate in the state.
The tool, called “Beagle Score,” is a widget that can be embedded in online commercial real estate listings. It can rate a location by taxes and incentives, zoning, even the location of possible competitors – all derived from about 30 data sets posted publicly by the state of Colorado and its municipalities.
The creation of Beagle Score is an example of how states, cities, counties and the federal government are encouraging entrepreneurs to take raw government data posted on “open data” websites and turn the information into products the public will buy.
“The (Colorado contest) opened up a reason to use the data,” said Wittmeyer, 25, of Fort Collins. “It shows how ‘open data’ can solve a lot of challenges. … And absolutely, we can make it commercially viable. We can expand it to other states, and fairly quickly.”
Open-data advocates, such as President Barack Obama’s former information chief Vivek Kundra, estimate that a multibillion-dollar industry can be spawned by taking raw government data files on sectors such as weather, population, energy, housing, commerce or transportation and turning them into products for the public to consume or other industries to pay for.
They can be as simple as mobile phone apps identifying every stop sign you will encounter on a trip to a different town, or as intricate as taking weather and crops data and turning it into insurance policies farmers can buy.
States, Cities Sponsor ‘Hackathons’
At least 39 states and 46 cities and counties have created open-data sites since the federal government, Utah, California and the cities of San Francisco and Washington, D.C., began opening data in 2009, according to the federal site, Data.gov.
Jeanne Holm, the federal government’s Data.gov “evangelist,” said new sites are popping up and new data are being posted almost daily. The city of Los Angeles, for example, opened a portal last week.
In March, Democratic New York Gov. Andrew Cuomo said that in the year since it was launched, his state’s site has grown to some 400 data sets with 50 million records from 45 agencies. Available are everything from horse injuries and deaths at state race tracks to maps of regulated child care centers. The most popular data: top fishing spots in the state.
State and local governments are sponsoring “hackathons,” “data paloozas,” and challenges like Colorado’s, inviting businesspeople, software developers, entrepreneurs or anyone with a laptop and a penchant for manipulating data to take part. Lexington, Kentucky, had a civic hackathon last weekend. The U.S. Transportation Department and members of the Geospatial Transportation Mapping Association had a three-day data palooza that ended Wednesday in Arlington, Virginia.
The goals of the events vary. Some, like Arlington’s transportation event, solicit ideas for how government can present its data more effectively. Others seek ideas for mining it.
Aldona Valicenti, Lexington’s chief information officer, said many cities want advice on how to use the data to make government more responsive to citizens, and to communicate with them on issues ranging from garbage pickups and snow removal to upcoming civic events.
Colorado and Wyoming had a joint hackathon last month sponsored by Google to help solve government problems. Colorado sought apps that might be useful to state emergency personnel in tracking people and moving supplies during floods, blizzards or other natural disasters. Wyoming sought help in making its tax-and-spend data more understandable and usable by its citizens.
Unless there’s some prize money, hackers may not make a buck from events like these, and participate out of fun, curiosity or a sense of public service. But those who create an app that is useful beyond the boundaries of a particular city or state, or one that is commercially valuable to business, can make serious money – just as Beagle Score plans to do. Colorado will hold onto the intellectual property rights to Beagle Score for a year. But Wittmeyer and his partner will be able to profit from extending it to other states.
States Trail in Open Data
Open data is an outgrowth of the e-government movement of the 1990s, in which government computerized more of the data it collected and began making it available on floppy disks.
States often have trailed the federal government or many cities in adjusting to the computer age and in sharing information, said Emily Shaw, national policy manager for the Sunlight Foundation, which promotes transparency in government. The first big push to share came with public accountability, or “checkbook” sites, that show where government gets its revenue and how it spends it.
The goal was to make government more transparent and accountable by offering taxpayers information on how their money was spent.
The Texas Comptroller of Public Accounts site, established in 2007, offers detailed revenue, spending, tax and contracts data. Republican Comptroller Susan Combs’ office said having a one-stop electronic site also has saved taxpayers about $12.3 million in labor, printing, postage and other costs.
Not all states’ checkbook sites are as openly transparent and detailed as Texas, Shaw said. Nor are their open-data sites. “There’s so much variation between the states,” she said.
Many state legislatures are working to set policies for releasing data. Since the start of 2010, according to the National Conference of State Legislatures, nine states have enacted open-data laws, and more legislation is pending. But California, for instance, has been posting open data for five years without legislation setting policies.
Just as states have lagged in getting data out to the public, less of it has been turned into commercial use, said Joel Gurin, senior adviser at the Governance Lab at New York University and author of the book “Open Data Now.”
Gurin leads Open Data 500, which identifies firms that have made products from open government data and turned them into regional or national enterprises. In April, it listed 500. It soon may expand. “We’re finding more and more companies every day,” he said….
Making cities smarter through citizen engagement
Vaidehi Shah at Eco-Business: “Rapidly progressing information communications technology (ICT) is giving rise to an almost infinite range of innovations that can be implemented in cities to make them more efficient and better connected. However, in order for technology to yield sustainable solutions, planners must prioritise citizen engagement and strong leadership.
This was the consensus on Tuesday at the World Cities Summit 2014, where representatives from city and national governments, technology firms and private sector organisations gathered in Singapore to discuss strategies and challenges to achieving sustainable cities in the future.
Laura Ipsen, Microsoft corporate vice president for worldwide public sector, identified globalisation, social media, big data, and mobility as the four major technological trends prevailing in cities today, as she spoke at the plenary session with a theme on “The next urban decade: critical challenges and opportunities”.
Despite these increasing trends, she cautioned, “technology does not build infrastructure, but it does help better engage citizens and businesses through public-private partnerships”.
For example, “LoveCleanStreets”, an online tool developed by Microsoft and partners, enables London residents to report infrastructure problems such as damaged roads or signs, shared Ipsen.
“By engaging citizens through this application, cities can fix problems early, before they get worse,” she said.
In Singapore, the “MyWaters” app of PUB, Singapore’s national water agency, is also a key tool for the government to keep citizens up to date on water quality and safety issues in the country, she added.
Even if governments did not actively develop solutions themselves, simply making the immense amounts of data collected by the city open to businesses and citizens could make a big difference to urban liveability, Mark Chandler, director of the San Francisco Mayor’s Office of International Trade and Commerce, pointed out.
Opening up all of the data collected by San Francisco, for instance, yielded 60 free mobile applications that allow residents to access urban solutions related to public transport, parking, and electricity, among others, he explained. This easy and convenient access to infrastructure and amenities, which are a daily necessity, is integral to “a quality of life that keeps the talented workforce in the city,” Chandler said….”