Viscous Open Data: The Roles of Intermediaries in an Open Data Ecosystem


François van Schalkwyk, Michelle Willmers & Maurice McNaughton in Journal: “Information Technology for Development”: “Open data have the potential to improve the governance of universities as public institutions. In addition, open data are likely to increase the quality, efficacy and efficiency of the research and analysis of higher education systems by providing a shared empirical base for critical interrogation and reinterpretation. Drawing on research conducted by the Emerging Impacts of Open Data in Developing Countries project, and using an ecosystems approach, this research paper considers the supply, demand and use of open data as well as the roles of intermediaries in the governance of South African public higher education. It shows that government’s higher education database is a closed and isolated data source in the data ecosystem; and that the open data that are made available by government are inaccessible and rarely used. In contrast, government data made available by data intermediaries in the ecosystem are being used by key stakeholders. Intermediaries are found to play several important roles in the ecosystem: (i) they increase the accessibility and utility of data; (ii) they may assume the role of a “keystone species” in a data ecosystem; and (iii) they have the potential to democratize the impacts and use of open data. The article concludes that despite poor data provision by government, the public university governance open data ecosystem has evolved because intermediaries in the ecosystem have reduced the viscosity of government data. Further increasing the fluidity of government open data will improve access and ensure the sustainability of open data supply in the ecosystem….(More)”

A new model to explore non-profit social media use for advocacy and civic engagement


David Chapman, Katrina Miller-Stevens, John C. Morris, and Brendan O’Hallarn in First Monday: “In an age when electronic communication is ubiquitous, non-profit organizations are actively using social media platforms as a way to deliver information to end users. In spite of the broad use of these platforms, little scholarship has focused on the internal processes these organizations employ to implement these tools. A limited number of studies offer models to help explain an organization’s use of social media from initiation to outcomes, yet few studies address a non-profit organization’s mission as the driver to employ social media strategies and tactics. Furthermore, the effectiveness of social media use is difficult for non-profit organizations to measure. Studies that attempt to address this question have done so by viewing social media platform analytics (e.g., Facebook analytics) or analyzing written content by users of social media (Nah and Saxton, 2013; Auger, 2013; Uzunoğlu and Misci Kip, 2014; and Guo and Saxton, 2014). The added value of this study is to present a model for practice (Weil, 1997) that explores social media use and its challenges from a non-profit organization’s mission through its desired outcome, in this case an outcome of advocacy and civic engagement.

We focus on one non-profit organization, Blue Star Families, that actively engages in advocacy and civic engagement. Blue Star Families was formed in 2009 to “raise the awareness of the challenges of military family life with our civilian communities and leaders” (Blue Star Families, 2010). Blue Star Families is a virtual organization with no physical office location. Thus, the organization relies on its Web presence and social media tools to advocate for military families and engage service members and their families, communities, and citizens in civic engagement activities (Blue Star Families, 2010).

The study aims to provide organizational-level insights into the successes and challenges of working in the social media environment. Specifically, the study asks: What are the processes non-profit organizations follow to link organizational mission to outcomes when using social media platforms? What are the successes and challenges of using social media platforms for advocacy and civic engagement purposes? In our effort to answer these questions, we present a new model to explore non-profit organizations’ use of social media platforms by building on previous models and frameworks developed to explore the use of social media in the public, private, and non-profit sectors.

This research is important for three reasons. First, most previous studies of social media tend to employ models that focus on the satisfaction of the social media tools for organizational members, rather than the utility of social media as a tool to meet organizational goals. Our research offers a means to explore the utility of social media from an organizational perspective. Second, the exemplar case for our research, Blue Star Families, Inc., is a non-profit organization whose mission is to create and nurture a virtual community spread over a large geographical — if not global — area. Because Blue Star Families was founded as an online organization that could not exist without social media, it provides a case for which social media is a critical component of the organization’s activity. Finally, we offer some “lessons learned” from our case to identify issues for other organizations seeking to create a significant social media presence.

This paper is organized as follows: first, the growth of social media is briefly addressed to provide background context. Second, previous models and frameworks exploring social media are discussed. Third, we present a new model exploring the use of social media from an organizational perspective, starting with the driver of a non-profit organization’s mission and moving to its desired outcomes of advocacy and civic engagement. Fourth, the case study methodology is explained. Next, we present an analysis and discussion applying the new model to Blue Star Families’ use of social media platforms. We conclude by discussing the challenges of social media revealed in the case study analysis, and we offer recommendations to address these challenges….(More)”

The Quantified Community and Neighborhood Labs: A Framework for Computational Urban Planning and Civic Technology Innovation


Constantine E. Kontokosta: “This paper presents the conceptual framework and justification for a “Quantified Community” (QC) and a networked experimental environment of neighborhood labs. The QC is a fully instrumented urban neighborhood that uses an integrated, expandable, and participatory sensor network to support the measurement, integration, and analysis of neighborhood conditions, social interactions and behavior, and sustainability metrics to inform public decision-making. Through a diverse range of sensor and automation technologies — combined with existing data generated through administrative records, surveys, social media, and mobile sensors — information on human, physical, and environmental elements can be processed in real-time to better understand the interaction and effects of the built environment on human well-being and outcomes. The goal is to create an “informatics overlay” that can be incorporated into future urban development and planning that supports the benchmarking and evaluation of neighborhood conditions, provides a test-bed for measuring the impact of new technologies and policies, and responds to the changing needs and preferences of the local community….(More)”

Nudge 2.0


Philipp Hacker: “This essay is both a review of the excellent book “Nudge and the Law. A European Perspective”, edited by Alberto Alemanno and Anne-Lise Sibony, and an assessment of the major themes and challenges that the behavioural analysis of law will and should face in the immediate future.

The book makes important and novel contributions on a range of topics, at both a theoretical and a substantive level. Regarding theoretical issues, four themes stand out: First, it highlights the differences between the EU and the US nudging environments. Second, it questions the reliance on expertise in rulemaking. Third, it unveils behavioural trade-offs that have too long gone unnoticed in behavioural law and economics. And fourth, it discusses the requirement of the transparency of nudges and the related concept of autonomy. Furthermore, the different authors discuss the impact of behavioural regulation on a number of substantive fields of law: health and lifestyle regulation, privacy law, and the disclosure paradigm in private law.

This paper aims to take some of the book’s insights one step further in order to point at crucial challenges – and opportunities – for the future of the behavioural analysis of law. In recent years, the movement has gained tremendously in breadth and depth. It is now time to make it scientifically even more rigorous, e.g. by openly embracing empirical uncertainty and by moving beyond the neo-classical/behavioural dichotomy. Simultaneously, the field ought to discursively readjust its normative compass. Finally and perhaps most strikingly, however, the power of big data holds the promise of taking behavioural interventions to an entirely new level. If these challenges can be overcome, this paper argues, the intersection between law and behavioural sciences will remain one of the most fruitful approaches to legal analysis in Europe and beyond….(More)”

Big Data Privacy Scenarios


E. Bruce, K. Sollins, M. Vernon, and D. Weitzner at D-Space@MIT: “This paper is the first in a series on privacy in Big Data. As an outgrowth of a series of workshops on the topic, the Big Data Privacy Working Group undertook a study of a series of use scenarios to highlight the challenges to privacy that arise in the Big Data arena. This is a report on those scenarios. The deeper question explored by this exercise is what is distinctive about privacy in the context of Big Data. In addition, we discuss an initial list of issues for privacy that derive specifically from the nature of Big Data. These derive from observations across the real-world scenarios and use cases explored in this project as well as wider reading and discussions:

* Scale: The sheer size of the datasets leads to challenges in creating, managing and applying privacy policies.

* Diversity: The increased likelihood of more, and more diverse, participants in Big Data collection, management, and use leads to differing, and often contradictory, agendas and objectives.

* Integration: As data management technologies (e.g., cloud services, data lakes, and so forth) enable integration across datasets, new and often surprising opportunities for cross-product inferences will yield new information about individuals and their behaviors.

* Impact on secondary participants: Because many pieces of information reflect not only the targeted subject but also secondary, often unintended, participants, inferences and the resulting information will increasingly concern other people who were not originally considered subjects of privacy concerns and approaches.

* Need for emergent policies for emergent information: As inferences are drawn over merged data sets, emergent information or understanding will arise.

Although each individual data set may have its own privacy policies and enforcement mechanisms, it is not clear that the requisite emergent privacy policies, and the appropriate enforcement of them, can be developed automatically…(More)”

The multiple meanings of open government data: Understanding different stakeholders and their perspectives


Paper by Felipe Gonzalez-Zapata and Richard Heeks in Government Information Quarterly: “As a field of practice and research that is fast-growing and a locus for much attention and activity, open government data (OGD) has attracted stakeholders from a variety of origins. They bring with them a variety of meanings for OGD. The purpose of this paper is to show how the different stakeholders and their different perspectives on OGD can be analyzed in a given context. Taking Chile as an OGD exemplar, stakeholder analysis is used to identify and categorize stakeholder groups in terms of their relative power and interest as either primary (in this case, politicians, public officials, public sector practitioners, international organizations) or secondary (civil society activists, funding donors, ICT providers, academics). Stakeholder groups sometimes associated with OGD but absent from significant involvement in Chile – such as private-sector and citizen users – are also identified.

Four different perspectives on open government data – bureaucratic, political, technological, and economic – are identified from a literature review. Template analysis is used to analyze text – OGD-related reports, conference presentations, and interviews in Chile – in terms of those perspectives. This shows bureaucratic and political perspectives to be more dominant than the other two, and also some presence for a politico-economic perspective not identified from the original literature review. The information value chain is used to identify a “missing middle” in current Chilean OGD perspectives: a lack of connection between a reality of data provision and an aspiration of developmental results. This pattern of perspectives can be explained by the capacities and interests of key stakeholders, with those in turn being shaped by Chile’s history, politics, and institutions….(More)”
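The primary/secondary categorization by relative power and interest lends itself to a simple illustration. The sketch below is a toy power-interest grid, not the paper's actual instrument; the scores and threshold are invented for illustration:

```python
def classify_stakeholder(power, interest, threshold=0.5):
    """Toy power-interest grid: stakeholders scoring high on both
    relative power and interest are treated as 'primary'; all others
    as 'secondary'. Scores and the threshold are illustrative."""
    if power >= threshold and interest >= threshold:
        return "primary"
    return "secondary"

# Hypothetical scores (power, interest) for some of the Chilean OGD
# stakeholder groups named above; these numbers are assumptions.
stakeholders = {
    "politicians": (0.9, 0.8),
    "public officials": (0.8, 0.7),
    "civil society activists": (0.3, 0.9),
    "academics": (0.2, 0.6),
}
groups = {name: classify_stakeholder(p, i)
          for name, (p, i) in stakeholders.items()}
```

With these made-up scores, politicians and public officials land in the primary group, matching the categorization reported in the abstract.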

Open collaboration in the public sector: The case of social coding on GitHub


Paper by Ines Mergel at Government Information Quarterly: “Open collaboration has evolved as a new form of innovation creation in the public sector. Government organizations are using online platforms to collaboratively create or contribute to public sector innovations with the help of external and internal problem solvers. Most recently, the U.S. federal government has encouraged agencies to collaboratively create and share open source code on the social coding platform GitHub and allow third parties to share their changes to the code. A community of government employees is using the social coding site GitHub to share open source code for software and website development, distribution of data sets and research results, or to seek input to draft policy documents. Quantitative data extracted from GitHub’s application programming interface is used to analyze the collaboration ties between contributors to government repositories and their reuse of digital products developed on GitHub by other government entities in the U.S. federal government. In addition, qualitative interviews with government contributors in this social coding environment provide insights into new forms of co-development of open source digital products in the public sector….(More)”
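To give a sense of how such collaboration ties might be derived, here is a minimal sketch. It uses GitHub's real REST endpoint for listing repository contributors, but it is only an illustration, not the paper's actual data pipeline; unauthenticated API requests are heavily rate-limited, and the tie definition (co-contribution to the same repository) is an assumption:

```python
import json
from itertools import combinations
from urllib.request import urlopen

def fetch_contributors(owner, repo):
    # GitHub REST API: list the contributors to a repository.
    # NOTE: unauthenticated requests are rate-limited.
    url = f"https://api.github.com/repos/{owner}/{repo}/contributors"
    with urlopen(url) as resp:
        return [c["login"] for c in json.load(resp)]

def collaboration_ties(repo_contributors):
    # repo_contributors: dict mapping repository name -> contributor logins.
    # Two contributors are treated as "tied" (an illustrative assumption)
    # whenever they contributed to the same repository.
    ties = set()
    for contributors in repo_contributors.values():
        for pair in combinations(sorted(set(contributors)), 2):
            ties.add(pair)
    return ties
```

For example, if `alice` and `bob` both contributed to one repository and `bob` and `carol` to another, the resulting tie set links `alice`–`bob` and `bob`–`carol` but not `alice`–`carol`.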

Gamification and Sustainable Consumption: Overcoming the Limitations of Persuasive Technologies


Paper by Martina Z. Huber and Lorenz M. Hilty: “The current patterns of production and consumption in the industrialized world are not sustainable. The goods and services we consume cause resource extractions, greenhouse gas emissions and other environmental impacts that are already affecting the conditions of living on Earth. To support the transition toward sustainable consumption patterns, ICT applications that persuade consumers to change their behavior in a “green” direction have been developed in the field of Persuasive Technology (PT).

Such persuasive systems, however, have been criticized for two reasons. First, they are often based on the assumption that information (e.g., information on individual energy consumption) causes behavior change, or a change in awareness and attitude that then changes behavior. Second, PT approaches assume that the designer of the system starts from objective criteria for “sustainable” behavior and is able to operationalize them in the context of the application.

In this chapter, we are exploring the potential of gamification to overcome the limitations of persuasive systems. Gamification, the process of using game elements in a non-game context, opens up a broader design space for ICT applications created to support sustainable consumption. In particular, a gamification-based approach may give the user more autonomy in selecting goals and relating individual action to social interaction. The idea of gamification may also help designers to view the user’s actions in a broader context and to recognize the relevance of different motivational aspects of social interaction, such as competition and cooperation. Based on this discussion we define basic requirements to be used as guidance in gamification-based motivation design for sustainable consumption….(More)”

This free online encyclopedia has achieved what Wikipedia can only dream of


Nikhil Sonnad at Quartz: “The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.

Its creators have solved one of the internet’s fundamental problems: How to provide authoritative, rigorously accurate knowledge, at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.

The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.

The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.

The impossible trinity of information

The online SEP has humble beginnings. Edward Zalta, a philosopher at Stanford’s Center for the Study of Language and Information, launched it way back in September 1995, with just two entries.

Philosophizing, pre-internet. (Flickr/Erik Drost, CC-BY-2.0)

That makes it positively ancient in internet years. Even Wikipedia is only 14….

John Perry, the director of the center, was the one who first suggested a dictionary of philosophical terms. But Zalta had bigger ideas. He and two co-authors later described the challenge in a 2002 paper (pdf, p. 1):

A fundamental problem faced by the general public and the members of an academic discipline in the information age is how to find the most authoritative, comprehensive, and up-to-date information about an important topic.

That paper is so old that it mentions “CD-ROMs” in the second sentence. But for all the years that have passed, the basic problem remains unsolved. The three requirements the authors list—“authoritative, comprehensive, and up-to-date”—are to information what the “impossible trinity” is to economics. You can only ever have one or two at once. It is like having your cake, eating it, and then bringing it to another party.

Yet if the goal is to share with people what is true, it is extremely important for a resource to have all of these things. It must be trusted. It must not leave anything out. And it must reflect the latest state of knowledge. Unfortunately, all of the other current ways of designing an encyclopedia very badly fail to meet at least one of these requirements.

Where other encyclopedias fall short

Book

Authoritative: √

Comprehensive: X

Up-to-date: X

Printed encyclopedias: still a thing(Princeton University Press)

Printed books are authoritative: Readers trust articles they know have been written and edited by experts. Books also produce a coherent overview of a subject, as the editors consider how each entry fits into the whole. But they become obsolete whenever new research comes out. Nor can a book (or even a set of volumes) be comprehensive, except perhaps for a very narrow discipline; there’s simply too much to print.

Crowdsourcing

Authoritative: X

Comprehensive: X

Up-to-date: √

A crowdsourced online encyclopedia has the virtue of timeliness. Thanks to Wikipedia’s vibrant community of non-experts, its entries on breaking-news events are often updated as they happen. But except perhaps in a few areas in which enough well-informed people care for errors to get weeded out, Wikipedia is not authoritative. One math professor reviewed basic mathematics entries and found them to be “a hot mess of error, arrogance, obscurity, and nonsense.” Nor is it comprehensive: Though it has nearly 5 million articles in the English-language version alone, seemingly in every sphere of knowledge, fewer than 10,000 are “A-class” or better, the status awarded to articles considered “essentially complete.”

Speaking of holes, the SEP has a rather detailed entry on the topic of holes, and it rather nicely illustrates one of Wikipedia’s key shortcomings. Holes present a tricky philosophical problem, the SEP entry explains: A hole is nothing, but we refer to it as if it were something. (Achille Varzi, the author of the holes entry, was called upon in the US presidential election in 2000 to weigh in on the existential status of hanging chads.) If you ask Wikipedia for holes, it gives you the young-adult novel Holes and the band Hole.

In other words, holes as philosophical notions are too abstract for a crowdsourced venue that favors clean, factual statements like a novel’s plot or a band’s discography. Wikipedia’s bottom-up model could never produce an entry on holes like the SEP’s.

Crowdsourcing + voting

Authoritative: ?

Comprehensive: X

Up-to-date: ?

A variation on the wiki model is question-and-answer sites like Quora (general interest) and StackOverflow (computer programming), on which users can pose questions and write answers. These are slightly more authoritative than Wikipedia, because users also vote answers up or down according to how helpful they find them; and because answers are given by single, specific users, who are encouraged to say why they’re qualified (“I’m a UI designer at Google,” say).

But while there are sometimes ways to check people’s accreditation, it’s largely self-reported and unverified. Moreover, these sites are far from comprehensive. Any given answer is only as complete as its writer decides or is able to make it. And the questions asked and answered tend to reflect the interests of the sites’ users, which in both Quora and StackOverflow’s cases skew heavily male, American, and techie.

Moreover, the sites aren’t up-to-date. While they may respond quickly to new events, answers that become outdated aren’t deleted or changed but stay there, burdening the site with a growing mass of stale information.

The Stanford solution

So is the impossible trinity just that—impossible? Not according to Zalta. He imagined a different model for the SEP: the “dynamic reference work.”

Dynamic reference work

Authoritative: √

Comprehensive: √

Up-to-date: √

To achieve authority, several dozen subject editors—responsible for broad areas like “ancient philosophy” or “formal epistemology”—identify topics in need of coverage, and invite qualified philosophers to write entries on them. If the invitation is accepted, the author sends an outline to the relevant subject editors.

“An editor works with the author to get an optimal outline before the author begins to write,” says Susanna Siegel, subject editor for philosophy of mind. “Sometimes there is a lot of back and forth at this stage.” Editors may also reject entries. Zalta and Uri Nodelman, the SEP’s senior editor, say that this almost never happens. In the rare cases when it does, the reason is usually that an entry is overly biased. In short, this is not somebody randomly deciding to answer a question on Quora.

An executive editorial board—Zalta, Nodelman, and Colin Allen—works to make the SEP comprehensive….(More)”

Collective Intelligence Meets Medical Decision-Making


Paper by Max Wolf, Jens Krause, Patricia A. Carney, Andy Bogart, Ralf H. Kurvers indicating that “The Collective Outperforms the Best Radiologist”: “While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence….(More)”
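The three rules aggregate binary recall votes (1 = recall the patient, 0 = do not). The excerpt does not give the paper's exact thresholds or weighting scheme, so the parameters below are illustrative:

```python
def majority(votes):
    # Recall the patient if more than half of the radiologists vote to recall.
    return sum(votes) > len(votes) / 2

def quorum(votes, k):
    # Recall if at least k radiologists vote to recall; k = 1 is maximally
    # sensitive, k = len(votes) maximally specific.
    return sum(votes) >= k

def weighted_quorum(votes, weights, threshold):
    # Like quorum, but each radiologist's vote counts in proportion to a
    # weight (e.g., past accuracy, an assumption here); recall if the
    # weighted sum of recall votes reaches the threshold.
    return sum(w for v, w in zip(votes, weights) if v) >= threshold
```

Under the quorum rule, lowering k trades specificity for sensitivity, which is why the choice of threshold matters for the balance of true and false positives the authors report.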