The Quantified Community and Neighborhood Labs: A Framework for Computational Urban Planning and Civic Technology Innovation


Constantine E. Kontokosta: “This paper presents the conceptual framework and justification for a “Quantified Community” (QC) and a networked experimental environment of neighborhood labs. The QC is a fully instrumented urban neighborhood that uses an integrated, expandable, and participatory sensor network to support the measurement, integration, and analysis of neighborhood conditions, social interactions and behavior, and sustainability metrics, in support of public decision-making. Through a diverse range of sensor and automation technologies — combined with existing data generated through administrative records, surveys, social media, and mobile sensors — information on human, physical, and environmental elements can be processed in real time to better understand the interaction and effects of the built environment on human well-being and outcomes. The goal is to create an “informatics overlay” that can be incorporated into future urban development and planning, one that supports the benchmarking and evaluation of neighborhood conditions, provides a test-bed for measuring the impact of new technologies and policies, and responds to the changing needs and preferences of the local community….(More)”

Nudge 2.0


Philipp Hacker: “This essay is both a review of the excellent book “Nudge and the Law. A European Perspective”, edited by Alberto Alemanno and Anne-Lise Sibony, and an assessment of the major themes and challenges that the behavioural analysis of law will and should face in the immediate future.

The book makes important and novel contributions on a range of topics, at both a theoretical and a substantive level. Regarding theoretical issues, four themes stand out: First, it highlights the differences between the EU and the US nudging environments. Second, it questions the reliance on expertise in rulemaking. Third, it unveils behavioural trade-offs that have too long gone unnoticed in behavioural law and economics. And fourth, it discusses the requirement of the transparency of nudges and the related concept of autonomy. Furthermore, the different authors discuss the impact of behavioural regulation on a number of substantive fields of law: health and lifestyle regulation, privacy law, and the disclosure paradigm in private law.

This paper aims to take some of the book’s insights one step further in order to point at crucial challenges – and opportunities – for the future of the behavioural analysis of law. In recent years, the movement has gained tremendously in breadth and depth. It is now time to make it scientifically even more rigorous, e.g., by openly embracing empirical uncertainty and by moving beyond the neo-classical/behavioural dichotomy. Simultaneously, the field ought to discursively readjust its normative compass. Finally, and perhaps most strikingly, the power of big data holds the promise of taking behavioural interventions to an entirely new level. If these challenges can be overcome, this paper argues, the intersection between law and behavioural sciences will remain one of the most fruitful approaches to legal analysis in Europe and beyond….(More)”

Big Data Privacy Scenarios


E. Bruce, K. Sollins, M. Vernon, and D. Weitzner at D-Space@MIT: “This paper is the first in a series on privacy in Big Data. As an outgrowth of a series of workshops on the topic, the Big Data Privacy Working Group undertook a study of use scenarios to highlight the challenges to privacy that arise in the Big Data arena. This is a report on those scenarios. The deeper question explored by this exercise is what is distinctive about privacy in the context of Big Data. In addition, we discuss an initial list of issues for privacy that derive specifically from the nature of Big Data. These emerge from observations across the real-world scenarios and use cases explored in this project, as well as wider reading and discussions:

* Scale: The sheer size of the datasets leads to challenges in creating, managing and applying privacy policies.

* Diversity: The increased likelihood of more, and more diverse, participants in Big Data collection, management, and use leads to differing, and by nature often contradictory, agendas and objectives.

* Integration: As data management technologies expand (e.g., cloud services, data lakes, and so forth), integration across datasets, with its new and often surprising opportunities for cross-product inferences, will yield new information about individuals and their behaviors.

* Impact on secondary participants: Because many pieces of information reflect not only the targeted subject but also secondary, often unintended, participants, inferences and resulting information will increasingly concern people who were not originally considered subjects of privacy concerns and approaches.

* Need for emergent policies for emergent information: As inferences are drawn over merged datasets, emergent information or understanding will arise.

Although each unique dataset may have existing privacy policies and enforcement mechanisms, it is not clear that the requisite and appropriate emergent privacy policies, and appropriate enforcement of them, can be developed automatically…(More)”

The multiple meanings of open government data: Understanding different stakeholders and their perspectives


Paper by Felipe Gonzalez-Zapata and Richard Heeks in Government Information Quarterly: “As a field of practice and research that is fast-growing and a locus for much attention and activity, open government data (OGD) has attracted stakeholders from a variety of origins. They bring with them a variety of meanings for OGD. The purpose of this paper is to show how the different stakeholders and their different perspectives on OGD can be analyzed in a given context. Taking Chile as an OGD exemplar, stakeholder analysis is used to identify and categorize stakeholder groups in terms of their relative power and interest as either primary (in this case, politicians, public officials, public sector practitioners, international organizations) or secondary (civil society activists, funding donors, ICT providers, academics). Stakeholder groups sometimes associated with OGD but absent from significant involvement in Chile, such as private-sector and citizen users, are also identified.

Four different perspectives on open government data – bureaucratic, political, technological, and economic – are identified from a literature review. Template analysis is used to analyze text – OGD-related reports, conference presentations, and interviews in Chile – in terms of those perspectives. This shows the bureaucratic and political perspectives to be more dominant than the other two, and also reveals some presence of a politico-economic perspective not identified in the original literature review. The information value chain is used to identify a “missing middle” in current Chilean OGD perspectives: a lack of connection between the reality of data provision and the aspiration of developmental results. This pattern of perspectives can be explained by the capacities and interests of key stakeholders, which are in turn shaped by Chile’s history, politics, and institutions….(More)”

Open collaboration in the public sector: The case of social coding on GitHub


Paper by Ines Mergel at Government Information Quarterly: “Open collaboration has evolved as a new form of innovation creation in the public sector. Government organizations are using online platforms to collaboratively create, or contribute to, public sector innovations with the help of external and internal problem solvers. Most recently, the U.S. federal government has encouraged agencies to collaboratively create and share open source code on the social coding platform GitHub and to allow third parties to share their changes to the code. A community of government employees is using the social coding site GitHub to share open source code for software and website development, to distribute data sets and research results, and to seek input on draft policy documents. Quantitative data extracted from GitHub’s application programming interface is used to analyze the collaboration ties between contributors to government repositories and their reuse of digital products developed on GitHub by other government entities in the U.S. federal government. In addition, qualitative interviews with government contributors in this social coding environment provide insights into new forms of co-development of open source digital products in the public sector….(More)”
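
The data-collection step Mergel describes is easy to picture in outline. Below is a minimal sketch, not the paper’s actual pipeline, that pulls the contributor list for a single government repository from GitHub’s public REST API; the repository name is illustrative, and unauthenticated calls are rate-limited (a token would be needed at scale):

```python
import requests

# Minimal sketch, not the paper's pipeline: list the contributors to one
# government repository via GitHub's public REST API.
# The repository name is illustrative; any "owner/repo" string works.
REPO = "GSA/data.gov"  # example of a U.S. federal repository

def contributors(repo):
    """Return (login, commit_count) pairs for each contributor to `repo`."""
    url = f"https://api.github.com/repos/{repo}/contributors"
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()  # fails loudly on rate limits or bad repo names
    return [(c["login"], c["contributions"]) for c in resp.json()]

if __name__ == "__main__":
    for login, commits in contributors(REPO):
        print(f"{login}: {commits} commits")
```

Repeating this call across agency repositories and intersecting the contributor lists yields the raw material for the kind of collaboration-tie analysis the abstract describes.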

Gamification and Sustainable Consumption: Overcoming the Limitations of Persuasive Technologies


Paper by Martina Z. Huber and Lorenz M. Hilty: “The current patterns of production and consumption in the industrialized world are not sustainable. The goods and services we consume cause resource extractions, greenhouse gas emissions and other environmental impacts that are already affecting the conditions of living on Earth. To support the transition toward sustainable consumption patterns, ICT applications that persuade consumers to change their behavior in a “green” direction have been developed in the field of Persuasive Technology (PT).

Such persuasive systems, however, have been criticized for two reasons. First, they are often based on the assumption that information (e.g., information on individual energy consumption) causes behavior change, or a change in awareness and attitude that then changes behavior. Second, PT approaches assume that the designer of the system starts from objective criteria for “sustainable” behavior and is able to operationalize them in the context of the application.

In this chapter, we explore the potential of gamification to overcome the limitations of persuasive systems. Gamification, the process of using game elements in a non-game context, opens up a broader design space for ICT applications created to support sustainable consumption. In particular, a gamification-based approach may give the user more autonomy in selecting goals and relating individual action to social interaction. The idea of gamification may also help designers to view the user’s actions in a broader context and to recognize the relevance of different motivational aspects of social interaction, such as competition and cooperation. Based on this discussion, we define basic requirements to be used as guidance in gamification-based motivation design for sustainable consumption….(More)”

This free online encyclopedia has achieved what Wikipedia can only dream of


Nikhil Sonnad at Quartz: “The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.

Its creators have solved one of the internet’s fundamental problems: how to provide authoritative, rigorously accurate knowledge at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.

The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.

The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.

The impossible trinity of information

The online SEP has humble beginnings. Edward Zalta, a philosopher at Stanford’s Center for the Study of Language and Information, launched it way back in September 1995, with just two entries.

Philosophizing, pre-internet. (Flickr/Erik Drost, CC BY 2.0)

That makes it positively ancient in internet years. Even Wikipedia is only 14….

John Perry, the director of the center, was the one who first suggested a dictionary of philosophical terms. But Zalta had bigger ideas. He and two co-authors later described the challenge in a 2002 paper (pdf, p. 1):

A fundamental problem faced by the general public and the members of an academic discipline in the information age is how to find the most authoritative, comprehensive, and up-to-date information about an important topic.

That paper is so old that it mentions “CD-ROMs” in the second sentence. But for all the years that have passed, the basic problem remains unsolved. The three requirements the authors list—“authoritative, comprehensive, and up-to-date”—are to information what the “impossible trinity” is to economics. You can only ever have one or two at once. It is like having your cake, eating it, and then bringing it to another party.

Yet if the goal is to share with people what is true, it is extremely important for a resource to have all of these things. It must be trusted. It must not leave anything out. And it must reflect the latest state of knowledge. Unfortunately, all of the other current ways of designing an encyclopedia very badly fail to meet at least one of these requirements.

Where other encyclopedias fall short

Book
Authoritative: √ · Comprehensive: X · Up-to-date: X

Printed encyclopedias: still a thing. (Princeton University Press)

Printed books are authoritative: Readers trust articles they know have been written and edited by experts. Books also produce a coherent overview of a subject, as the editors consider how each entry fits into the whole. But they become obsolete whenever new research comes out. Nor can a book (or even a set of volumes) be comprehensive, except perhaps for a very narrow discipline; there’s simply too much to print.

Crowdsourcing
Authoritative: X · Comprehensive: X · Up-to-date: √

A crowdsourced online encyclopedia has the virtue of timeliness. Thanks to Wikipedia’s vibrant community of non-experts, its entries on breaking-news events are often updated as they happen. But except perhaps in a few areas in which enough well-informed people care for errors to get weeded out, Wikipedia is not authoritative. One math professor reviewed basic mathematics entries and found them to be “a hot mess of error, arrogance, obscurity, and nonsense.” Nor is it comprehensive: Though it has nearly 5 million articles in the English-language version alone, seemingly in every sphere of knowledge, fewer than 10,000 are “A-class” or better, the status awarded to articles considered “essentially complete.”

Speaking of holes, the SEP has a rather detailed entry on the topic of holes, and it rather nicely illustrates one of Wikipedia’s key shortcomings. Holes present a tricky philosophical problem, the SEP entry explains: A hole is nothing, but we refer to it as if it were something. (Achille Varzi, the author of the holes entry, was called upon during the 2000 US presidential election to weigh in on the existential status of hanging chads.) If you ask Wikipedia for holes, it gives you the young-adult novel Holes and the band Hole.

In other words, holes as philosophical notions are too abstract for a crowdsourced venue that favors clean, factual statements like a novel’s plot or a band’s discography. Wikipedia’s bottom-up model could never produce an entry on holes like the SEP’s.

Crowdsourcing + voting
Authoritative: ? · Comprehensive: X · Up-to-date: ?

A variation on the wiki model is question-and-answer sites like Quora (general interest) and StackOverflow (computer programming), on which users can pose questions and write answers. These are slightly more authoritative than Wikipedia, because users also vote answers up or down according to how helpful they find them; and because answers are given by single, specific users, who are encouraged to say why they’re qualified (“I’m a UI designer at Google,” say).

But while there are sometimes ways to check people’s accreditation, it’s largely self-reported and unverified. Moreover, these sites are far from comprehensive. Any given answer is only as complete as its writer decides or is able to make it. And the questions asked and answered tend to reflect the interests of the sites’ users, which in both Quora and StackOverflow’s cases skew heavily male, American, and techie.

Moreover, the sites aren’t up-to-date. While they may respond quickly to new events, answers that become outdated aren’t deleted or changed but stay there, burdening the site with a growing mass of stale information.

The Stanford solution

So is the impossible trinity just that—impossible? Not according to Zalta. He imagined a different model for the SEP: the “dynamic reference work.”

Dynamic reference work
Authoritative: √ · Comprehensive: √ · Up-to-date: √

To achieve authority, several dozen subject editors—responsible for broad areas like “ancient philosophy” or “formal epistemology”—identify topics in need of coverage, and invite qualified philosophers to write entries on them. If the invitation is accepted, the author sends an outline to the relevant subject editors.

“An editor works with the author to get an optimal outline before the author begins to write,” says Susanna Siegel, subject editor for philosophy of mind. “Sometimes there is a lot of back and forth at this stage.” Editors may also reject entries. Zalta and Uri Nodelman, the SEP’s senior editor, say that this almost never happens. In the rare cases when it does, the reason is usually that an entry is overly biased. In short, this is not somebody randomly deciding to answer a question on Quora.

An executive editorial board—Zalta, Nodelman, and Colin Allen—works to make the SEP comprehensive….(More)”

Collective Intelligence Meets Medical Decision-Making


Paper by Max Wolf, Jens Krause, Patricia A. Carney, Andy Bogart, and Ralf H. Kurvers, indicating that “The Collective Outperforms the Best Radiologist”: “While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence….(More)”
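
Stated in code, the three rules are simple aggregations over binary recall decisions. Here is a minimal sketch, not the authors’ implementation; the quorum threshold and the weights (which might, for instance, reflect each reader’s past accuracy) are illustrative assumptions:

```python
# Minimal sketch of the three aggregation rules, not the authors' code.
# `assessments` holds each radiologist's independent decision for one
# mammogram: True = recall the patient for workup, False = do not recall.

def majority(assessments):
    """Recall if more than half of the radiologists vote to recall."""
    return sum(assessments) > len(assessments) / 2

def quorum(assessments, k):
    """Recall if at least k radiologists vote to recall (k is a tunable threshold)."""
    return sum(assessments) >= k

def weighted_quorum(assessments, weights, t):
    """Recall if the summed weight of recall votes reaches threshold t."""
    total = sum(w for vote, w in zip(assessments, weights) if vote)
    return total >= t

# Example: five readers, three of whom vote to recall.
votes = [True, True, True, False, False]
print(majority(votes))                # True (3 of 5)
print(quorum(votes, k=2))             # True (3 >= 2)
print(weighted_quorum(votes, [0.9, 0.8, 0.6, 0.7, 0.5], t=2.0))  # 2.3 >= 2.0 -> True
```

Note that with equal weights of 1.0 and t = k, the weighted quorum reduces to the plain quorum rule, which in turn reduces to majority when k is just over half the group size.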

The Data Revolution for Sustainable Development


Jeffrey D. Sachs at Project Syndicate: “There is growing recognition that the success of the Sustainable Development Goals (SDGs), which will be adopted on September 25 at a special United Nations summit, will depend on the ability of governments, businesses, and civil society to harness data for decision-making…

One way to improve data collection and use for sustainable development is to create an active link between the provision of services and the collection and processing of data for decision-making. Take health-care services. Every day, in remote villages of developing countries, community health workers help patients fight diseases (such as malaria), get to clinics for checkups, receive vital immunizations, obtain diagnoses (through telemedicine), and access emergency aid for their infants and young children (such as for chronic under-nutrition). But the information from such visits is usually not collected, and even if it is put on paper, it is never used again.

We now have a much smarter way to proceed. Community health workers are increasingly supported by smartphone applications, which they can use to log patient information at each visit. That information can go directly onto public-health dashboards, which health managers can use to spot disease outbreaks, failures in supply chains, or the need to bolster technical staff. Such systems can provide a real-time log of vital events, including births and deaths, and even use so-called verbal autopsies to help identify causes of death. And, as part of electronic medical records, the information can be used at future visits to the doctor or to remind patients of the need for follow-up visits or medical interventions….
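
As a rough illustration of the pipeline Sachs describes, consider a minimal visit record and a naive dashboard check; the schema, field names, and threshold below are assumptions for the sketch, not any real system’s design:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: a minimal visit record of the kind a community
# health worker's phone app might log, plus a naive dashboard check that
# flags a village when diagnoses of a disease reach a threshold.

@dataclass
class Visit:
    village: str
    visit_date: date
    diagnosis: str  # e.g. "malaria", "under-nutrition"

def flag_outbreaks(visits, disease, threshold):
    """Return villages where `disease` was diagnosed at least `threshold` times."""
    counts = Counter(v.village for v in visits if v.diagnosis == disease)
    return [village for village, n in counts.items() if n >= threshold]

visits = [
    Visit("Kambi", date(2015, 9, 1), "malaria"),
    Visit("Kambi", date(2015, 9, 2), "malaria"),
    Visit("Kambi", date(2015, 9, 3), "malaria"),
    Visit("Tala", date(2015, 9, 2), "under-nutrition"),
]
print(flag_outbreaks(visits, "malaria", threshold=3))  # ['Kambi']
```
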
Fortunately, the information and communications technology revolution and the spread of broadband coverage nearly everywhere can quickly make such time lags a thing of the past. As indicated in the report A World that Counts: Mobilizing the Data Revolution for Sustainable Development, we must modernize the practices used by statistical offices and other public agencies, while tapping into new sources of data in a thoughtful and creative way that complements traditional approaches.

Through more effective use of smart data – collected during service delivery, economic transactions, and remote sensing – the fight against extreme poverty will be bolstered; the global energy system will be made much more efficient and less polluting; and vital services such as health and education will be made far more effective and accessible.

With this breakthrough in sight, several governments, including that of the United States, as well as businesses and other partners, have announced plans to launch a new “Global Partnership for Sustainable Development Data” at the UN this month. The new partnership aims to strengthen data collection and monitoring efforts by raising more funds, encouraging knowledge-sharing, addressing key barriers to access and use of data, and identifying new big-data strategies to upgrade the world’s statistical systems.

The UN Sustainable Development Solutions Network will support the new Global Partnership by creating a new Thematic Network on Data for Sustainable Development, which will bring together leading data scientists, thinkers, and academics from across multiple sectors and disciplines to form a center of data excellence….(More)”

Sustainable Value of Open Government Data


PhD thesis by Thorhildur Jetzek: “The impact of the digital revolution on our societies can be compared to the ripples caused by a stone thrown into water: spreading outwards and affecting a larger and larger part of our lives with every year that passes. One of the many effects of this revolution is the emergence of an unprecedented amount of digital data that is accumulating exponentially. Moreover, a central affordance of digitization is the ability to distribute, share and collaborate, and we have thus seen an “open theme” gaining currency in recent years. These trends are reflected in the explosion of Open Data Initiatives (ODIs) around the world. However, while hundreds of national and local governments have established open data portals, there is a general feeling that these ODIs have not yet lived up to their true potential. This feeling is not without good reason; the recent Open Data Barometer report highlights that strong evidence on the impacts of open government data is almost universally lacking (Davies, 2013). This lack of evidence is disconcerting for government organizations that have already expended money on opening data, and might even result in the termination of some ODIs. It also raises relevant questions regarding the nature of value generation in the context of free data and the sharing of information over networks. Do we have the right methods, the right intellectual tools, to understand and reflect the value that is generated in such ecosystems?

This PhD study addresses the question of How is value generated from open data? through a mixed methods, macro-level approach. For the qualitative analysis, I have conducted two longitudinal case studies in two different contexts. The first is the case of the Basic Data Program (BDP), which is a Danish ODI. For this case, I studied the supply-side of open data publication, from the creation of open data strategy towards the dissemination and use of data. The second case is a demand-side study on the energy tech company Opower. Opower has been an open data user for many years and have used open data to create and disseminate personalized information on energy use. This information has already contributed to a measurable world-wide reduction in CO2 emissions as well as monetary savings. Furthermore, to complement the insights from these two cases I analyzed quantitative data from 76 countries over the years 2012 and 2013. I have used these diverse sources of data to uncover the most important relationships or mechanisms, that can explain how open data are used to generate sustainable value….(More)”