Index: Designing for Behavior Change


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on designing for behavior change and was originally published in 2014.

  • Year the Behavioural Insights or “Nudge” Team was established by David Cameron in the U.K.: 2010
  • Amount saved per year by the U.K. Courts Service, since the creation of the Nudge unit, by sending personalized text messages that persuade people owing fines to pay promptly: £30m
    • Entire budget for the Behavioural Insights Team: less than £1 million
    • Estimated reduction in bailiff interventions through the use of personalized text reminders: 150,000 fewer interventions annually
  • Increase in on-time tax payment among British residents who received a letter saying that most citizens in their neighborhood pay their taxes on time: 15%
  • Estimated increase in organ-donor registrations in the U.K. if people are asked “If you needed an organ transplant, would you take one?”: 96,000
  • Proportion of employees who now have a workplace pension since the U.K. government switched enrollment from opt-in to opt-out (illustrating the power of defaults): 83%, up from 63% before the switch
  • Increase in 401(k) enrollment rates within the U.S. by changing the default from ‘opt in’ to ‘opt out’: from 13% to 80%
  • Behavioral studies have shown that consumers overestimate the savings from credit cards with no annual fees. Reduction in overall borrowing costs to consumers under the 2009 CARD Act in the U.S., which requires card issuers to tell consumers how much they would pay in fees and interest: 1.7% of average daily balances
  • Many high school students and their families in the U.S. find financial aid forms for college complex and thus delay filling them out. Increase in college enrollment when an H&R Block tax professional helped families complete the FAFSA financial aid form and then provided immediate estimates of the aid the student was eligible for and the net tuition cost of four nearby public colleges: 26%
  • How much more likely people are to keep accounting records, calculate monthly revenues, and separate their home and business books when given “rules of thumb”-based training in managing their finances, according to a randomized controlled trial conducted in a bank in the Dominican Republic: 10%
  • Elderly Americans must choose from over 40 options when enrolling in Medicare Part D private drug plans. Increase in the share who switched plans to save money after receiving a letter with information about three plans that would be cheaper for them: almost double
    • The amount saved on average per person by switching plans due to this intervention: $150 per year
  • Increase in prescriptions to manage cardiac disease when Medicaid enrollees are sent a suite of behavioral nudges, such as a more salient description of the consequences of remaining untreated and post-it note reminders, during an experiment in the U.S.: 78%
  • Reduction in street litter when a trail of green footprints leading to nearby garbage cans is stenciled on the ground, during an experiment in Copenhagen, Denmark: 46%
  • Reduction in missed National Health Service appointments in the U.K. when patients are asked to fill out their own appointment cards: 18%
    • Reduction in missed appointments when patients are also made aware of the number of people who attend their appointments on time: 31%
    • The cost of non-attendance per year for the National Health Service: £700m 
  • Proportion of people in a U.S. experiment who chose to ‘downsize’ their meals when asked, regardless of whether they received a discount for the smaller portion: 14-33%
    • Average reduction in calories as a result of downsizing: 200
  • Proportion of households in the U.K. without properly insulated attics, leading to high energy consumption and bills: 40%
    • Result of offering group discounts to motivate households to insulate their attics: no effect
    • Increase in households that agreed to insulate their attics when offered a loft-clearing service, even though they had to pay for it: 4.8-fold

New Programming Language Removes Human Error from Privacy Equation


MIT Technology Review: “Anytime you hear about Facebook inadvertently making your location public, or revealing who is stalking your profile, it’s likely because a programmer added code that led to a bug.
But what if there was a system in place that could substantially reduce such privacy breaches and effectively remove human error from the equation?
One MIT PhD thinks she has the answer, and its name is Jeeves.
This past month, Jean Yang released an open-source Python version of “Jeeves,” a programming language with built-in privacy features, freeing programmers from the ad-hoc, on-the-go maintenance of privacy settings.
Given that somewhere between 10 and 20 percent of all code is related to privacy policy, Yang thinks that Jeeves will be an attractive option for social app developers who are looking to be more efficient in their use of programmer resources – as well as those hoping to assuage users’ concerns about if and how their data is used.
For more information about Jeeves visit the project site.
For more information on Yang visit her CSAIL page.”
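
How Jeeves manages this is worth a sketch. Its core mechanism is the “faceted value”: a value that carries both a secret and a public view, with a policy deciding which view a given observer gets. The Python below illustrates the idea only; it is not the actual Jeeves API, and all the names in it are hypothetical.

```python
# Illustrative sketch of faceted values, the mechanism behind Jeeves.
# This is not the real Jeeves API; all names here are hypothetical.

class Faceted:
    """A value with a secret facet and a public facet. The policy is a
    predicate over the observer that decides which facet they may see."""

    def __init__(self, secret, public, policy):
        self.secret = secret
        self.public = public
        self.policy = policy

    def concretize(self, observer):
        """Resolve the faceted value for a concrete observer."""
        return self.secret if self.policy(observer) else self.public

# A user's GPS location is visible only to their friends.
friends_of_alice = {"bob"}
location = Faceted(
    secret=(40.7128, -74.0060),        # real coordinates
    public="somewhere in New York",    # coarse public view
    policy=lambda viewer: viewer in friends_of_alice,
)

print(location.concretize("bob"))      # (40.7128, -74.006)
print(location.concretize("mallory"))  # 'somewhere in New York'
```

Jeeves itself aims to carry facets like these through arbitrary computations and apply the policy at output time, so a forgotten permission check in application code cannot leak the secret view.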

The FDA is making adverse event and recall data available to app developers


In FierceBiotechIT: “When Beth Noveck arrived at the White House she had a clear, albeit unusual, mission: to apply the transparency and collaboration of the open-source movement to government. Noveck has now left the White House, but the ideas she brought are still percolating through the governmental machine. In 2014, that thinking is set to lead to a new, more open FDA.
Regulatory Focus reports the agency has quietly created a website and initiative called openFDA. At this stage the project is still in the prelaunch phase, but the FDA has already given a teaser of its plans. When the program opens for beta access later this year, users will gain access to structured data sets as application programming interfaces (APIs) and raw downloads. The ultimate scope of the project is unclear, but for now the FDA is working on making three data sets available.
The three data sets will give users unprecedented access to FDA archives of adverse events, product recalls and label information. Together the three data sets represent a substantial slice of what many people want to know about the FDA. The adverse event database contains details of millions of side effects and medication errors, while the recall information the FDA is preparing to share gathers all the public notices of products withdrawn from the market.
Making the data available as an API–a way for machines to talk to each other–means third parties can use the information as the basis for apps. The experience of the National Aeronautics and Space Administration (NASA) gives some indication of what might happen once the FDA opens up its data. One year after making its data available as an API in 2011, NASA began holding an annual Space Apps Challenge. At the event, people create apps and APIs.
Some challenges have no obvious use for NASA, such as a project to make a 3D printed model of the dark side of the moon from NASA data. Others could clearly be the starting point for technology used by the space agency. In one challenge, teams were tasked with creating a miniaturized modular research satellite for use on Mars. NASA is working to the same White House digital playbook as the FDA. How the FDA interprets the broad goals in the drug regulation arena remains to be seen.
– read the Regulatory Focus article
– here’s the openFDA page
– check out NASA’s challenges
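
To make the API idea concrete: once openFDA opens for beta access, a third-party app might query an adverse-event endpoint with a few lines of code. The sketch below is hypothetical; the project was still prelaunch at the time of writing, so the URL and field names are assumptions, not documented behavior.

```python
# Hypothetical query against an openFDA-style adverse-event API.
# Endpoint and field names are assumptions, not documented behavior.
import json
import urllib.request

BASE = "https://api.fda.gov/drug/event.json"   # assumed endpoint
query = "?search=patient.reaction.reactionmeddrapt:headache&limit=5"

with urllib.request.urlopen(BASE + query) as resp:
    data = json.load(resp)

# Print the receipt date of each matching adverse-event report.
for report in data.get("results", []):
    print(report.get("receivedate"))
```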

What makes a good API?


Joshua Tauberer’s Blog: “There comes a time in every dataset’s life when it wants to become an API. That might be because of consumer demand or an executive order. How are you going to make a good one?…
Let’s take the common case where you have a relatively static, large dataset that you want to provide read-only access to. Here are 19 common attributes of good APIs for this situation. …
Granular Access. If the user wanted the whole thing they’d download it in bulk, so an API must be good at providing access to the most granular level practical for data users (h/t Ben Balter for the wording on that). When the data comes from a table, this usually means the ability to read a small slice of it using filters, sorting, and paging (limit/offset), the ability to get a single row by identifying it with a persistent, unique identifier (usually a numeric ID), and the ability to select just which fields should be included in the result output (good for optimizing bandwidth in mobile apps, h/t Eric Mill). (But see “intents” below.)
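
As a concrete illustration, a granular request combining a filter, sorting, paging, and field selection might be assembled like this (the parameter names are illustrative, not a standard):

```python
# Hypothetical granular API request: filter, sort, page, select fields.
from urllib.parse import urlencode

params = {
    "state": "NY",             # filter on one column
    "sort": "-date",           # sort, newest first
    "limit": 20,               # page size
    "offset": 40,              # skip the first two pages
    "fields": "id,name,date",  # return only these fields
}
url = "https://api.example.gov/v1/records?" + urlencode(params)
print(url)
# https://api.example.gov/v1/records?state=NY&sort=-date&limit=20&offset=40&fields=id%2Cname%2Cdate
```
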
Deep Filtering. An API should be good at needle-in-haystack problems. Full text search is hard to do, so an API that can do it relieves a big burden for developers — if your API has any big text fields. Filters that can span relations or cross tables (i.e. joins) can be very helpful as well. But don’t go overboard. (Again, see “intents” below.)
Typed Values. Response data should be typed. That means that whether a field’s value is an integer, text, list, floating-point number, dictionary, null, or date should be encoded as a part of the value itself. JSON and XML with XSD are good at this. CSV and plain XML, on the other hand, are totally untyped. Types must be strictly enforced. Columns must choose a data type and stick with it, no exceptions. When encoding other sorts of data as text, the values must all absolutely be valid according to the most narrow regular expression that you can make. Provide that regular expression to the API users in documentation.
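
For example, here is one hypothetical record rendered as typed JSON, next to the untyped CSV line it would otherwise be:

```python
# One record, typed (JSON) versus untyped (CSV). Fields are hypothetical.
import json

record = {
    "id": 4102,                    # integer
    "name": "Main St. Bridge",     # text
    "inspected": "2014-01-15",     # date as text; document the pattern
                                   # ^\d{4}-\d{2}-\d{2}$ for API users
    "load_limit_tons": 22.5,       # floating-point number
    "closed": False,               # boolean
    "notes": None,                 # null, not the string "null"
}
print(json.dumps(record))

# The same row in CSV is undifferentiated text the consumer must guess at:
# 4102,Main St. Bridge,2014-01-15,22.5,false,
```
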
Normalize Tables, Then Denormalize. Normalization is the process of removing redundancy from tables by making multiple tables. You should do that. Have lots of primary keys that link related tables together. But… then… denormalize. The bottleneck of most APIs isn’t disk space but speed. Queries over denormalized tables are much faster than writing queries with JOINs over multiple tables. It’s faster to get data if it’s all in one response than if the user has to issue multiple API calls (across multiple tables) to get it. You still have to normalize first, though. Denormalized data is hard to understand and hard to maintain.
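
A minimal sketch of that normalize-then-denormalize workflow, using Python’s built-in sqlite3 with a hypothetical schema:

```python
# Normalize first, then denormalize for serving. Schema is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Normalized: city data lives in exactly one place.
    CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT, state TEXT);
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT,
                         city_id INTEGER REFERENCES cities(id));
    INSERT INTO cities VALUES (1, 'Raleigh', 'NC');
    INSERT INTO people VALUES (10, 'Ada', 1);

    -- Denormalized table the API serves: one row per answer, no JOIN
    -- needed at query time.
    CREATE TABLE people_api AS
        SELECT p.id, p.name, c.name AS city, c.state
        FROM people p JOIN cities c ON p.city_id = c.id;
""")
print(db.execute("SELECT * FROM people_api").fetchall())
# [(10, 'Ada', 'Raleigh', 'NC')]
```
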
  • Be RESTful, And More. “REST” is a set of practices. There are whole books on this. Here it is in short. Every object named in the data (often that’s the rows of the table) gets its own URL. Hierarchical relationships in the data are turned into nice URL paths with slashes. Put the URLs of related resources in output too (HATEOAS, h/t Ed Summers). Use HTTP GET and normal query string processing (a=x&b=y) for filtering, sorting, and paging. The idea of REST is that these are patterns already familiar to developers, and reusing existing patterns — rather than making up entirely new ones — makes the API more understandable and reusable. Also, use HTTPS for everything (h/t Eric Mill), and provide the API’s status as an API itself, possibly at the root URL of the API’s URL space (h/t Eric Mill again).
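
Those patterns in miniature, assuming a recent version of Flask and hypothetical routes and data:

```python
# RESTful URL patterns in miniature. Assumes Flask is installed;
# routes and data are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
BILLS = {"hr1": {"id": "hr1", "title": "An Example Act"}}

@app.route("/congress/bills")
def list_bills():
    # Plain query-string processing for paging.
    limit = int(request.args.get("limit", 10))
    return jsonify(list(BILLS.values())[:limit])

@app.route("/congress/bills/<bill_id>")
def get_bill(bill_id):
    bill = dict(BILLS[bill_id])
    # HATEOAS: include the URLs of related resources in the output.
    bill["url"] = request.host_url.rstrip("/") + "/congress/bills/" + bill_id
    return jsonify(bill)

if __name__ == "__main__":
    app.run()  # in production, serve this over HTTPS only
```
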
….
Never Require Registration. Don’t have authentication on your API to keep people out! In fact, having a requirement of registration may contradict other guidelines (such as the 8 Principles of Open Government Data). If you do use an API key, make it optional. A non-authenticated tier lets developers quickly test the waters, and that is really important for getting developers in the door, and, again, it may be important for policy reasons as well. You can have a carrot to incentivize voluntary authentication: raise the rate limit for authenticated queries, for instance. (h/t Ben Balter)
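
One way to implement that carrot, sketched with arbitrary thresholds and a hypothetical key store:

```python
# Optional API keys: registration raises the rate limit but is never
# required. Thresholds and the key store are hypothetical.
ANON_LIMIT = 100         # requests per hour without a key
KEYED_LIMIT = 10_000     # requests per hour with a voluntary key
KNOWN_KEYS = {"abc123"}  # keys issued to registered developers

def rate_limit_for(api_key=None):
    """Return the hourly request budget for a caller."""
    if api_key is not None and api_key in KNOWN_KEYS:
        return KEYED_LIMIT
    return ANON_LIMIT    # unauthenticated callers still get in

print(rate_limit_for())           # 100
print(rate_limit_for("abc123"))   # 10000
```
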
Interactive Documentation. An API explorer is a web page that users can visit to learn how to build API queries and see results for test queries in real time. It’s an interactive browser tool, like interactive documentation. Relatedly, an “explain mode” in queries, which instead of returning results says what the query was and how it would be processed, can help developers understand how to use the API (h/t Eric Mill).
Developer Community. Life is hard. Coding is hard. The subject matter your data is about is probably very complex. Don’t make your API users wade into your API alone. Bring the users together, bring them to you, and sometimes go to them. Let them ask questions and report issues in a public place (such as github). You may find that users will answer other users’ questions. Wouldn’t that be great? Have a mailing list for longer questions and discussion about the future of the API. Gather case studies of how people are using the API and show them off to the other users. It’s not a requirement that the API owner participates heavily in the developer community — just having a hub is very helpful — but of course the more participation the better.
Create Virtuous Cycles. Create an environment around the API that makes the data and API stronger. For instance, other individuals within your organization who need the data should go through the public API to the greatest extent possible. Those users are experts and will help you make a better API, once they realize they benefit from it too. Create a feedback loop around the data, meaning find a way for API users to submit reports of data errors and have a process to carry out data updates, if applicable and possible. Do this in public as much as possible so that others see they can also join the virtuous cycle.”

Tim Berners-Lee: we need to re-decentralise the web


Wired: “Twenty-five years on from the web’s inception, its creator has urged the public to re-engage with its original design: a decentralised internet that, at its very core, remains open to all.
Speaking with Wired editor David Rowan at an event launching the magazine’s March issue, Tim Berners-Lee said that although part of this is about keeping an eye on for-profit internet monopolies such as search engines and social networks, the greatest danger is the emergence of a balkanised web.
“I want a web that’s open, works internationally, works as well as possible and is not nation-based,” Berners-Lee told the audience… “What I don’t want is a web where the Brazilian government has every social network’s data stored on servers on Brazilian soil. That would make it so difficult to set one up.”
It’s the role of governments, startups and journalists to keep that conversation at the fore, he added, because the pace of change is not slowing — it’s going faster than ever before. For his part Berners-Lee drives the issue through his work at the Open Data Institute, World Wide Web Consortium and World Wide Web Foundation, but also as an MIT professor whose students are “building new architectures for the web where it’s decentralised”. On the issue of monopolies, Berners-Lee did say it’s concerning to be “reliant on big companies, and one big server”, something that stalls innovation, but that competition has historically resolved these issues and will continue to do so.
The kind of balkanised web he spoke about, as typified by Brazil’s home-soil servers argument or Iran’s emerging intranet, is partially being driven by revelations of NSA and GCHQ mass surveillance. The distrust that this has brewed, from the political level right down to the threat of self-censorship among ordinary citizens, threatens an open web and is, said Berners-Lee, a greater threat than censorship. Knowing the NSA may be breaking commercial encryption services could result in the emergence of more networks like China’s Great Firewall, built to “protect” citizens. This, Berners-Lee suggested, is why we need a bit of anti-establishment pushback.”

From Crowds to Collaborators: Initiating Effort and Catalyzing Interactions Among Online Creative Workers


Harvard Business School Paper by Kevin J. Boudreau, Patrick Gaule, Karim R. Lakhani, Christoph Riedl, and Anita Williams Woolley: “Online “organizations” are becoming a major engine for knowledge development in a variety of domains such as Wikipedia and open source software development. Many online platforms involve collaboration and coordination among members to reach common goals. In this sense, they are collaborative communities. This paper asks: What factors most inspire online teams to begin to collaborate and to do so creatively and effectively? The authors analyze a data set of 260 individuals randomly assigned to 52 teams tasked with developing working solutions to a complex innovation problem over 10 days, with varying cash incentives. Findings showed that although cash incentives stimulated a significant boost of effort per se, cash incentives did not transform the nature of the work process or affect the level of collaboration. In addition, at a basic yet striking level, the likelihood that an individual chooses to participate depended on whether teammates were themselves active. Moreover, communications among teammates led to more communications, and communications among teammates also stimulated greater continuous levels of effort. Overall, the study sheds light on how perspectives on incentives, predominant in economics, and perspectives on social processes and interactions, predominant in research on organizational behavior and teams, can be better understood. Key concepts include:

  • An individual’s likelihood of being active in online collaboration increases by about 41 percent with each additional active teammate.
  • Management could provide communications channels to make the efforts of other members more visible. This is important in the design of systems for online work as it helps members to confirm that others are actively contributing.

Full Working Paper Text

AskThem.io – Questions-and-Answers with Every Elected Official


Press Release: “AskThem.io, launching Feb. 10th, is a free & open-source website for questions-and-answers with public figures. AskThem is like a version of the White House’s “We The People” petition platform, where over 8 million people have taken action to support questions for a public response – but for the first time, for every elected official nationwide…AskThem.io has official government data for over 142,000 U.S. elected officials at every level of government: federal, state, county, and municipal. Also, AskThem allows anyone to ask a question to any verified Twitter account, for online dialogue with public figures.

Here’s how AskThem works for online public dialogue:

  • For the first time in an open-source website, visitors enter their street address to see all their elected officials, from the federal down to the city level, or search for a verified Twitter account.
  • Individuals & organizations submit a question to their elected officials – for example, asking a city council member about a proposed ban on plastic bags.
  • People then sign on to the questions and petitions they support, voting them up on AskThem and sharing them over social media, as with online petitions.
  • When a question passes a certain threshold of signatures, AskThem delivers it to the recipient over email & social media and encourages a public response – creating a continual, structured dialogue with elected officials at every level of government.

AskThem also incorporates open government data, such as city council agendas and key vote information, to inform good questions of people in power. Open-government advocate Susana Mendoza, Clerk of Chicago, IL, joined AskThem because she believes that “technology should bring residents and the Office of the Chicago City Clerk closer together.”

Elected officials who sign up with AskThem agree to respond to the most popular questions from their constituents (about two per month). Interested elected officials can sign up now to become verified; it’s free and open to everyone.

Issue-based organizations can use question & petition info from AskThem to surface political issues in their area that people care about, stay continuously engaged with government, and promote public accountability. Participating groups on AskThem include the internet freedom non-profit Fight For the Future, the social media crowd-speaking platform Thunderclap.it, the Roosevelt Institute National Student Network, and more.”

DARPA Open Catalog Makes Agency-Sponsored Software and Publications Available to All


Press Release: “Public website aims to encourage communities interested in DARPA research to build off the agency’s work, starting with big data…
DARPA has invested in many programs that sponsor fundamental and applied research in areas of computer science, which have led to new advances in theory as well as practical software. The R&D community has asked about the availability of results, and now DARPA has responded by creating the DARPA Open Catalog, a place for organizing and sharing those results in the form of software, publications, data and experimental details. The Catalog can be found at http://go.usa.gov/BDhY.
Many DoD and government research efforts and software procurements contain publicly releasable elements, including open source software. The nature of open source software lends itself to collaboration where communities of developers augment initial products, build on each other’s expertise, enable transparency for performance evaluation, and identify software vulnerabilities. DARPA has an open source strategy for areas of work including big data to help increase the impact of government investments in building a flexible technology base.
“Making our open source catalog available increases the number of experts who can help quickly develop relevant software for the government,” said Chris White, DARPA program manager. “Our hope is that the computer science community will test and evaluate elements of our software and afterward adopt them as either standalone offerings or as components of their products.”

Citizen Engagement: 3 Cities And Their Civic Tech Tools


Melissa Jun Rowley at the Toolbox: “Though democratic governments are of the people, by the people, and for the people, it often seems that our only input is electing officials who pass laws on our behalf. After all, I don’t know many people who attend town hall meetings these days. But the evolution of technology has given citizens a new way to participate. Governments are using technology to include as many voices from their communities as possible in civic decisions and activities. Here are three examples.
Raleigh, NC
Raleigh, North Carolina’s open government initiative is a great example of passive citizen engagement. By following an open source strategy, Open Raleigh has made city data available to the public. Citizens then use the data in a myriad of ways, from simply visualizing daily crime in their city to creating an app that lets users navigate and interactively utilize the city’s greenway system.
Fort Smith, AR
Using MindMixer, Fort Smith, Arkansas, has created an online forum for residents to discuss the city’s comprehensive plan, effectively putting the community’s future in the hands of the community itself. Citizens are invited to share their own ideas, vote on ideas submitted by others, and engage with city officials who are “listening” to the conversation on the site.
Seattle, WA
Being a tech town, it’s no surprise that Seattle is using social media as a citizen engagement tool. The Seattle Police Department (SPD) uses a variety of social media tools to reach the public. In 2012, the department launched a first-of-its-kind hyper-local Twitter initiative. A police scanner for the Twitter generation, Tweets by Beat provides Twitter feeds of police dispatches in each of Seattle’s 51 police beats, so that residents can find out what is happening right on their block.
In addition to Twitter and Facebook, SPD created a Tumblr to, in their own words, “show you your police department doing police-y things in your city.” In a nutshell, the department’s Tumblr serves as an extension of their other social media outlets.”

"Natural Cities" Emerge from Social Media Location Data


Emerging Technology From the arXiv: “Nobody agrees on how to define a city. But the emergence of “natural cities” from social media data sets may change that, say computational geographers…
A city is a large, permanent human settlement. But try and define it more carefully and you’ll soon run into trouble. A settlement that qualifies as a city in Sweden may not qualify in China, for example. And the reasons why one settlement is classified as a town while another as a city can sometimes seem almost arbitrary.
City planners know this problem well. They tend to define cities by administrative, legal or even historical boundaries that have little logic to them. Indeed, the same city can sometimes be defined in various ways.
That causes all kinds of problems, from counting the total population to working out who pays for the upkeep of the place. Which definition do you use?
Now help may be at hand thanks to the work of Bin Jiang and Yufan Miao at the University of Gävle in Sweden. These guys have found a way to use people’s location recorded by social media to define the boundaries of so-called natural cities which have a close resemblance to real cities in the US.
Jiang and Miao began with a dataset from the Brightkite social network, which was active between 2008 and 2010. The site encouraged users to log in with their location details so that they could see other users nearby. So the dataset consists of almost 3 million locations in the US and the dates on which they were logged.
To start off, Jiang and Miao simply placed a dot on a map at the location of each login. They then connected these dots to their neighbours to form triangles that end up covering the entire mainland US.
Next, they calculated the size of each triangle on the map and plotted this size distribution, which turns out to follow a power law. So there are lots of tiny triangles but only a few large ones.
Finally, they calculated the average size of the triangles and then coloured in all those that were smaller than average. The coloured areas are “natural cities”, say Jiang and Miao.
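
The procedure is straightforward to reproduce. Here is a sketch using NumPy and SciPy’s Delaunay triangulation, with random points standing in for the Brightkite logins:

```python
# Sketch of the natural-cities procedure: triangulate point locations,
# then keep the triangles smaller than the average. Random points
# stand in for the roughly 3 million Brightkite login locations.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((3000, 2))   # stand-ins for login coordinates

tri = Delaunay(points)           # connect each dot to its neighbours
a, b, c = (points[tri.simplices[:, i]] for i in range(3))
ab, ac = b - a, c - a
areas = 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

small = tri.simplices[areas < areas.mean()]  # "natural city" triangles
print(f"{len(small)} of {len(areas)} triangles are below average size")
```
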
It’s easy to imagine that the resulting map of triangles is of little value. But to the evident surprise of the researchers, it produces a pretty good approximation of the cities in the US. “We know little about why the procedure works so well but the resulting patterns suggest that the natural cities effectively capture the evolution of real cities,” they say.
That’s handy because it suddenly gives city planners a way to study and compare cities on a level playing field. It allows them to see how cities evolve and change over time too. And it gives them a way to analyse how cities in different parts of the world differ.
Of course, Jiang and Miao will want to find out why this approach reveals city structures in this way. That’s still something of a puzzle but the answer itself may provide an important insight into the nature of cities (or at least into the nature of this dataset).
A few days ago, this blog wrote about how a new science of cities is emerging from the analysis of big data. This is another example, and we can expect to see more.
Ref: http://arxiv.org/abs/1401.6756 : The Evolution of Natural Cities from the Perspective of Location-Based Social Media”