Public Policies, Made to Fit People


Richard Thaler in the New York Times: “I HAVE written here before about the potential gains to government from involving social and behavioral scientists in designing public policies. My enthusiasm comes in part from my experiences as an academic adviser to the Behavioral Insights Team created in Britain by Prime Minister David Cameron.

Thus I was pleased to hear reports that the White House is building a similar initiative here in the United States. Maya Shankar, a cognitive scientist and senior policy adviser at the White House Office of Science and Technology Policy, is coordinating this cross-agency group, called the Social and Behavioral Science Team; it is part of a larger effort to use evidence and innovation to promote government performance and efficiency. I am among a number of academics who have shared ideas with the administration about how research findings in social and behavioral science can improve policy.

It makes sense for social scientists to become more involved in policy, because many of society’s most challenging problems are, in essence, behavioral. Using social scientists’ findings to create plausible interventions, then testing their efficacy with randomized controlled trials, can improve — and sometimes save — people’s lives, all while reducing the need for more government spending to fix problems later.

Here are three examples of social science issues that have attracted the team’s attention…
THE 30-MILLION-WORD GAP One of society’s thorniest problems is that children from poor families start school lagging badly behind their more affluent classmates in readiness. By the age of 3, children from affluent families have vocabularies that are roughly double those of children from poor families, according to research published in 1995….
DOMESTIC VIOLENCE The team will primarily lend support and expertise to federal agency initiatives. One example concerns the effort to reduce domestic violence, a problem for which there is no quick fix….
HEALTH COMPLIANCE One reason for high health care costs is that patients fail to follow their treatment regimen….”

Inside Noisebridge: San Francisco’s eclectic anarchist hackerspace


at Gigaom: “Since its formation in 2007, Noisebridge has grown from a few people meeting in coffee shops to an overflowing space on Mission Street where members can pursue projects that even the maddest scientist would approve of…. When Noisebridge opened the doors of its first hackerspace location in San Francisco’s Mission district in 2008, it had nothing but a large table and a few chairs found on the street.
Today, it looks like a mad scientist has been methodically hoarding tools, inventions, art, supplies and a little bit of everything else for five years. The 350 people who come through Noisebridge each week have a habit of leaving a mark, whether by donating a tool or building something that other visitors add to bit by bit. Anyone can be a paid member or a free user of the space, and over the years they have built it into a place where you can code, sew, hack hardware, cook, build robots, woodwork, learn, teach and more.
The members really are mad scientists. Anything left out in the communal spaces is fair game to “hack into a giant robot,” according to co-founder Mitch Altman. Members once took a broken down wheelchair and turned it into a brainwave-controlled robot named M.C. Hawking. Another person made pants with a built-in keyboard. The Spacebridge group has sent high altitude balloons to near space, where they captured gorgeous videos of the Earth. And once a month, the Vegan Hackers teach their pupils how to make classic fare like sushi and dumplings out of vegan ingredients….”

Index: The Data Universe


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the data universe and was originally published in 2013.

  • How much data exists in the digital universe as of 2012: 2.7 zettabytes*
  • Increase in the quantity of Internet data from 2005 to 2012: +1,696%
  • Percent of the world’s data created in the last two years: 90
  • Number of exabytes (=1 billion gigabytes) created every day in 2012: 2.5; that number doubles every month
  • Percent of the digital universe in 2005 created by the U.S. and western Europe vs. emerging markets: 48 vs. 20
  • Percent of the digital universe in 2012 created by emerging markets: 36
  • Percent of the digital universe in 2020 predicted to be created by China alone: 21
  • Percent of the information in the digital universe created and consumed by consumers (video, social media, photos, etc.) in 2012: 68
  • Percent of that consumer-created information for which enterprises have liability or responsibility (copyright, privacy, compliance with regulations, etc.): 80
  • Amount included in the Obama Administration’s 2012 Big Data initiative: over $200 million
  • Amount the Department of Defense is investing annually on Big Data projects as of 2012: over $250 million
  • Data created per day in 2012: 2.5 quintillion bytes
  • How many terabytes* of data collected by the U.S. Library of Congress as of April 2011: 235
  • How many terabytes of data collected by Walmart per hour as of 2012: 2,560, or 2.5 petabytes*
  • Projected growth in global data generated per year, as of 2011: 40%
  • Number of IT jobs created globally by 2015 to support big data: 4.4 million (1.9 million in the U.S.)
  • Potential shortage of data scientists in the U.S. alone predicted for 2018: 140,000-190,000, in addition to 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions
  • Time needed to sequence the complete human genome (analyzing 3 billion base pairs) in 2003: ten years
  • Time needed in 2013: one week
  • The world’s annual effective capacity to exchange information through telecommunication networks in 1986, 2007, and (predicted) 2013: 281 petabytes, 65 exabytes, 667 exabytes
  • Projected amount of digital information created annually that will either live in or pass through the cloud: 1/3
  • Increase in data collection volume year-over-year in 2012: 400%
  • Increase in number of individual data collectors from 2011 to 2012: nearly double (over 300 data collection parties in 2012)

*1 zettabyte = 1 billion terabytes | 1 petabyte = 1,000 terabytes | 1 terabyte = 1,000 gigabytes | 1 gigabyte = 1 billion bytes
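
The footnote’s decimal conversions can be double-checked against the figures in the index with a quick sketch (the `convert` helper and unit table below are illustrative, not from the source):

```python
# Minimal sketch: decimal (SI) storage-unit conversions, following the
# footnote's convention (1 TB = 1,000 GB, i.e. powers of 1,000, not 1,024).

BYTES_PER = {
    "gigabyte": 10**9,
    "terabyte": 10**12,
    "petabyte": 10**15,
    "exabyte": 10**18,
    "zettabyte": 10**21,
}

def convert(amount, from_unit, to_unit):
    """Convert an amount between decimal storage units."""
    return amount * BYTES_PER[from_unit] / BYTES_PER[to_unit]

# Walmart's hourly collection: 2,560 terabytes is about 2.5 petabytes.
print(convert(2560, "terabyte", "petabyte"))   # 2.56

# 2.5 quintillion bytes per day (a quintillion is 10**18) equals 2.5 exabytes,
# so the two daily-volume entries in the index describe the same figure.
print(2.5e18 / BYTES_PER["exabyte"])           # 2.5
```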

Is Online Transparency Just a Feel-Good Sham?


Billy House in the National Journal: “It drew more than a few laughs in Washington. Not long after the White House launched its We the People website in 2011, where citizens could write online petitions and get a response if they garnered enough signatures, someone called for construction of a Star Wars-style Death Star.
With laudable humor, the White House dispatched Paul Shawcross, chief of the Science and Space Branch of the Office of Management and Budget, to explain that the administration “does not support blowing up planets.”
The incident caused a few chuckles, but it also made a more serious point: Years after politicians and government officials began using Internet surveys and online outreach as tools to engage people, the results overall have been questionable….
But skepticism over the value of these programs—and their genuineness—remains strong. Peter Levine, a professor at Tufts University’s Jonathan M. Tisch College of Citizenship and Public Service, said programs like online petitioning and citizen cosponsoring do not necessarily produce a real, representative voice for the people.
It can be “pretty easy to overwhelm these efforts with deliberate strategic action,” he said, noting that similar petitioning efforts in the European Union often find marijuana legalization as the most popular measure.”

Civic Innovation Fellowships Go Global


Some thoughts from Panthea Lee from Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn from a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs are surely going to have to operate within a different set of capacities and constraints.”

White House Expands Guidance on Promoting Open Data


NextGov: “White House officials have announced expanded technical guidance to help agencies make more data accessible to the public in machine-readable formats.
Following up on President Obama’s May executive order linking the pursuit of open data to economic growth, innovation and government efficiency, two budget and science office spokesmen on Friday published a blog post highlighting new instructions and answers to frequently asked questions.
Nick Sinai, deputy chief technology officer at the Office of Science and Technology Policy, and Dominic Sale, supervisory policy analyst at the Office of Management and Budget, noted that the policy now in place means that all “newly generated government data will be required to be made available in open, machine-readable formats, greatly enhancing their accessibility and usefulness, while ensuring privacy and security.”

A collaborative way to get to the heart of 3D printing problems


PSFK: “Because most of us see only the finished product of 3D printing projects, it’s easy to forget that things can, and do, go wrong with this miracle technology.
3D printing is constantly evolving, reaching exciting new heights, and touching every industry you can think of – but all this progress has left a trail of mangled plastic and devastated machines in its wake.
The Art of 3D Print Failure is a Flickr group that aims to document this failure, because after all, mistakes are how we learn, and how we make sure the same thing doesn’t happen the next time around. It can also prevent mistakes from happening to those who are new to 3D printing, before they even make them!”

On our best behaviour


Paper by Hector J. Levesque: “The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But what does it tell us when a good semblance of a behaviour can be achieved using cheap tricks that seem to have little to do with what we intuitively imagine intelligence to be? Are these intuitions wrong, and is intelligence really just a bag of tricks? Or are the philosophers right, and is a behavioural understanding of intelligence simply too weak? I think both of these are wrong. I suggest in the context of question-answering that what matters when it comes to the science of AI is not a good semblance of intelligent behaviour at all, but the behaviour itself, what it depends on, and how it can be achieved. I go on to discuss two major hurdles that I believe will need to be cleared.”

Five myths about big data


Samuel Arbesman, senior scholar at the Ewing Marion Kauffman Foundation and the author of “The Half-Life of Facts” in the Washington Post: “Big data holds the promise of harnessing huge amounts of information to help us better understand the world. But when talking about big data, there’s a tendency to fall into hyperbole. It is what compels contrarians to write such tweets as “Big Data, n.: the belief that any sufficiently large pile of s— contains a pony.” Let’s deflate the hype.
1. “Big data” has a clear definition.
The term “big data” has been in circulation since at least the 1990s, when it is believed to have originated in Silicon Valley. IBM offers a seemingly simple definition: Big data is characterized by the four V’s of volume, variety, velocity and veracity. But the term is thrown around so often, in so many contexts — science, marketing, politics, sports — that its meaning has become vague and ambiguous….
2. Big data is new.
By many accounts, big data exploded onto the scene quite recently. “If wonks were fashionistas, big data would be this season’s hot new color,” a Reuters report quipped last year. In a May 2011 report, the McKinsey Global Institute declared big data “the next frontier for innovation, competition, and productivity.”
It’s true that today we can mine massive amounts of data — textual, social, scientific and otherwise — using complex algorithms and computer power. But big data has been around for a long time. It’s just that exhaustive datasets were more exhausting to compile and study in the days when “computer” meant a person who performed calculations….
3. Big data is revolutionary.
In their new book, “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” Viktor Mayer-Schonberger and Kenneth Cukier compare “the current data deluge” to the transformation brought about by the Gutenberg printing press.
If you want more precise advertising directed toward you, then yes, big data is revolutionary. Generally, though, it’s likely to have a modest and gradual impact on our lives….
4. Bigger data is better.
In science, some admittedly mind-blowing big-data analyses are being done. In business, companies are being told to “embrace big data before your competitors do.” But big data is not automatically better.
Really big datasets can be a mess. Unless researchers and analysts can reduce the number of variables and make the data more manageable, they get quantity without a whole lot of quality. Give me some quality medium data over bad big data any day…
5. Big data means the end of scientific theories.
Chris Anderson argued in a 2008 Wired essay that big data renders the scientific method obsolete: Throw enough data at an advanced machine-learning technique, and all the correlations and relationships will simply jump out. We’ll understand everything.
But you can’t just go fishing for correlations and hope they will explain the world. If you’re not careful, you’ll end up with spurious correlations. Even more important, to contend with the “why” of things, we still need ideas, hypotheses and theories. If you don’t have good questions, your results can be silly and meaningless.
Having more data won’t substitute for thinking hard, recognizing anomalies and exploring deep truths.”
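
Arbesman’s warning about fishing for correlations is easy to demonstrate: given enough unrelated variables, some pair will correlate strongly by pure chance. The sketch below is illustrative only; the “indicators” are random noise, not real data:

```python
# Sketch of the "spurious correlations" pitfall: among many independent
# random series, the strongest pairwise correlation looks impressive
# even though, by construction, nothing is related to anything.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# 200 unrelated "indicators", each a short series of pure noise.
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]

# Scan all ~20,000 pairs for the strongest correlation.
best = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest pairwise |r| among pure noise: {best:.2f}")
```

With short series and many candidate pairs, the winning |r| routinely lands near 1.0, which is exactly why a found correlation needs a hypothesis behind it before it counts as evidence.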

Improved Governance? Exploring the Results of Peru's Participatory Budgeting Process


Paper by Stephanie McNulty for the 2013 Annual Meeting of the American Political Science Association (Aug. 29-Sept. 1, 2013): “Can a nationally mandated participatory budget process change the nature of local governance? Passed in 2003 to mandate participatory budgeting in all districts and regions of Peru, Peru’s National PB Law has garnered international attention from proponents of participatory governance. However, to date, the results of the process have not been widely documented. Presenting data that have been gathered through fieldwork, online databases, and primary documents, this paper explores the results of Peru’s PB after ten years of implementation. The paper finds that results are limited. While there are a significant number of actors engaged in the process, the PB is still dominated by elite actors that do not represent the diversity of the civil society sector in Peru. Participants approve important “pro-poor” projects, but they are not always executed. Finally, two important indicators of governance, sub-national conflict and trust in local institutions, have not improved over time. Until Peruvian politicians make a concerted effort to move beyond politics as usual, results will continue to be limited.”