Handbook of Digital Politics


Book edited by Stephen Coleman: “Politics continues to evolve in the digital era, spurred in part by the accelerating pace of technological development. This cutting-edge Handbook includes the very latest research on the relationship between digital information, communication technologies and politics.

Written by leading scholars in the field, the chapters explore in seven parts: theories of digital politics, government and policy, collective action and civic engagement, political talk, journalism, internet governance and new frontiers in digital politics research. The contributors focus on the politics behind the implementation of digital technologies in society today.

All students in the fields of politics, media and communication studies, journalism, science and sociology will find this book to be a useful resource in their studies. Political practitioners seeking digital strategies, as well as web and other digital practitioners wanting to know more about political applications for their work will also find this book to be of interest….(More)”

The Crowdsourcing Site That Wants to Pool Our Genomes


Ed Yong at The Atlantic: “…In 2010, I posted a vial of my finest spit to the genetic-testing company 23andMe. In return, I got to see what my genes reveal about my ancestry, how they affect my risk of diseases or my responses to medical drugs, and even what they say about the texture of my earwax. (It’s dry.) 23andMe now has around a million users, as do other similar companies like Ancestry.com.

But these communities are largely separated from one another, a situation that frustrated Yaniv Erlich from the New York Genome Center and Columbia University. “Tens of millions of people will soon have access to their genomes,” he says. “Are we just going to let these data sit in silos, or can we partner with these large communities to enable some really large science? That’s why we developed DNA.LAND.”

DNA.LAND, which Erlich developed together with colleague Joe Pickrell, is a website that allows customers of other genetic-testing services to upload files containing their genetic data. Scientists can then use this data for research, to the extent that each user consents. “DNA.LAND is a way for getting the general public to participate in large-scale genetic studies,” says Erlich. “And we’re not a company. We’re a non-profit website, run by scientists.”…(More)”

Data Science of the People, for the People, by the People: A Viewpoint on an Emerging Dichotomy


Paper by Kush R. Varshney: “This paper presents a viewpoint on an emerging dichotomy in data science: applications in which predictions of data-driven algorithms are used to support people in making consequential decisions that can have a profound effect on other people’s lives, and applications in which data-driven algorithms act autonomously in settings of low consequence and large scale. An example of the first type of application is prison sentencing and of the second type is selecting news stories to appear on a person’s web portal home page. It is argued that the two types of applications require data, algorithms and models with vastly different properties along several dimensions, including privacy, equitability, robustness, interpretability, causality, and openness. Furthermore, it is argued that the second type of application cannot always be used as a surrogate to develop methods for the first type of application. To contribute to the development of methods for the first type of application, one must really be working on the first type of application….(More)”

Crowdsourced research: Many hands make tight work


Raphael Silberzahn & Eric L. Uhlmann in Nature: “…For many research problems, crowdsourcing analyses will not be the optimal solution. It demands a huge amount of resources for just one research question. Some questions will not benefit from a crowd of analysts: researchers’ approaches will be much more similar for simple data sets and research designs than for large and complex ones. Importantly, crowdsourcing does not eliminate all bias. Decisions must still be made about what hypotheses to test, from where to get suitable data, and importantly, which variables can or cannot be collected. (For instance, we did not consider whether a particular player’s skin tone was lighter or darker than that of most of the other players on his team.) Finally, researchers may continue to disagree about findings, which makes it challenging to present a manuscript with a clear conclusion. It can also be puzzling: the investment of more resources can lead to less-clear outcomes.

Still, the effort can be well worth it. Crowdsourcing research can reveal how conclusions are contingent on analytical choices. Furthermore, the crowdsourcing framework also provides researchers with a safe space in which they can vet analytical approaches, explore doubts and get a second, third or fourth opinion. Discussions about analytical approaches happen before committing to a particular strategy. In our project, the teams were essentially peer reviewing each other’s work before even settling on their own analyses. And we found that researchers did change their minds through the course of analysis.

Crowdsourcing also reduces the incentive for flashy results. A single-team project may be published only if it finds significant effects; participants in crowdsourced projects can contribute even with null findings. A range of scientific possibilities are revealed, the results are more credible and analytical choices that seem to sway conclusions can point research in fruitful directions. What is more, analysts learn from each other, and the creativity required to construct analytical methodologies can be better appreciated by the research community and the public.

Of course, researchers who painstakingly collect a data set may not want to share it with others. But greater certainty comes from having an independent check. A coordinated effort boosts incentives for multiple analyses and perspectives in a way that simply making data available post-publication does not.

The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake. The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts.

Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions.

Scientists around the world are hungry for more-reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so. Our first project had a budget of zero, and we attracted scores of fellow scientists with two tweets and a Facebook post.

Researchers who are interested in starting or participating in collaborative crowdsourcing projects can access resources available online. We have publicly shared all our materials and survey templates, and the Center for Open Science has just launched ManyLab, a web space where researchers can join crowdsourced projects….(More).

See also Nature special collection: reproducibility

The deception that lurks in our data-driven world


Alexis C. Madrigal at Fusion: “…There’s this amazing book called Seeing Like a State, which shows how governments and other big institutions try to reduce the vast complexity of the world into a series of statistics that their leaders use to try to comprehend what’s happening.

The author, James C. Scott, opens the book with an extended anecdote about the Normalbaum. In the second half of the 18th century, Prussian rulers wanted to know how many “natural resources” they had in the tangled woods of the country. So, they started counting. And they came up with these huge tables that would let them calculate how many board-feet of wood they could pull from a given plot of forest. All the rest of the forest, everything it did for the people and the animals and general ecology of the place was discarded from the analysis.

But the world proved too unruly. Their data wasn’t perfect. So they started creating new forests, the Normalbaum, planting all the trees at the same time, and monoculturing them so that there were no trees in the forest that couldn’t be monetized for wood. “The fact is that forest science and geometry, backed by state power, had the capacity to transform the real, diverse, and chaotic old-growth forest into a new, more uniform forest that closely resembled the administrative grid of its techniques,” Scott wrote.

[Image: normal forest plan]

The spreadsheet became the world! They even planted the trees in rows, like a grid.

German foresters got very scientific with their fertilizer applications and management practices. And the scheme really worked—at least for a hundred years. Pretty much everyone across the world adopted their methods.

Then the forests started dying.

“In the German case, the negative biological and ultimately commercial consequences of the stripped-down forest became painfully obvious only after the second rotation of conifers had been planted,” Scott wrote.

The complex ecosystem that underpinned the growth of these trees through generations—all the microbial and inter-species relationships—was torn apart by the rigor of the Normalbaum. The nutrient cycles were broken. Resilience was lost. The hidden underpinnings of the world were revealed only when they were gone. The Germans, like they do, came up with a new word for what happened: Waldsterben, or forest death.

Sometimes, when I look out at our world—at the highest level—in which thin data have come to stand in for huge complex systems of human and biological relationships, I wonder if we’re currently deep in the Normalbaum phase of things, awaiting the moment when Waldsterben sets in.

Take the ad-supported digital media ecosystem. The idea is brilliant: capture data on people all over the web and then use what you know to show them relevant ads, ads they want to see. Not only that, but because it’s all tracked, unlike broadcast or print media, an advertiser can measure what they’re getting more precisely. And certainly the digital advertising market has grown, taking share from most other forms of media. The spreadsheet makes a ton of sense—which is one reason for the growth predictions that underpin the massive valuations of new media companies.

But scratch the surface, like Businessweek recently did, and the problems are obvious. A large percentage of the traffic to many stories and videos consists of software pretending to be human.

“The art is making the fake traffic look real, often by sprucing up websites with just enough content to make them appear authentic,” Businessweek says. “Programmatic ad-buying systems don’t necessarily differentiate between real users and bots, or between websites with fresh, original work, and Potemkin sites camouflaged with stock photos and cut-and-paste articles.”

Of course, that’s not what high-end media players are doing. But the cheap programmatic ads, fueled by fake traffic, drive down prices across the digital media industry, making it harder to support good journalism. Meanwhile, users of many sites are rebelling against the business model by installing ad blockers.

The advertisers and ad-tech firms just wanted to capture user data to show them relevant ads. They just wanted to measure their ads more effectively. But placed into the real world, the system that grew up around these desires has reshaped the media landscape in unpredictable ways.

We’ve deceived ourselves into thinking data is a camera, but it’s really an engine. Capturing data about something changes the way that something works. Even the mere collection of stats is not a neutral act, but a way of reshaping the thing itself….(More)”

Five principles for applying data science for social good


Jake Porway at O’Reilly: “….Every week, a data or technology company declares that it wants to “do good” and there are countless workshops hosted by major foundations musing on what “big data can do for society.” Add to that a growing number of data-for-good programs, from Data Science for Social Good’s fantastic summer program to Bayes Impact’s data science fellowships to DrivenData’s data-science-for-good competitions, and you can see how quickly this idea of “data for good” is growing.

Yes, it’s an exciting time to be exploring the ways new datasets, new techniques, and new scientists could be deployed to “make the world a better place.” We’ve already seen deep learning applied to ocean health, satellite imagery used to estimate poverty levels, and cellphone data used to elucidate Nairobi’s hidden public transportation routes. And yet, for all this excitement about the potential of this “data for good movement,” we are still desperately far from creating lasting impact. Many efforts will not only fall short of lasting impact — they will make no change at all….

So how can these well-intentioned efforts reach their full potential for real impact? Embracing the following five principles can drastically accelerate a world in which we truly use data to serve humanity.

1. “Statistics” is so much more than “percentages”

We must convey what constitutes data, what it can be used for, and why it’s valuable.

There was a packed house for the March 2015 release of the No Ceilings Full Participation Report. Hillary Clinton, Melinda Gates, and Chelsea Clinton stood on stage and lauded the report, the culmination of a year-long effort to aggregate and analyze new and existing global data, as the biggest, most comprehensive data collection effort about women and gender ever attempted. One of the most trumpeted parts of the effort was the release of the data in an open and easily accessible way.

I ran home and excitedly pulled up the data from the No Ceilings GitHub, giddy to use it for our DataKind projects. As I downloaded each file, my heart sank. The 6MB size of the entire global dataset told me what I would find inside before I even opened the first file. Like a familiar ache, the first row of the spreadsheet said it all: “USA, 2009, 84.4%.”

What I’d encountered was a common situation when it comes to data in the social sector: the prevalence of inert, aggregate data. ….

2. Finding problems can be harder than finding solutions

We must scale the process of problem discovery through deeper collaboration between the problem holders, the data holders, and the skills holders.

In the immortal words of Henry Ford, “If I’d asked people what they wanted, they would have said a faster horse.” Right now, the field of data science is in a similar position. Framing data solutions for organizations that don’t realize how much is now possible can be a frustrating search for faster horses. If data cleaning is 80% of the hard work in data science, then problem discovery makes up nearly the remaining 20% when doing data science for good.

The plague here is one of education. …

3. Communication is more important than technology

We must foster environments in which people can speak openly, honestly, and without judgment. We must be constantly curious about each other.

At the conclusion of one of our recent DataKind events, one of our partner nonprofit organizations lined up to hear the results from their volunteer team of data scientists. Everyone was all smiles — the nonprofit leaders had loved the project experience, the data scientists were excited with their results. The presentations began. “We used Amazon Redshift to store the data, which allowed us to quickly build a multinomial regression. The p-value of 0.002 shows …” Eyes glazed over. The nonprofit leaders furrowed their brows in telegraphed concentration. The jargon was standing in the way of understanding the true utility of the project’s findings. It was clear that, like so many other well-intentioned efforts, the project was at risk of gathering dust on a shelf if the team of volunteers couldn’t help the organization understand what they had learned and how it could be integrated into the organization’s ongoing work…..

4. We need diverse viewpoints

To tackle sector-wide challenges, we need a range of voices involved.

One of the most challenging aspects to making change at the sector level is the range of diverse viewpoints necessary to understand a problem in its entirety. In the business world, profit, revenue, or output can be valid metrics of success. Rarely, if ever, are metrics for social change so cleanly defined….

Challenging this paradigm requires diverse, or “collective impact,” approaches to problem solving. The idea has been around for a while (h/t Chris Diehl), but has not yet been widely implemented due to the challenges in successful collective impact. Moreover, while there are many diverse collectives committed to social change, few have the voice of expert data scientists involved. DataKind is piloting a collective impact model called DataKind Labs, which seeks to bring together diverse problem holders, data holders, and data science experts to co-create solutions that can be applied across an entire sector-wide challenge. We just launched our first project with Microsoft to increase traffic safety and are hopeful that this effort will demonstrate how vital a role data science can play in a collective impact approach.

5. We must design for people

Data is not truth, and tech is not an answer in-and-of-itself. Without designing for the humans on the other end, our work is in vain.

So many of the data projects making headlines — a new app for finding public services, a new probabilistic model for predicting weather patterns for subsistence farmers, a visualization of government spending — are great and interesting accomplishments, but don’t seem to have an end user in mind. The current approach appears to be “get the tech geeks to hack on this problem, and we’ll have cool new solutions!” I’ve opined that, though there are many benefits to hackathons, you can’t just hack your way to social change….(More)”

Accelerating Citizen Science and Crowdsourcing to Address Societal and Scientific Challenges


Tom Kalil et al at the White House Blog: “Citizen science encourages members of the public to voluntarily participate in the scientific process. Whether by asking questions, making observations, conducting experiments, collecting data, or developing low-cost technologies and open-source code, members of the public can help advance scientific knowledge and benefit society.

Through crowdsourcing – an open call for voluntary assistance from a large group of individuals – Americans can study and tackle complex challenges by conducting research at large geographic scales and over long periods of time in ways that professional scientists working alone cannot easily duplicate. These challenges include understanding the structure of proteins related to viruses in order to support the development of new medications, or preparing for, responding to, and recovering from disasters.

…OSTP is today announcing two new actions that the Administration is taking to encourage and support the appropriate use of citizen science and crowdsourcing at Federal agencies:

  1. OSTP Director John Holdren is issuing a memorandum entitled Addressing Societal and Scientific Challenges through Citizen Science and Crowdsourcing. This memo articulates principles that Federal agencies should embrace to derive the greatest value and impact from citizen science and crowdsourcing projects. The memo also directs agencies to take specific actions to advance citizen science and crowdsourcing, including designating an agency-specific coordinator for citizen science and crowdsourcing projects, and cataloguing citizen science and crowdsourcing projects that are open for public participation on a new, centralized website to be created by the General Services Administration, making it easy for people to find out about and join in these projects.
  2. Fulfilling a commitment made in the 2013 Open Government National Action Plan, the U.S. government is releasing the first-ever Federal Crowdsourcing and Citizen Science Toolkit to help Federal agencies design, carry out, and manage citizen science and crowdsourcing projects. The toolkit, which was developed by OSTP in partnership with the Federal Community of Practice for Crowdsourcing and Citizen Science and GSA’s Open Opportunities Program, reflects the input of more than 125 Federal employees from over 25 agencies on ideas, case studies, best management practices, and other lessons to facilitate the successful use of citizen science and crowdsourcing in a Federal context….(More)”

Researchers wrestle with a privacy problem


Erika Check Hayden at Nature: “The data contained in tax returns, health and welfare records could be a gold mine for scientists — but only if they can protect people’s identities….In 2011, six US economists tackled a question at the heart of education policy: how much does great teaching help children in the long run?

They started with the records of more than 11,500 Tennessee schoolchildren who, as part of an experiment in the 1980s, had been randomly assigned to high- and average-quality teachers between the ages of five and eight. Then they gauged the children’s earnings as adults from federal tax returns filed in the 2000s. The analysis showed that the benefits of a good early education last for decades: each year of better teaching in childhood boosted an individual’s annual earnings by some 3.5% on average. Other data showed the same individuals besting their peers on measures such as university attendance, retirement savings, marriage rates and home ownership.

The economists’ work was widely hailed in education-policy circles, and US President Barack Obama cited it in his 2012 State of the Union address when he called for more investment in teacher training.

But for many social scientists, the most impressive thing was that the authors had been able to examine US federal tax returns: a closely guarded data set that was then available to researchers only with tight restrictions. This has made the study an emblem for both the challenges and the enormous potential power of ‘administrative data’ — information collected during routine provision of services, including tax returns, records of welfare benefits, data on visits to doctors and hospitals, and criminal records. Unlike Internet searches, social-media posts and the rest of the digital trails that people establish in their daily lives, administrative data cover entire populations with minimal self-selection effects: in the US census, for example, everyone sampled is required by law to respond and tell the truth.

This puts administrative data sets at the frontier of social science, says John Friedman, an economist at Brown University in Providence, Rhode Island, and one of the lead authors of the education study. “They allow researchers to not just get at old questions in a new way,” he says, “but to come at problems that were completely impossible before.”….

But there is also concern that the rush to use these data could pose new threats to citizens’ privacy. “The types of protections that we’re used to thinking about have been based on the twin pillars of anonymity and informed consent, and neither of those hold in this new world,” says Julia Lane, an economist at New York University. In 2013, for instance, researchers showed that they could uncover the identities of supposedly anonymous participants in a genetic study simply by cross-referencing their data with publicly available genealogical information.

Many people are looking for ways to address these concerns without inhibiting research. Suggested solutions include policy measures, such as an international code of conduct for data privacy, and technical methods that allow the use of the data while protecting privacy. Crucially, notes Lane, although preserving privacy sometimes complicates researchers’ lives, it is necessary to uphold the public trust that makes the work possible.

“Difficulty in access is a feature, not a bug,” she says. “It should be hard to get access to data, but it’s very important that such access be made possible.” Many nations collect administrative data on a massive scale, but only a few, notably in northern Europe, have so far made it easy for researchers to use those data.

In Denmark, for instance, every newborn child is assigned a unique identification number that tracks his or her lifelong interactions with the country’s free health-care system and almost every other government service. In 2002, researchers used data gathered through this identification system to retrospectively analyse the vaccination and health status of almost every child born in the country from 1991 to 1998 — 537,000 in all. At the time, it was the largest study ever to disprove the now-debunked link between measles vaccination and autism.

Other countries have begun to catch up. In 2012, for instance, Britain launched the unified UK Data Service to facilitate research access to data from the country’s census and other surveys. A year later, the service added a new Administrative Data Research Network, which has centres in England, Scotland, Northern Ireland and Wales to provide secure environments for researchers to access anonymized administrative data.

In the United States, the Census Bureau has been expanding its network of Research Data Centers, which currently includes 19 sites around the country at which researchers with the appropriate permissions can access confidential data from the bureau itself, as well as from other agencies. “We’re trying to explore all the available ways that we can expand access to these rich data sets,” says Ron Jarmin, the bureau’s assistant director for research and methodology.

In January, a group of federal agencies, foundations and universities created the Institute for Research on Innovation and Science at the University of Michigan in Ann Arbor to combine university and government data and measure the impact of research spending on economic outcomes. And in July, the US House of Representatives passed a bipartisan bill to study whether the federal government should provide a central clearing house of statistical administrative data.

Yet vast swathes of administrative data are still inaccessible, says George Alter, director of the Inter-university Consortium for Political and Social Research based at the University of Michigan, which serves as a data repository for approximately 760 institutions. “Health systems, social-welfare systems, financial transactions, business records — those things are just not available in most cases because of privacy concerns,” says Alter. “This is a big drag on research.”…

Many researchers argue, however, that there are legitimate scientific uses for such data. Jarmin says that the Census Bureau is exploring the use of data from credit-card companies to monitor economic activity. And researchers funded by the US National Science Foundation are studying how to use public Twitter posts to keep track of trends in phenomena such as unemployment.

….Computer scientists and cryptographers are experimenting with technological solutions. One, called differential privacy, adds a small amount of distortion to a data set, so that querying the data gives a roughly accurate result without revealing the identity of the individuals involved. The US Census Bureau uses this approach for its OnTheMap project, which tracks workers’ daily commutes. ….In any case, although synthetic data potentially solve the privacy problem, there are some research applications that cannot tolerate any noise in the data. A good example is the work showing the effect of neighbourhood on earning potential3, which was carried out by Raj Chetty, an economist at Harvard University in Cambridge, Massachusetts. Chetty needed to track specific individuals to show that the areas in which children live their early lives correlate with their ability to earn more or less than their parents. In subsequent studies5, Chetty and his colleagues showed that moving children from resource-poor to resource-rich neighbourhoods can boost their earnings in adulthood, proving a causal link.
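
To make the idea of differential privacy a little more concrete, here is a minimal Python sketch of the Laplace mechanism for a simple counting query. The function name, parameter values and commuter example are illustrative assumptions of ours; they are not drawn from OnTheMap or any Census Bureau system.

```python
import numpy as np

def laplace_count(true_count, epsilon):
    """Return the true count plus Laplace noise calibrated to epsilon.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy for a single query of this kind.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: publish roughly how many commuters enter a census block.
true_commuters = 412
print(round(laplace_count(true_commuters, epsilon=0.5), 1))  # close to 412
print(round(laplace_count(true_commuters, epsilon=0.1), 1))  # noisier, stronger privacy
```

The smaller the epsilon, the more distortion is added and the less any single individual's presence can affect the published result.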

Secure multiparty computation is a technique that attempts to address this issue by allowing multiple data holders to analyse parts of the total data set, without revealing the underlying data to each other. Only the results of the analyses are shared….(More)”
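
As a rough intuition for how secure multiparty computation can work, the toy Python sketch below shows additive secret sharing, one common building block: each data holder splits its value into random shares, the parties combine shares locally, and only the final sum is ever reconstructed. The three-party setup, the counts and all names here are illustrative assumptions; real protocols add secure channels, integrity checks and much more machinery.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    """Split a value into n random shares that sum to the value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset smaller than the full set reveals nothing."""
    return sum(shares) % PRIME

# Hypothetical example: two data holders want the total of their private counts.
count_a, count_b = 1200, 950
shares_a, shares_b = share(count_a), share(count_b)

# Each party adds together only the shares it holds; raw inputs are never pooled.
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(reconstruct(sum_shares))  # 2150 -- the joint result, and nothing more
```

As in the description above, only the result of the analysis is revealed; no party ever sees another's underlying data.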

Open Science Revolution – New Ways of Publishing Research in The Digital Age


Scicasts: “A massive increase in the power of digital technology over the past decade allows us today to publish any article, blog post or tweet in a matter of seconds.

Much of the information on the web is also free – newspapers are embracing open access to their articles and many websites are licensing their content under Creative Commons licenses, most of which allow the re-use and sharing of the original work at no cost.

As opposed to this openness, science publishing is still lagging behind. Most of the scientific knowledge generated in the past two centuries is hidden behind a paywall, requiring an average reader to pay tens to hundreds of euros to access an original study report written by scientists.

Can we not do things differently?

An answer to this question led to the creation of a number of new concepts that emerged over the past few years. A range of innovative open online science platforms are now trying “to do things differently”, offering researchers alternative ways of publishing their discoveries, making the publishing process faster and more transparent.

Here is a handful of examples, implemented by three companies – a recently launched open access journal Research Ideas and Outcomes (RIO), an open publishing platform F1000Research from The Faculty of 1000 and a research and publishing network ScienceOpen. Each has something different to offer, yet all of them seem to agree that science research should be open and accessible to everyone.

New concept – publish all research outputs

While the two-centuries-old tradition of science publishing lives and dies on exposing only the final outcomes of a research project, the RIO journal suggests a different approach. If we can follow new stories online step by step as they unfold (something that journalists have figured out and use in live reporting), they say, why not apply similar principles to research projects?

“RIO is the first journal that aims at publishing the whole research cycle and definitely the first one, to my knowledge, that tries to do that across all science branches – all of humanities, social sciences, engineering and so on,” says a co-founder of the RIO journal, Prof. Lyubomir Penev, in an interview to Scicasts.

From the original project outline, to datasets, software and methodology, each part of the project can be published separately. “The writing platform ARPHA, which underpins RIO, handles the whole workflow – from the stage when you write the first letter, to the end,” explains Prof. Penev.

At an early stage, the writing process is closed from public view and researchers may invite their collaborators and peers to view their project, add data and contribute to its development. Scientists can choose to publish any part of their project as it progresses – they can submit to the open platform their research idea, hypothesis or a newly developed experimental protocol, alongside future datasets and whole final manuscripts.

Some intermediate research stages and preliminary results can also be submitted to the platform F1000Research, which has developed its own online authoring tool, F1000Workspace, similar to ARPHA….(More)”

The Curious Politics of the ‘Nudge’


How do we really feel about policy “nudges”?

Earlier this month, President Obama signed an executive order directing federal agencies to collaborate with the White House’s new Social and Behavioral Sciences Team to use insights from behavioral science research to better serve the American people. For instance, studies show that people are more likely to save for retirement when they are automatically enrolled into a 401(k) retirement saving plan that they can opt out of than when they must actively opt in. The idea behind Mr. Obama’s initiative is that such soft-touch interventions, or “nudges,” can facilitate better decisions without resorting to heavier-handed strategies like mandates, taxes and bans.

The response to the executive order has been generally positive, but some conservatives have been critical, characterizing it as an instance of government overreach. (“President Obama Orders Behavioral Experiments on American Public” ran a headline on the website The Daily Caller.) However, it is worth noting that when a similar “behavioral insights team” was founded by the conservative government of the British prime minister, David Cameron, it met resistance from the political left. (“Brits’ Minds Will Be Controlled Without Us Knowing It” ran a headline in The Guardian.)

Is it possible that partisans from both ends of the political spectrum conflate their feelings about a general-purpose policy method (such as nudges) with their feelings about a specific policy goal (or about those who endorse that goal)? We think so. In a series of recent experiments that we conducted with Todd Rogers of the Harvard Kennedy School, we found evidence for a “partisan nudge bias.”…

…We also found that when behavioral policy tools were described without mention of a specific policy application or sponsor, the bias disappeared. In this “blind taste test,” liberals and conservatives were roughly equally accepting of the use of policy nudges.

This last finding is good news, because scientifically grounded, empirically validated behavioral innovations can help policy makers improve government initiatives for the benefit of all Americans, regardless of their political inclinations….(More)”