House Bill Raises Questions about Crowdsourcing


Anne Bowser for Commons Lab (Wilson Center): “A new bill in the House is raising some key questions about how crowdsourcing is understood by scientists, government agencies, policymakers and the public at large.
Robin Bravender’s recent article in Environment & Energy Daily, “House Republicans Push Crowdsourcing on Agency Science” (subscription required), neatly summarizes the debate around H.R. 4012, a bill introduced in the House of Representatives earlier this month. The House Science, Space and Technology Committee held a hearing on the bill earlier this week, and the bill could see a committee vote as early as next month.
Dubbed the “Secret Science Reform Act of 2014,” the bill prohibits the Environmental Protection Agency (EPA) from “proposing, finalizing, or disseminating regulations or assessments based upon science that is not transparent or reproducible.” If the bill is passed, EPA would be unable to base assessments or regulations on any information not “publicly available in a manner that is sufficient for independent analysis.” This would include all information published in scholarly journals based on data that is not available as open source.
The bill is based on the premise that forcing EPA to use public data will inspire greater transparency by allowing “the crowd” to conduct independent analysis and interpretation. While the premise of involving the public in scientific research is sound, this characterization of crowdsourcing as a process separate from traditional scientific research is deeply problematic.
This division contrasts with the current practices of many researchers, who use crowdsourcing to directly involve the public in scientific processes. Galaxy Zoo, for example, enlists digital volunteers (called “citizen scientists”) to help classify more than 40 million photographs of galaxies taken by the Hubble Telescope. These crowdsourced morphological classifications are a powerful form of data analysis, a key aspect of the scientific process. Galaxy Zoo then publishes a catalogue of these classifications as an open-source data set. And the data reduction techniques and measures of confidence and bias for the data catalogue are documented in MNRAS, a peer-reviewed journal. A recent Google Scholar search shows that the data set published in MNRAS has been cited a remarkable 121 times.
As this example illustrates, crowdsourcing is often embedded in the process of formal scientific research. But prior to being published in a scientific journal, the crowdsourced contributions of non-professional volunteers are subject to the scrutiny of professional scientists through the rigorous process of peer review. Because peer review was designed as an institution to ensure objective and unbiased research, peer-reviewed scientific work is widely accepted as the best source of information for any science-based decision.
Separating crowdsourcing from the peer review process, as this legislation intends, means that there will be no formal filters in place to ensure that open data will not be abused by special interests. Ellen Silbergeld, a professor at Johns Hopkins University who testified at the hearing this week, made exactly this point when she pointed to data manipulation commonly practiced by tobacco lobbyists in the United States.
Contributing to scientific research is one goal of crowdsourcing for science. Involving the public in scientific research also increases volunteer understanding of research topics and the scientific process and inspires heightened community engagement. These goals are supported by President Obama’s Second Open Government National Action Plan, which calls for “increased crowdsourcing and citizen science programs” to support “an informed and active citizenry.” But H.R. 4012 does not support these goals. Rather, this legislation could further degrade the public’s understanding of science by encouraging the public to distrust professional scientists rather than collaborate with them.
Crowdsourcing benefits organizations by bringing in the unique expertise held by external volunteers, which can augment and enhance the traditional scientific process. In return, these volunteers benefit from exposure to new and exciting processes, such as scientific research. This mutually beneficial relationship depends on collaboration, not opposition. Supporting an antagonistic relationship between science-based organizations like the EPA and members of “the crowd” will benefit neither institutions, nor volunteers, nor the country as a whole.
 

The GovLab Index: Designing for Behavior Change


Please find below the latest installment in The GovLab Index series, inspired by the Harper’s Index. “The GovLab Index: Designing for Behavior Change” explores the recent application of psychology and behavioral economics towards solving social issues and shaping public policy and programs. Previous installments include The Networked Public, Measuring Impact with Evidence, Open Data, The Data Universe, Participation and Civic Engagement and Trust in Institutions.

  • Year the Behavioural Insights or “Nudge” Team was established by David Cameron in the U.K.: 2010
  • Amount the U.K. Courts Service saves per year, since the creation of the Nudge unit, by sending personalized text messages persuading people who owe fines to pay promptly: £30m
    • Entire budget for the Behavioural Insights Team: less than £1 million
    • Estimated reduction in bailiff interventions through the use of personalized text reminders: 150,000 fewer interventions annually
  • Percentage increase among British residents who paid their taxes on time when they received a letter saying that most citizens in their neighborhood pay their taxes on time: 15%
  • Estimated increase in organ-donor registrations in the U.K. if people are asked “If you needed an organ transplant, would you take one?”: 96,000
  • Proportion of employees who now have a workplace pension since the U.K. government switched from opt-in to opt-out (illustrating the power of defaults): 83%, up from 63% before the switch
  • Increase in 401(k) enrollment rates within the U.S. by changing the default from ‘opt in’ to ‘opt out’: from 13% to 80%
  • Behavioral studies have shown that consumers overestimate savings from credit cards with no annual fees. Reduction in overall borrowing costs to consumers by requiring card issuers to tell consumers how much it would cost them in fees and interest, under the 2009 CARD Act in the U.S.: 1.7% of average daily balances 
  • Many high school students and their families in the U.S. find financial aid forms for college complex and thus delay filling them out. Increase in college enrollment as a result of being helped to complete the FAFSA financial aid form by an H&R Block tax professional, who then provided immediate estimates of the amount of aid the student was eligible for, and the net tuition cost of four nearby public colleges: 26%
  • How much more likely people are to keep accounting records, calculate monthly revenues, and separate their home and business books if given simple “rules of thumb”-based training in managing their finances, according to a randomized control trial conducted in a bank in the Dominican Republic: 10%
  • Elderly Americans are asked to choose from over 40 options when enrolling in Medicare Part D private drug plans. How many switched plans to save money when they received a letter providing information about three plans that would be cheaper for them: almost double
    • The amount saved on average per person by switching plans due to this intervention: $150 per year
  • Increase in prescriptions to manage cardiac disease when Medicaid enrollees are sent a suite of behavioral nudges such as more salient description of the consequences of remaining untreated and post-it note reminders during an experiment in the U.S.: 78%
  • Reduction in street-litter when a trail of green footprints leading to nearby garbage cans is stenciled on the ground during an experiment in Copenhagen, Denmark: 46%
  • Reduction in missed National Health Service appointments in the U.K. when patients are asked to fill out their own appointment cards: 18%
    • Reduction in missed appointments when patients are also made aware of the number of people who attend their appointments on time: 31%
    • The cost of non-attendance per year for the National Health Service: £700m 
  • How many people in a U.S. experiment chose to ‘downsize’ their meals when asked, regardless of whether they received a discount for the smaller portion: 14-33%
    • Average reduction in calories as a result of downsizing: 200
  • Proportion of households in the U.K. without properly insulated attics, leading to high energy consumption and bills: 40%
    • Result of offering group discounts to motivate households to insulate their attics: no effect
    • Increase in households that agreed to insulate their attics when offered loft-clearing services, even though they had to pay for the service: 4.8-fold

Full list and sources at http://thegovlab.org/the-govlab-index-designing-for-behavior-change/
 

Innovating for the Global South: New book offers practical insights


Press Release: “Despite the vast wealth generated in the last half century, in today’s world inequality is worsening and poverty is becoming increasingly chronic. Hundreds of millions of people continue to live on less than $2 per day and lack basic human necessities such as nutritious food, shelter, clean water, primary health care, and education.
Innovating for the Global South: Towards an Inclusive Innovation Agenda, the latest book from Rotman-UTP Publishing and the first volume in the Munk Series on Global Affairs, offers fresh solutions for reducing poverty in the developing world. Highlighting the multidisciplinary expertise of the University of Toronto’s Global Innovation Group, leading experts from the fields of engineering, public health, medicine, management, and public policy examine the causes and consequences of endemic poverty and the challenges of mitigating its effects from the perspective of the world’s poorest of the poor.
Can we imagine ways to generate solar energy to run essential medical equipment in the countryside? Can we adapt information and communication technologies to provide up-to-the-minute agricultural market prices for remote farming villages? How do we create more inclusive innovation processes to hear the voices of those living in urban slums? Is it possible to reinvent a low-cost toilet that operates beyond the water and electricity grids?
Motivated by the imperatives of developing, delivering, and harnessing innovation in the developing world, Innovating for the Global South is essential reading for managers, practitioners, and scholars of development, business, and policy.
“As we see it, Innovating for the Global South is fundamentally about innovating scalable solutions that mitigate the effects of poverty and underdevelopment in the Global South. It is not about inventing some new gizmo for some untapped market in the developing world,” say Profs. Dilip Soman and Joseph Wong of the University of Toronto, two of the volume’s editors.
The book is edited by, and features contributions from, three leading University of Toronto thinkers who tackle innovation in the Global South from three different academic perspectives.

  • Dilip Soman is Corus Chair in Communication Strategy and a professor of Marketing at the Rotman School of Management.
  • Janice Gross Stein is the Belzberg Professor of Conflict Management in the Department of Political Science and Director of the Munk School of Global Affairs.
  • Joseph Wong is Ralph and Roz Halbert Professor of Innovation at the Munk School of Global Affairs and Canada Research Chair in Democratization, Health, and Development in the Department of Political Science.

The chapters in the book address the process of innovation from a number of vantage points.
Introduction: Rethinking Innovation – Joseph Wong and Dilip Soman
Chapter 1: Poverty, Invisibility, and Innovation – Joseph Wong
Chapter 2: Behaviourally Informed Innovation – Dilip Soman
Chapter 3: Appropriate Technologies for the Global South – Yu-Ling Cheng (University of Toronto, Chemical Engineering and Applied Chemistry) and Beverly Bradley (University of Toronto, Centre for Global Engineering)
Chapter 4: Globalization of Biopharmaceutical Innovation: Implications for Poor-Market Diseases – Rahim Rezaie (University of Toronto, Munk School of Global Affairs, Research Fellow)
Chapter 5: Embedded Innovation in Health – Anita M. McGahan (University of Toronto, Rotman School of Management, Associate Dean of Research), Rahim Rezaie and Donald C. Cole (University of Toronto, Dalla Lana School of Public Health)
Chapter 6: Scaling Up: The Case of Nutritional Interventions in the Global South – Ashley Aimone Phillips (Registered Dietitian), Nandita Perumal (University of Toronto, Doctoral Fellow, Epidemiology), Carmen Ho (University of Toronto, Doctoral Fellow, Political Science), and Stanley Zlotkin (University of Toronto and the Hospital for Sick Children, Paediatrics, Public Health Sciences and Nutritional Sciences)
Chapter 7: New Models for Financing Innovative Technologies and Entrepreneurial Organizations in the Global South – Murray R. Metcalfe (University of Toronto, Centre for Global Engineering, Globalization)
Chapter 8: Innovation and Foreign Policy – Janice Gross Stein
Conclusion: Inclusive Innovation – Will Mitchell (University of Toronto, Rotman School of Management, Strategic Management), Anita M. McGahan”


Why SayIt is (partly) a statement about the future of Open Data


Tom Steinberg from MySociety: “This is where SayIt comes in, as an example of a relatively low-cost approach to making sure that the next generation of government IT systems do produce Open Data.
SayIt is a newly launched open source tool for publishing transcripts of trials, debates, interviews and so on. It publishes them online in a way that matches modern expectations about how stuff should work on the web – responsive, searchable and so on. It’s being built as a Poplus Component, which means it’s part of an international network of groups collaborating on shared technologies. Here’s JK Rowling being interviewed, published via SayIt.
But how does this little tool relate to the business of getting governments to release more Open Data? Well, SayIt isn’t just about publishing data, it’s about making it too – in a few months we’ll be sharing an authoring interface for making new transcripts from whatever source a user has access to.
We hope that, having iterated and improved this authoring interface, SayIt can become the tool of choice for public sector transcribers, replacing whatever tool they use today (almost certainly Word). Then, if they use SayIt instead of Word to make a transcript, it will produce new, instantly-online Open Data every time they use it….
But we can’t expect the public sector to use a tool like SayIt to make new Open Data unless it is cheaper, better and less burdensome than whatever they’re using now. We can’t – quite simply – expect to sell government procurement officers a new product mainly on the virtues of Open Data. This means the tough task of persuading government employees that there is a new tool that is head-and-shoulders better than Excel or Word for certain purposes: formidable, familiar products that are much better than their critics like to let on.
So in order for SayIt to replace the current tools used by any current transcriber, it’s going to have to be really, really good. And really trustworthy. And it’s going to have to be well marketed. And that’s why we’ve chosen to build SayIt as an international, open source collaboration – as a Poplus Component. Because we think that without the billions of dollars it takes to compete with Microsoft, our best hope is to develop very narrow tools that do 0.01% of what Word does, but which do that one thing really really well. And our key strategic advantage, other than the trust that comes with Open Source and Open Standards, is the energy of the global civic hacking and government IT reform sector. SayIt is far more likely to succeed if it has ideas and inputs from contributors from around the world.

Regardless of whether or not SayIt ever succeeds in penetrating inside governments, this post is about an idea that such an approach represents. The idea is that people can advance the Open Data agenda not just by lobbying, but also by building and popularising tools that mean that data is born open in the first place. I hope this post will encourage more people to work on such tools, either on your own, or via collaborations like Poplus.”
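The core idea here — transcripts “born open,” structured and machine-readable from the moment they are typed rather than trapped in a word processor — can be sketched in a few lines. This is an illustrative toy, not SayIt’s actual data model; the field names are assumptions made up for the example:

```python
import json

def make_speech(speaker, text):
    """Return one structured speech turn (hypothetical schema)."""
    return {"speaker": speaker, "text": text}

# Instead of prose in a .doc file, the transcript is structured data
# from the start, so it is searchable and publishable the moment it exists.
transcript = {
    "title": "Committee hearing (example)",
    "speeches": [
        make_speech("Chair", "The committee will come to order."),
        make_speech("Witness", "Thank you for the invitation to testify."),
    ],
}

# Serializing it yields instantly-online Open Data with no conversion step.
print(json.dumps(transcript, indent=2))
```

The point of the sketch is the workflow, not the format: because the authoring tool emits structured records rather than formatted prose, “releasing the data” requires no extra effort from the transcriber.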

Big Data and the Future of Privacy


John Podesta at the White House blog: “Last Friday, the President spoke to the American people, and the international community, about how to keep us safe from terrorism in a changing world while upholding America’s commitment to liberty and privacy that our values and Constitution require. Our national security challenges are real, but that is surely not the only space where changes in technology are altering the landscape and challenging conceptions of privacy.
That’s why in his speech, the President asked me to lead a comprehensive review of the way that “big data” will affect the way we live and work; the relationship between government and citizens; and how public and private sectors can spur innovation and maximize the opportunities and free flow of this information while minimizing the risks to privacy. I will be joined in this effort by Secretary of Commerce Penny Pritzker, Secretary of Energy Ernie Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Gene Sperling and other senior government officials.
I would like to explain a little bit more about the review, its scope, and what you can expect over the next 90 days.
We are undergoing a revolution in the way that information about our purchases, our conversations, our social networks, our movements, and even our physical identities are collected, stored, analyzed and used. The immense volume, diversity and potential value of data will have profound implications for privacy, the economy, and public policy. The working group will consider all those issues, and specifically how the present and future state of these technologies might motivate changes in our policies across a range of sectors.
When we complete our work, we expect to deliver to the President a report that anticipates future technological trends and frames the key questions that the collection, availability, and use of “big data” raise – both for our government, and the nation as a whole. It will help identify technological changes to watch, assess whether those changes are addressed by the current U.S. policy framework, and highlight where further government action, funding, research, and consideration may be required.
This is going to be a collaborative effort. The President’s Council of Advisors on Science and Technology (PCAST) will conduct a study to explore in-depth the technological dimensions of the intersection of big data and privacy, which will feed into this broader effort. Our working group will consult with industry, civil liberties groups, technologists, privacy experts, international partners, and other national and local government officials on the significance of and future for these technologies. Finally, we will be working with a number of think tanks, academic institutions, and other organizations around the country as they convene stakeholders to discuss these very issues and questions. Likewise, many abroad are analyzing and responding to the challenge and seizing the opportunity of big data. These discussions will help to inform our study.
While we don’t expect to answer all these questions, or produce a comprehensive new policy in 90 days, we expect this work to serve as the foundation for a robust and forward-looking plan of action. Check back on this blog for updates on how you can get involved in the debate and for status updates on our progress.”

How Government Can Make Open Data Work


Joel Gurin in Information Week: “At the GovLab at New York University, where I am senior adviser, we’re taking a different approach than McKinsey’s to understand the evolving value of government open data: We’re studying open data companies from the ground up. I’m now leading the GovLab’s Open Data 500 project, funded by the John S. and James L. Knight Foundation, to identify and examine 500 American companies that use government open data as a key business resource.
Our preliminary results show that government open data is fueling companies both large and small, across the country, and in many sectors of the economy, including health, finance, education, energy, and more. But it’s not always easy to use this resource. Companies that use government open data tell us it is often incomplete, inaccurate, or trapped in hard-to-use systems and formats.
It will take a thorough and extended effort to make government data truly useful. Based on what we are hearing and the research I did for my book, here are some of the most important steps the federal government can take, starting now, to make it easier for companies to add economic value to the government’s data.
1. Improve data quality
The Open Data Policy not only directs federal agencies to release more open data; it also requires them to release information about data quality. Agencies will have to begin improving the quality of their data simply to avoid public embarrassment. We can hope and expect that they will do some data cleanup themselves, demand better data from the businesses they regulate, or use creative solutions like turning to crowdsourcing for help, as USAID did to improve geospatial data on its grantees.
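A first pass at the kind of quality problems companies report — incomplete, inaccurate, or malformed records — can be mechanical. The sketch below shows the sort of completeness and validity audit a data consumer might run before relying on a government dataset; the field names and sample records are hypothetical:

```python
# Hypothetical open-data records with typical quality defects.
records = [
    {"agency": "EPA", "year": "2013", "budget": "1200000"},
    {"agency": "", "year": "20l3", "budget": "950000"},   # missing agency, garbled year
    {"agency": "DOE", "year": "2013", "budget": None},    # missing value
]

def audit(rows):
    """Count rows with a missing or malformed field."""
    problems = 0
    for row in rows:
        if not row["agency"] or row["budget"] is None:
            problems += 1          # incomplete record
        elif not str(row["year"]).isdigit():
            problems += 1          # malformed year
    return problems

print(audit(records))  # → 2
```

Even a simple audit like this makes data quality measurable, which is what the Open Data Policy’s reporting requirement amounts to in practice.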
 
 

2. Keep improving open data resources
The government has steadily made Data.gov, the central repository of federal open data, more accessible and useful, including a significant relaunch last week. To the agency’s credit, the GSA, which administers Data.gov, plans to keep working to make this key website still better. As part of implementing the Open Data Policy, the administration has also set up Project Open Data on GitHub, the world’s largest community for open-source software. These resources will be helpful for anyone working with open data either inside or outside of government. They need to be maintained and continually improved.
3. Pass DATA
The Digital Accountability and Transparency Act would bring transparency to federal government spending at an unprecedented level of detail. The Act has strong bipartisan support. It passed the House with only one dissenting vote and was unanimously approved by a Senate committee, but still needs full Senate approval and the President’s signature to become law. DATA is also supported by technology companies who see it as a source of new open data they can use in their businesses. Congress should move forward and pass DATA as the logical next step in the work that the Obama administration’s Open Data Policy has begun.
4. Reform the Freedom of Information Act
Since it was passed in 1966, the federal Freedom of Information Act has gone through two major revisions, both of which strengthened citizens’ ability to access many kinds of government data. It’s time for another step forward. Current legislative proposals would establish a centralized web portal for all federal FOIA requests, strengthen the FOIA ombudsman’s office, and require agencies to post more high-interest information online before they receive formal requests for it. These changes could make more information from FOIA requests available as open data.
5. Engage stakeholders in a genuine way
Up to now, the government’s release of open data has largely been a one-way affair: Agencies publish datasets that they hope will be useful without consulting the organizations and companies that want to use them. Other countries, including the UK, France, and Mexico, are building in feedback loops from data users to government data providers, and the US should, too. The Open Data Policy calls for agencies to establish points of contact for public feedback. At the GovLab, we hope that the Open Data 500 will help move that process forward. Our research will provide a basis for new, productive dialogue between government agencies and the businesses that rely on them.
6. Keep using federal challenges to encourage innovation
The federal Challenge.gov website applies the best principles of crowdsourcing and collective intelligence. Agencies should use this approach extensively, and should pose challenges using the government’s open data resources to solve business, social, or scientific problems. Other approaches to citizen engagement, including federally sponsored hackathons and the White House Champions of Change program, can play a similar role.
Through the Open Data Policy and other initiatives, the Obama administration has set the right goals. Now it’s time to implement and move toward what US CTO Todd Park calls “data liberation.” Thousands of companies, organizations, and individuals will benefit.”

Don’t believe the hype about behavioral economics


Allison Schrager: “I have a confession to make: I think behavioral economics is overrated. Recently, Nobelist Robert Shiller called on economists to incorporate more psychology into their work. While there are certainly things economists can learn from psychology and other disciplines to enrich their understanding of the economy, this approach is not a revolution in economics. Often the models that incorporate richer aspects of human behavior are the same models economists always use—they simply rationalize seemingly irrational behavior. Even if we can understand why people don’t always act rationally, it’s not clear whether that understanding can lead to better economic policy and regulation.

Mixing behavioral economics and policy raises two questions: should we change behavior, and if so, can we? Sometimes people make bad choices—they under-save, or take on too much debt or risk. These behaviors appear irrational and lead to bad outcomes, which would seem to demand more regulation. But if these choices reflect individuals’ preferences and values, can we justify changing their behavior? Part of a free society is letting people make bad choices, as long as their irrational economic behavior doesn’t impose costs on others. For example, someone who under-saves may wind up dependent on taxpayers for financial support, and high household debt has been associated with a weaker economy.

It’s been argued that irrational economic behavior merits regulation to encourage or force choices that will benefit both the individual and the economy as a whole. But the limits of these policies are apparent in a new OECD report on the application of behavioral economics to policy. The report gives examples of regulations adopted by different OECD countries that draw on insights from behavioral economics. Thus it’s disappointing that, with all that economists have learned from studying behavioral economics over the last ten years, the big changes in regulation seem limited to more transparent fee disclosure, a ban on automatically selling people more goods than they explicitly ask for, and standard disclosures of fees and energy use. These are certainly good policies. But are they a result of behavioral economics (helping consumers overcome behavioral biases that lead to sub-optimal choices), or simply a matter of requiring banks and merchants to be more honest?

Poor risk management and short-term thinking on Wall Street nearly took down the entire financial system. Can what we know about behavioral finance regulate Wall Street? According to Shiller, markets are inefficient and misprice assets because of behavioral biases (over-confidence, under-reaction to news, home bias). This leads to speculative bubbles. But it’s not clear what financial regulation can do to curb this behavior. According to Gene Fama, Shiller’s co-laureate, who believes markets are rational (disclosure: I used to work at Dimensional Fund Advisors, where Fama is a consultant and shareholder), it’s not possible to systematically separate “irrational” behavior (that distorts prices) from healthy speculation, which aids price discovery. If speculators (who have an enormous financial interest) don’t know better, how can we expect regulators to?…

So far, the most promising use of behavioral economics has been in retirement saving. Automatically enrolling people in a company pension plan and automatically raising their contribution rates has been found to increase savings, especially among people not otherwise inclined to save. That is probably why the OECD report concedes that behavioral economics has had its biggest impact in retirement saving….

The OECD report cites some other new policies based on behavioral science, like the 2009 CARD Act in America. Credit card statements used to list only the minimum required payment, which people may have interpreted as a suggested payment plan; many wound up taking years to pay off their balances and incurring large fees. Now, in the US, statements must show how much it will cost to pay off your balance within 36 months, as well as the time and cost required to repay your balance if you pay only the minimum. It’s still too early to see how this will affect behavior, but a 2013 study suggests it will offer only modest savings to consumers, perhaps because the bias to undervalue the future still exists.
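The gap the disclosure is meant to expose is easy to sketch numerically. The figures below are illustrative assumptions, not the CARD Act’s actual formula: an 18% APR, a $3,000 balance, and a common minimum-payment rule of 1% of the balance plus that month’s interest with a $25 floor (issuer terms vary).

```python
def months_to_repay(balance, apr, payment_fn, max_months=600):
    """Simulate month-by-month repayment; return (months, total paid)."""
    monthly_rate = apr / 12
    months, total_paid = 0, 0.0
    while balance > 0.005 and months < max_months:
        interest = balance * monthly_rate
        payment = min(payment_fn(balance), balance + interest)
        balance = balance + interest - payment
        total_paid += payment
        months += 1
    return months, total_paid

balance, apr = 3000.0, 0.18  # assumed example terms

# Assumed minimum-payment rule: 1% of the balance plus that month's
# interest, with a $25 floor.
def minimum_payment(bal):
    return max(25.0, bal * 0.01 + bal * apr / 12)

# Fixed payment that amortizes the balance in exactly 36 months.
r, n = apr / 12, 36
fixed = balance * r / (1 - (1 + r) ** -n)

m_min, paid_min = months_to_repay(balance, apr, minimum_payment)
m_36, paid_36 = months_to_repay(balance, apr, lambda bal: fixed)
print(f"minimum payments: {m_min} months, ${paid_min:,.0f} paid")
print(f"36-month plan:    {m_36} months, ${paid_36:,.0f} paid")
```

Under these assumed terms, the minimum-payment path stretches to well over a decade and costs substantially more in interest than the 36-month plan; putting those two numbers side by side on the statement is exactly what the disclosure requirement does.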

But what’s striking from the OECD report is that, when it comes to the behavioral biases that contributed to the financial crisis (speculation on housing, too much housing debt, under-estimating risk), few policies have drawn on what we’ve learned.”

How should we analyse our lives?


Gillian Tett in the Financial Times on the challenge of using the new form of data science: “A few years ago, Alex “Sandy” Pentland, a professor of computational social sciences at MIT Media Lab, conducted a curious experiment at a Bank of America call centre in Rhode Island. He fitted 80 employees with biometric devices to track all their movements, physical conversations and email interactions for six weeks, and then used a computer to analyse “some 10 gigabytes of behaviour data”, as he recalls.
The results showed that the workers were isolated from each other, partly because at this call centre, like others of its ilk, the staff took their breaks in rotation so that the phones were constantly manned. In response, Bank of America decided to change its system to enable staff to hang out together over coffee and swap ideas in an unstructured way. Almost immediately there was a dramatic improvement in performance. “The average call-handle time decreased sharply, which means that the employees were much more productive,” Pentland writes in his forthcoming book Social Physics. “[So] the call centre management staff converted the break structure of all their call centres to this new system and forecast a $15m per year productivity increase.”
When I first heard Pentland relate this tale, I was tempted to give a loud cheer on behalf of all long-suffering call centre staff and corporate drones. Pentland’s data essentially give credibility to a point that many people know instinctively: that it is horribly dispiriting – and unproductive – to have to toil in a tiny isolated cubicle by yourself all day. Bank of America deserves credit both for letting Pentland’s team engage in this people-watching – and for changing its coffee-break schedule in response.
But there is a bigger issue at stake here too: namely how academics such as Pentland analyse our lives. We have known for centuries that cultural and social dynamics influence how we behave but until now academics could usually only measure this by looking at micro-level data, which were often subjective. Anthropology (a discipline I know well) is a case in point: anthropologists typically study cultures by painstakingly observing small groups of people and then extrapolating this in a subjective manner.

Pentland and others like him are now convinced that the great academic divide between “hard” and “soft” sciences is set to disappear, since researchers these days can gather massive volumes of data about human behaviour with precision. Sometimes this information is volunteered by individuals, on sites such as Facebook; sometimes it can be gathered from the electronic traces – the “digital breadcrumbs” – that we all deposit (when we use a mobile phone, say) or deliberately collected with biometric devices like the ones used at Bank of America. Either way, it can enable academics to monitor and forecast social interaction in a manner we could never have dreamed of before. “Social physics helps us understand how ideas flow from person to person . . . and ends up shaping the norms, productivity and creative output of our companies, cities and societies,” writes Pentland. “Just as the goal of traditional physics is to understand how the flow of energy translates into change in motion, social physics seeks to understand how the flow of ideas and information translates into changes in behaviour….

But perhaps the most important point is this: whether you love or hate this new form of data science, the genie cannot be put back in the bottle. The experiments that Pentland and many others are conducting at call centres, offices and other institutions across America are simply the leading edge of a trend.

The only question now is whether these powerful new tools will be mostly used for good (to predict traffic queues or flu epidemics) or for more malevolent ends (to enable companies to flog needless goods, say, or for government control). Sadly, “social physics” and data crunching don’t offer any prediction on this issue, even though it is one of the dominant questions of our age.”

From funding agencies to scientific agency


New paper on “Collective allocation of science funding as an alternative to peer review”: “Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers’ money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.

Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.

However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend considerable time and effort to assess and rate proposals, only a minority of which can eventually be funded. Such a high rejection rate is, of course, also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place.

Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub-optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls for proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.

The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post-hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].

We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision-making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity-driven research.

Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year’s funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year’s budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.”
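A toy simulation can make the mechanics of the proposed scheme concrete. Every parameter here is invented for illustration and not taken from the paper: the basic-grant amount, the 50% pass-on fraction, the number of recipients each scientist funds, and the hypothetical “merit” scores standing in for peers’ judgement of who would make the best use of the money.

```python
import random

def simulate(n=100, years=20, basic_grant=100_000.0,
             donate_fraction=0.5, recipients=5, seed=1):
    """Toy run of a collective-allocation scheme: every scientist gets
    the same unconditional basic grant, then must redistribute a fixed
    fraction of last year's total funding to peers of their choosing."""
    rng = random.Random(seed)
    # Hypothetical 'merit' scores driving whom peers choose to fund.
    merit = [rng.random() for _ in range(n)]
    funding = [basic_grant] * n
    for _ in range(years):
        new_funding = [basic_grant] * n
        for i in range(n):
            pot = funding[i] * donate_fraction
            peers = [j for j in range(n) if j != i]
            weights = [merit[j] for j in peers]
            # Split the pot equally among a few peers, drawn with
            # probability proportional to perceived merit.
            for j in rng.choices(peers, weights=weights, k=recipients):
                new_funding[j] += pot / recipients
        funding = new_funding
    return funding, merit

funding, merit = simulate()
print(f"total funding: ${sum(funding):,.0f}")
ranked = sorted(range(len(merit)), key=lambda i: merit[i])
low = sum(funding[i] for i in ranked[:10]) / 10
high = sum(funding[i] for i in ranked[-10:]) / 10
print(f"bottom-decile merit avg: ${low:,.0f}; top-decile avg: ${high:,.0f}")
```

Two properties of the mechanism show up even in this sketch: total funding is conserved and converges toward n × basic_grant / (1 − fraction), and money accumulates with the scientists their peers favour while everyone keeps the unconditional floor, which is the qualitative behaviour the authors describe.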