Policy bubbles: What factors drive their birth, maturity and death?


Moshe Maor at LSE Blog: “A policy bubble is a real or perceived policy overreaction that is reinforced by positive feedback over a relatively long period of time. This type of policy imposes objective and/or perceived social costs without producing offsetting objective and/or perceived benefits over a considerable length of time. A case in point is when government spending on a policy problem increases due to public demand for more policy while the severity of the problem decreases over an extended period of time. Another case is when governments raise ‘green’ or other standards due to public demand while the severity of the problem does not justify this move…
Drawing on insights from a variety of fields – including behavioural economics, psychology, sociology, political science and public policy – three phases of the life-cycle of a policy bubble may be identified: birth, maturity and death. A policy bubble may emerge when certain individuals perceive opportunities to gain from public policy or to exploit it by rallying support for the policy, promoting word-of-mouth enthusiasm and widespread endorsement of the policy, heightening expectations for further policy, and increasing demand for this policy….
How can one identify a policy bubble? A policy bubble may be identified by measuring parliamentary concerns, media concerns, public opinion regarding the policy at hand, and the extent of the policy problem, against the budget allocation to said policy over the same period, preferably over 50 years or more. Measuring the operation of different transmission mechanisms in emotional contagion and human herding, particularly the spread of social influence and feeling, can also help to identify a policy bubble.
Here, computer-aided content analysis of verbal and non-verbal communication in social networks, especially instant messaging, may capture emotional and social contagion. A further way to identify a policy bubble is to study bubble expectations and individuals’ confidence over time by distributing a questionnaire to a random sample of the population, to experts in the relevant policy sub-field and to decision makers, and comparing the results across time and nations.
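Maor's first test lends itself to a simple operationalization: standardize the budget series and the problem-severity series so they are comparable, then flag sustained runs of years in which spending sits well above what severity would suggest. Below is a minimal sketch in Python; the one-standard-deviation gap and the ten-year minimum run are illustrative assumptions, not Maor's specification.

```python
import statistics

def zscores(series):
    """Standardize a series so differently scaled indicators are comparable."""
    mean, sd = statistics.mean(series), statistics.stdev(series)
    return [(x - mean) / sd for x in series]

def bubble_years(budget, severity, min_run=10):
    """Flag sustained runs where budget allocation sits well above trend
    while the measured severity of the problem sits below it."""
    gap = [b - s for b, s in zip(zscores(budget), zscores(severity))]
    flagged, run = [], []
    for year, g in enumerate(gap):
        if g > 1.0:                  # spending outpaces severity by > 1 sd
            run.append(year)
        else:
            if len(run) >= min_run:  # only long-lived divergence counts
                flagged.extend(run)
            run = []
    if len(run) >= min_run:
        flagged.extend(run)
    return flagged
```

The same shape of test works for the parliamentary, media and public-opinion series Maor lists: each is just another indicator to hold up against the budget line.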
To sum up, my interpretation of the process that leads to the emergence of policy bubbles allows for the possibility that different modes of policy overreaction lead to different types of human herding, thereby resulting in different types of policy bubbles. This interpretation has the added benefit of contributing to the explanation of economic, financial, technological and social bubbles as well”

Using predictive analytics and rapid-cycle evaluation to improve program design and results


An interview with Scott Cody, Vice President of Mathematica Policy Research, at the GovInnovator Blog: “What are predictive analytics and rapid-cycle evaluation, and how can public agencies and programs use them to improve program delivery and outcomes? To explore these questions, we’re joined by Scott Cody. He’s a Vice President of Mathematica Policy Research and the co-author, with Andrew Asher, of a recent paper, “Smarter, Better, Faster: The Potential for Predictive Analytics and Rapid-Cycle Evaluation to Improve Program Development and Outcomes,” published by the Hamilton Project at the Brookings Institution.”

The Myth of Everybody


at Medium: “What is the difference between “with” and “for”? “With” implies togetherness, a network: a larger group, possibly, a messier group, but a group (meaning 2 people+) nonetheless. Acting “with” others implies certain degrees of collaboration, collective action, coordination, and even unity. You run a three-legged race with your partner (or you’re going to fall). When you use the word “with” it means that, however many people are involved, whatever their individual roles, they’re acting as one — or at least, towards a shared goal.

By contrast, when we use the word “for” we center on the experience of individuals in a relationship, with one unit acting on behalf of or doing something to another. (“For another.”) In the “for” universe, there’s usually a receiver and a giver. There can be many people involved or few, but there are almost always actors and those acted upon. In a democracy like ours, where we have government of, by, and for the people, we understand that when we vote for an elected representative, they are then empowered to speak and act for us. To govern for us….but with our consent.

Representative democracy in action.

At least, that’s the way it’s described in textbooks. In reality, however, governance is awash with intermediaries: companies, contractors, public/private partnerships, lobbyists, NGOs, think tanks — organizations of people, formal and informal, that support, distribute, and sometimes do the work of our government for our government and for us. This (very simplified overview of our) system of proxies isn’t necessarily good or bad; it’s just the way we’ve structured things to work in the US.

Why? Well, because we govern in a “for” system. Because there are so many of us and our lives are interconnected. Because we balance majority rule with minority rights. Because of all the reasons you learned in social studies class (if you went to a US public high school) and because this is the way most of us believe society has to work.

But there are other ways.

— Take your hand off the “COMMUNIST” alarm. I’m talking about the “civic” revolution.

In the last 6 or so years, as the buzz around “Gov2.0” waned, obsession with “civic”-ness waxed. What “civic” means exactly, well, we’re all still figuring that out. Sure, there are official definitions that relate “civic” to all things local…and overlapping understandings of “civics” that lend the air of government involvement…but with increasing interest from folks in the tech and innovation sectors (and funders), the word has taken on new shape. Today, “civic” is the center of a Venn Diagram encircling notions commonly associated with “society,” “community,” “governance,” and public commons (or goods). The sheen of social impact, social responsibility, and “community-ness” — that’s what terms of art like “civic innovation,” “civic engagement,” “civic decisions,” “civic participation”, and “civic tech” are all trying to describe.

To be clear, it’s not that this intersection of societal something hasn’t been outlined before: language like “social” (see “social innovation”) and civil (see “civil society”) has been used to describe similar concepts for decades. “Civic” is just the newest coat of paint, its popularity driven in part by interest from NGOs, start-ups, digital strategists, and governing bodies attempting to bring new flavor and energy to long-standing questions, like

How can we make democracy work? What can we do to make the systems in place work better? And what do we need to change to make systems work better for everybody?…”

The Quiet Revolution: Open Data Is Transforming Citizen-Government Interaction


Maury Blackman at Wired: “The public’s trust in government is at an all-time low. This is not breaking news.
But what if I told you that just this past May, President Obama signed into law a bill that passed Congress with unanimous support? A bill that could fundamentally transform the way citizens interact with their government. This legislation could also create an entirely new, trillion-dollar industry right here in the U.S. It could even save lives.
On May 9th, the Digital Accountability and Transparency Act of 2014 (DATA Act) became law. There were very few headlines, no Rose Garden press conference.
I imagine most of you have never heard of the DATA Act. The bill with the nerdy name has the potential to revolutionize government. It requires federal agencies to make their spending data available in standardized, publicly accessible formats. Supporters of the legislation included Tea Partiers and the most liberal Democrats. But the bill only scratches the surface of what’s possible.
So What’s the Big Deal?
On his first day in office, President Obama signed a memorandum calling for a more open and transparent government. The President wrote, “Openness will strengthen our democracy and promote efficiency and effectiveness in Government.” This was followed by the creation of Data.gov, a one-stop shop for all government data. The site does not just include financial data, but also a wealth of other information related to education, public safety, climate and much more—all available in open and machine-readable formats. This has helped fuel an international movement.
Tech-minded citizens are building civic apps to bring government into the digital age; reporters can now connect the dots more easily; and billions of taxpayer dollars have been saved. And last year the President took us a step further: he signed an Executive Order making open government data the default option.
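As an aside on what “machine-readable by default” buys those tech-minded citizens: Data.gov's catalog runs on CKAN, so the entire federal catalog can be searched in a few lines of code. A minimal sketch, assuming the standard CKAN v3 action API exposed at catalog.data.gov (the query string is just an example):

```python
import requests

def search_datasets(query, rows=5):
    """Search the Data.gov catalog via CKAN's package_search action."""
    resp = requests.get(
        "https://catalog.data.gov/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["result"]
    print(f"{result['count']} datasets match {query!r}")
    for pkg in result["results"]:
        print("-", pkg["title"])

search_datasets("federal spending")
```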
Cities and states have followed Washington’s lead with similar open data efforts at the local level. In San Francisco, the city’s Human Services Agency has partnered with Promptly, a text-message notification service that alerts food stamp (CalFresh) recipients when they are at risk of being disenrolled from the program. This service is incredibly beneficial, because most recipients do not realize their status has changed until they are in the grocery store checkout line, trying to buy food for their family.
Other products and services created using open data do more than just provide an added convenience—they actually have the potential to save lives. The PulsePoint mobile app sends text messages to citizens trained in CPR when someone within walking distance is experiencing a medical emergency that may require CPR. The app is currently available in almost 600 cities in 18 states, which is great. But shouldn’t a product this valuable be available to every city and state in the country?…”
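PulsePoint's dispatch logic is tied to 911 systems and is not public, but the matching step the article describes is, at its core, a radius query: find every CPR-trained volunteer within walking distance of the emergency. A hedged sketch, where the 400-meter radius and the responder records are assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def nearby_responders(emergency, responders, radius_m=400):
    """Volunteers within roughly a five-minute walk of the emergency."""
    lat, lon = emergency
    return [r for r in responders
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= radius_m]

volunteers = [{"name": "Ana", "lat": 37.7790, "lon": -122.4190},
              {"name": "Ben", "lat": 37.8044, "lon": -122.2712}]
print(nearby_responders((37.7793, -122.4193), volunteers))  # Ana only
```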

This Exercise App Tracks Trends on How We Move In Different Cities


Mark Byrnes at CityLab: “An app designed to encourage exercise can also tell us a lot about the way different cities get from point A to B.
The app, called Human, runs in the background of your iPhone, automatically detecting activities like walking, cycling, running, and motorized transport. The point is to encourage you to exercise for at least 30 minutes a day.
Almost a year after Human launched (last August), its developers have released a stunning visualization of all that movement: 7.5 million miles traveled by their app users so far.
On their site, you can look into the mobility data inside 30 different cities. Once you click on one, you’ll be greeted with a pie chart that shows the distribution of activity within that city lined up against a pie chart that shows the international average.
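The article doesn't say how Human tells those activities apart (the phone's motion sensors do the detection). As a rough stand-in, one can classify trip segments by average speed and aggregate durations into the per-city shares the pie charts display; the thresholds below are illustrative assumptions, not Human's actual logic:

```python
def classify_segment(speed_kmh):
    """Map a segment's average speed to one of the app's four activities."""
    if speed_kmh < 7:
        return "walking"
    if speed_kmh < 13:
        return "running"
    if speed_kmh < 28:
        return "cycling"
    return "motorized"

def activity_distribution(segments):
    """Turn (speed, minutes) segments into the shares a pie chart shows."""
    totals = {}
    for speed_kmh, minutes in segments:
        kind = classify_segment(speed_kmh)
        totals[kind] = totals.get(kind, 0) + minutes
    grand = sum(totals.values())
    return {kind: mins / grand for kind, mins in totals.items()}

print(activity_distribution([(5, 30), (20, 45), (50, 15)]))
# {'walking': 0.33..., 'cycling': 0.5, 'motorized': 0.16...}
```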
In the case of Amsterdam, its transportation clichés are verified. App users in the bike-loving city use two wheels way more than they use four. And they walk about as much as anywhere else:

Human then shows the paths traveled by their users. When it comes to Amsterdam, the results look almost exactly like the city’s entire street grid, no matter what physical activity is being shown:

Powerful new patent service shows every US invention, and a new view of R&D relationships


at GigaOm: “The U.S. Patent Office website is famously clunky: searching and sorting patents can feel like playing an old Atari game rather than watching innovation at work. But now a young inventor has come along with a tool to build a better patent office.
The service is called Trea, and was launched by Max Yuan, an engineer who received a patent of his own for a bike motor in 2007. After writing a tool to download patents related to his own invention, he expanded the process to slurp every patent and image in the USPTO database, and compile the information in a user-friendly interface.
Trea has been in beta for a while, but will formally launch on Wednesday. The tool not only provides an easy way to see what inventions a company or inventor is patenting, but also shows the fields in which they are most active. Here is a screenshot from Trea that shows what Apple has been up to in the last 12 months:
Trea screenshot of Apple inventions
Such information could be valuable to investors or to companies that want to use the filings as a way to track what might be in their competitors’ product pipelines. The Trea database also probes the USPTO for new filings, and can send alerts to subscribers. Yuan has also created a Twitter account just for new Apple filings.
Trea also draws on the patent database to display what Yuan calls a “unified knowledge graph” of relationships between inventors. Pictures, like the one below for IBM, show clusters of inventors and, at a broader level, the viral transmission of human ideas within a company:
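Trea's data model isn't public, but what the picture shows is recognizably a co-inventorship graph: each patent's inventor list forms a clique, repeat collaborations thicken the edges, and clusters fall out of the connectivity. A minimal sketch with hypothetical patent records, using networkx:

```python
import itertools
import networkx as nx

patents = [
    {"id": "US-1", "inventors": ["Ada", "Grace"]},
    {"id": "US-2", "inventors": ["Grace", "Linus", "Ken"]},
    {"id": "US-3", "inventors": ["Barbara"]},
]

G = nx.Graph()
for patent in patents:
    G.add_nodes_from(patent["inventors"])
    # Every pair of co-inventors on a patent gets an edge.
    for a, b in itertools.combinations(patent["inventors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # repeat collaborations weigh more
        else:
            G.add_edge(a, b, weight=1)

# Connected components approximate the inventor clusters in the image.
for cluster in nx.connected_components(G):
    print(sorted(cluster))
# ['Ada', 'Grace', 'Ken', 'Linus']
# ['Barbara']
```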
Trea IBM screenshot
 
This type of information, gleaned from patent filings, could be valuable to corporate strategists, or to journalists, scholars or business historians. And making government websites more user-friendly, as Rankandfiled.com is attempting to do with Securities and Exchange Commission filings, can certainly help people understand what their regulators are doing….”

How to harness the wisdom of crowds to improve public service delivery and policymaking


Eddie Copeland in PolicyBytes: “…In summary, government has used technology to streamline transactions and better understand the public’s opinions. Yet it has failed to use it to radically change the way it works. Have public services been reinvented? Is government smaller and leaner? Have citizens, businesses and civic groups been offered the chance to take part in the work of government and improve their own communities? On all counts the answer is unequivocally, no. What is needed, therefore, is a means to enable citizens to provide data to government to inform policymaking and to improve – or even help deliver – public services. What is needed is a Government Data Marketplace.

Government Data Marketplace

A Government Data Marketplace (GDM) would be a website that brought together public sector bodies that needed data, with individuals, businesses and other organisations that could provide it. Imagine an open data portal in reverse: instead of government publishing its own datasets to be used by citizens and businesses, it would instead publish its data needs and invite citizens, businesses or community groups to provide that data (for free or in return for payment). Just as open data portals aim to provide datasets in standard, machine-readable formats, GDM would operate according to strict open standards, and provide a consistent and automated way to deliver data to government through APIs.
How would it work? Imagine a local council that wished to know where instances of graffiti occurred within its borough. The council would create an account on GDM and publish a new request, outlining the data it required (not dissimilar to someone posting a job on a site like Freelancer). Citizens, businesses and other organisations would be able to view that request on GDM and bid to offer the service. For example, an app-development company could offer to build an app that would enable citizens to photograph and locate instances of graffiti in the borough. The app would be able to upload the data to GDM. The council could connect its own IT system to GDM to pass the data to its own database.
Importantly, the app-development company would specify via GDM how much it would charge to provide the data. Other companies and organisations could offer competing bids for delivering the same – or an even better service – at different prices. Supportive local civic hacker groups could even offer to provide the data for free. Either way, the council would get the data it needed without having to collect it for itself, whilst also ensuring it paid the best price from a number of competing providers.
Since GDM would be a public marketplace, other local authorities would be able to see that a particular company had designed a graffiti-reporting solution for one council, and could ask for the same data to be collected in their own boroughs. This would be quick and easy for the developer, as instead of having to create a bespoke solution to work with each council’s IT system, they could connect to all of them using one common interface via GDM. That would be good for the company, as they could sell to a much larger market (the same solution would work for one council or all), and good for the councils, as they would benefit from cheaper prices generated from economies of scale. And since GDM would use open standards, if a council was unhappy with the data provided by one supplier, it could simply look to another company to provide the same information.
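Since no such platform exists, any implementation detail is speculation, but the objects Copeland describes are easy to model. A sketch under the simplifying assumption that a request is awarded to the cheapest conforming bid:

```python
from dataclasses import dataclass, field

@dataclass
class Bid:
    provider: str
    price_gbp: float              # 0.0 when a civic group offers data free

@dataclass
class DataRequest:
    authority: str                # e.g. a local council
    description: str              # the data need, not a software spec
    schema: str                   # the open standard the data must follow
    bids: list = field(default_factory=list)

def award(request: DataRequest) -> Bid:
    """Award the request to the cheapest bid (ties go to the earliest)."""
    return min(request.bids, key=lambda bid: bid.price_gbp)

req = DataRequest("Borough Council", "graffiti locations", "geo-point/v1")
req.bids += [Bid("AppCo Ltd", 4000.0), Bid("Civic Hackers", 0.0)]
print(award(req).provider)        # -> Civic Hackers
```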
What would be the advantages of such a system? Firstly, innovation. GDM would free government from having to worry about what software it needed, and instead allow it to focus on the data it required to provide a service. To be clear: councils themselves do not need a graffiti app – they need data on where graffiti is. By focusing attention on its data needs, the public sector could let the market innovate to find the best solutions for providing it. That might be via an app, perhaps via a website, social media, or Internet of Things sensors, or maybe even using a completely new service that collected information in a radically different way. It would not matter – the right information would be provided in a common format via GDM.
Secondly, the potential cost savings of this approach would be considerable. At the very least, by creating a marketplace, the public sector would be able to source data at a competitive price. If several public sector bodies needed the same service via GDM, companies providing that data would be able to offer much cheaper prices for all, as instead of having to deal with hundreds of different organisations (and different interfaces) they could create one solution that worked for all of them. As prices became cheaper for standard solutions, this would in turn encourage more public sector bodies to converge on common ways of working, driving down costs still further. Yet these savings would be dwarfed by those possible if GDM could be used to source data that public sector bodies currently have to collect manually themselves. Imagine if, instead of employing teams of inspectors to locate instances of X, Y or Z, the public sector could source the same data from citizens via GDM.
There would be no limit to the potential applications to which GDM could be put by central and local government and other public sector bodies: graffiti, traffic levels, environmental issues, education or welfare. It could be used to crowdsource facts, figures, images, map coordinates, text – anything that can be collected as data. Government could request information on areas on which it previously had none, helping it to assign its finite resources and money in a much more targeted way. New York City’s Mayor’s Office of Data Analytics has demonstrated that efficiency gains of up to 500% can be achieved in providing some public services, if only the right data is available.
For the private sector, GDM would stimulate the growth of innovative new companies offering community data, and make it easier for them to sell data solutions across the whole of the public sector. They could pioneer new data methods, and potentially even take over the provision of entire services which the public sector currently has to provide itself. For citizens, it would offer a means to genuinely get involved in solving issues that matter to their local communities, either by using apps made by businesses, or working to provide the data themselves.
And what about the benefits for policymaking? It is important to acknowledge that the idea of harnessing the wisdom of crowds for policymaking is currently experimental. In the case of Policy Futures Markets, some applications have also been considered highly controversial. So which methods would be most effective? What would they look like? In what policy domains would they provide the most value? The simple fact is that we do not know. What is certain, however, is that innovation in open policymaking and crowdsourcing ideas will never be achieved until a platform is available that allows such ideas to be tried and tested. GDM could be that platform.
Public sector bodies could experiment with asking citizens for information or answers to particular, fact-based questions, or even for predictions on future outcomes, to help inform their policymaking activities. The market could then innovate to develop solutions to source that data from citizens, using the many different models for harnessing the wisdom of crowds. The effectiveness of those initiatives could then be judged, and the techniques honed. In the worst case scenario that it did not work, money would not have been wasted on building the wrong platform – GDM would continue to have value in providing data for public service needs as described above….”

Interpreting Hashtag Politics – Policy Ideas in an Era of Social Media


New book by Stephen Jeffares: “Why do policy actors create branded terms like Big Society and does launching such policy ideas on Twitter extend or curtail their life? This book argues that the practice of hashtag politics has evolved in response to an increasingly congested and mediatised environment, with the recent and rapid growth of high speed internet connections, smart phones and social media. It examines how policy analysis can adapt to offer interpretive insights into the life and death of policy ideas in an era of hashtag politics.
This text reveals that policy ideas can at the same time be ideas, instruments, visions, containers and brands, and advises readers on how to tell if a policy idea is dead or dying, how to map the diversity of viewpoints, how to capture the debate, when to engage and when to walk away. Each chapter showcases innovative analytic techniques, illustrated by application to contemporary policy ideas.”

OkCupid reveals it’s been lying to some of its users. Just to see what’ll happen.


Brian Fung in the Washington Post: “It turns out that OkCupid has been performing some of the same psychological experiments on its users that landed Facebook in hot water recently.
In a lengthy blog post, OkCupid cofounder Christian Rudder explains that OkCupid has on occasion played around with removing text from people’s profiles, removing photos, and even telling some users they were an excellent match when in fact they were only a 30 percent match according to the company’s systems. Just to see what would happen.
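Rudder's post contains no code, but the mechanics he is describing are standard A/B practice: assign each user deterministically to an experiment arm, then vary what is displayed. A generic sketch (the experiment name and the inflated score are invented for illustration, not OkCupid's actual values):

```python
import hashlib

def bucket(user_id: str, experiment: str, arms: int = 2) -> int:
    """Stable assignment: the same user always lands in the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % arms

def displayed_match(user_id: str, actual_pct: int) -> int:
    """Treatment users see an inflated score; control sees the real one."""
    if bucket(user_id, "match-display-test") == 1:
        return 90                 # overstate, e.g., a 30% match
    return actual_pct
```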
OkCupid defends this behavior as something that any self-respecting Web site would do.
“OkCupid doesn’t really know what it’s doing. Neither does any other Web site,” Rudder wrote. “But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”…
…We have a bigger problem on our hands: how to reconcile the sometimes valuable lessons of data science with the creep factor — particularly when you aren’t notified about being studied. But as I’ve written before, these kinds of studies happen all the time; it’s just rare that the public is presented with the results.
Short of banning the practice altogether, which seems totally unrealistic, corporate data science seems like an opportunity on a number of levels, particularly if it’s disclosed to the public. First, it helps us understand how human beings tend to behave at Internet scale. Second, it tells us more about how Internet companies work. And third, it helps consumers make better decisions about which services they’re comfortable using.
I suspect that what bothers us most of all is not that the research took place, but that we’re slowly coming to grips with how easily we ceded control over our own information — and how the machines that collect all this data may know more about us than we do ourselves. We had no idea we were even in a rabbit hole, and now we’ve discovered we’re 10 feet deep. As many as 62.5 percent of Facebook users don’t know the news feed is generated by a company algorithm, according to a recent study conducted by Christian Sandvig, an associate professor at the University of Michigan, and Karrie Karahalios, an associate professor at the University of Illinois.
OkCupid’s blog post is distinct in several ways from Facebook’s psychological experiment. OkCupid didn’t try to publish its findings in a scientific journal. It isn’t even claiming that what it did was science. Moreover, OkCupid’s research is legitimately useful to users of the service — in ways that Facebook’s research is arguably not….
But in any case, there’s no such motivating factor when it comes to Facebook. Unless you’re a page administrator or news organization, understanding how the newsfeed works doesn’t really help the average user in the way that understanding how OkCupid works does. That’s because people use Facebook for all kinds of reasons that have nothing to do with Facebook’s commercial motives. But people would stop using OkCupid if they discovered it didn’t “work.”
If you’re lying to your users in an attempt to improve your service, what’s the line between A/B testing and fraud?”

The Social Laboratory


Shane Harris in Foreign Policy: “…Singapore has become a laboratory not only for testing how mass surveillance and big-data analysis might prevent terrorism, but for determining whether technology can be used to engineer a more harmonious society…. Months after the SARS virus abated, Ho and his colleagues ran a simulation using Poindexter’s TIA ideas to see whether they could have detected the outbreak. Ho will not reveal what forms of information he and his colleagues used — by U.S. standards, Singapore’s privacy laws are virtually nonexistent, and it’s possible that the government collected private communications, financial data, public transportation records, and medical information without any court approval or private consent — but Ho claims that the experiment was very encouraging. It showed that if Singapore had previously installed a big-data analysis system, it could have spotted the signs of a potential outbreak two months before the virus hit the country’s shores. Prior to the SARS outbreak, for example, there were reports of strange, unexplained lung infections in China. Threads of information like that, if woven together, could in theory warn analysts of pending crises.
The RAHS system was operational a year later, and it immediately began “canvassing a range of sources for weak signals of potential future shocks,” one senior Singaporean security official involved in the launch later recalled.
The system uses a mixture of proprietary and commercial technology and is based on a “cognitive model” designed to mimic the human thought process — a key design feature influenced by Poindexter’s TIA system. RAHS, itself, doesn’t think. It’s a tool that helps human beings sift huge stores of data for clues on just about everything. It is designed to analyze information from practically any source — the input is almost incidental — and to create models that can be used to forecast potential events. Those scenarios can then be shared across the Singaporean government and be picked up by whatever ministry or department might find them useful. Using a repository of information called an ideas database, RAHS and its teams of analysts create “narratives” about how various threats or strategic opportunities might play out. The point is not so much to predict the future as to envision a number of potential futures that can tell the government what to watch and when to dig further.
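RAHS's internals are classified, but the “weak signals” scanning described here has a familiar minimal form: track the volume of reports on a topic and flag periods that rise well above their recent baseline, the way scattered reports of unexplained lung infections preceded SARS. A toy sketch, not the actual system:

```python
import statistics

def weak_signals(counts, window=8, threshold=3.0):
    """Indices where a count exceeds its trailing mean by `threshold`
    standard deviations, computed over the previous `window` periods."""
    flagged = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu = statistics.mean(base)
        sd = statistics.stdev(base) or 1.0   # guard against a flat baseline
        if (counts[i] - mu) / sd > threshold:
            flagged.append(i)
    return flagged

weekly_mentions = [2, 3, 2, 1, 2, 3, 2, 2, 3, 2, 14]  # hypothetical counts
print(weak_signals(weekly_mentions))                   # -> [10]
```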
The officials running RAHS today are tight-lipped about exactly what data they monitor, though they acknowledge that a significant portion of “articles” in their databases come from publicly available information, including news reports, blog posts, Facebook updates, and Twitter messages. (“These articles have been trawled in by robots or uploaded manually” by analysts, says one program document.) But RAHS doesn’t need to rely only on open-source material or even the sorts of intelligence that most governments routinely collect: In Singapore, electronic surveillance of residents and visitors is pervasive and widely accepted…”