Stefaan Verhulst
Paper by Joshua C. Gellers in International Environmental Agreements: Politics, Law and Economics: “To what extent can crowdsourcing help members of civil society overcome the democratic deficit in global environmental governance? In this paper, I evaluate the utility of crowdsourcing as a tool for participatory agenda-setting in the realm of post-2015 sustainable development policy. In particular, I analyze the descriptive representativeness (i.e., the degree to which participation mirrors the demographic attributes of non-state actors comprising global civil society) of participants in two United Nations orchestrated crowdsourcing processes—the MY World survey and e-discussions regarding environmental sustainability. I find that there exists a perceptible demographic imbalance among contributors to the MY World survey and considerable dissonance between the characteristics of participants in the e-discussions and those whose voices were included in the resulting summary report. The results suggest that although crowdsourcing may present an attractive technological approach to expand participation in global governance, ultimately the representativeness of that participation and the legitimacy of policy outputs depend on the manner in which contributions are solicited and filtered by international institutions….(More)”
Paper by Ricardo Perez-Truglia: “In 2001, Norwegian tax records became easily accessible online, allowing individuals to observe the incomes of others. Because of self-image and social-image concerns, higher income transparency can increase the differences in well-being between rich and poor. We test this hypothesis using survey data from 1985-2013. We identify the causal effect of income transparency on subjective well-being by using differences-in-differences, triple-differences, and event-study analyses. We find that higher income transparency increased the happiness gap between rich and poor by 29% and the life satisfaction gap by 21%. Additionally, higher income transparency corrected misperceptions about the income distribution and changed preferences for redistribution. Last, we use the estimates for back-of-the-envelope calculations of the value of self-image and social-image….(More)”
Nicole Wallace in The Chronicle of Philanthropy: “Answering text messages to a crisis hotline is different from handling customer-service calls: You don’t want counselors to answer folks in the order their messages were received. You want them to take the people in greatest distress first.
Crisis Text Line, a charity that provides counseling by text message, uses sophisticated data analysis to predict how serious the conversations are likely to be and ranks them by severity. Using an algorithm to automate triage ensures that people in crisis get help fast — with an unexpected side benefit for other texters contacting the hotline: shorter wait times.
When the nonprofit started in 2013, deciding which messages to take first was much more old-school. Counselors had to read all the messages in the queue and make a gut-level decision on which person was most in need of help.
“It was slow,” says Bob Filbin, the organization’s chief data scientist.
To solve the problem, Mr. Filbin and his colleagues used past messages to the hotline to create an algorithm that analyzes the language used in incoming messages and ranks them in order of predicted severity.
And it’s working. Since the algorithm went live on the platform, conversations marked as severe — code orange — have been six times more likely to include thoughts of suicide or self-harm than those not marked code orange, and nine times more likely to result in the counselor contacting emergency services to intervene in a suicide attempt.
Counselors don’t even see the queue of waiting texts anymore. They just click a button marked “Help Another Texter,” and the system connects them to the person whose message has been marked most urgent….(More)”
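The pattern Crisis Text Line describes — score each incoming message for predicted severity, then serve the most severe texter first — can be sketched with a priority queue. The scoring function below is a deliberately crude keyword-weight stand-in for their actual trained language model, which is not public; all weights, messages, and IDs are hypothetical.

```python
import heapq
import itertools

# Hypothetical severity terms and weights -- a stand-in for a trained
# text classifier, NOT Crisis Text Line's actual model.
SEVERITY_WEIGHTS = {
    "suicide": 5.0, "die": 4.0, "pills": 3.0, "hurt": 3.0,
    "alone": 1.0, "sad": 1.0,
}

def severity_score(message: str) -> float:
    """Score a message by summing weights of severity-related terms."""
    return sum(SEVERITY_WEIGHTS.get(w, 0.0) for w in message.lower().split())

class TriageQueue:
    """Serve waiting texters in order of predicted severity, not arrival."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-break by arrival order

    def add(self, texter_id: str, message: str) -> None:
        # heapq is a min-heap, so negate the score for most-severe-first
        heapq.heappush(self._heap,
                       (-severity_score(message), next(self._counter), texter_id))

    def help_another_texter(self) -> str:
        """Pop the texter whose message was ranked most urgent."""
        return heapq.heappop(self._heap)[2]

queue = TriageQueue()
queue.add("texter-1", "I feel sad today")
queue.add("texter-2", "I want to die I took pills")
queue.add("texter-3", "I am alone")
print(queue.help_another_texter())  # texter-2 is served first
```

In the real system the scoring step is a statistical model learned from past conversations; the queueing logic, however it is implemented, only needs this severity-ordered pop to hide the queue behind a single “Help Another Texter” button.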
Andrew Young: “Today, The GovLab, in collaboration with founding partners mySociety and the World Bank’s Digital Engagement Evaluation Team, is launching the Open Governance Research Exchange (OGRX), a new platform for sharing research and findings on innovations in governance.
From crowdsourcing to nudges to open data to participatory budgeting, more open and innovative ways to tackle society’s problems and make public institutions more effective are emerging. Yet little is known about what innovations actually work, when, why, for whom and under what conditions.
And anyone seeking existing research is confronted with sources that are widely dispersed across disciplines, often locked behind paywalls, and hard to search because of the absence of established taxonomies. As the demand to confront problems in new ways grows, so too does the urgency of making learning about governance innovations more accessible.
As part of GovLab’s broader effort to move from “faith-based interventions” toward more “evidence-based interventions,” OGRX curates and makes accessible the most diverse and up-to-date collection of findings on innovating governance. At launch, the site features over 350 publications spanning a diversity of governance innovation areas, including but not limited to:
- Behavioral Science and Nudges
- Citizen Engagement and Crowdsourcing
- Civic Technology
- Data Analysis
- Expert Networking
- Labs and Experimentation
- Open Data…
Visit ogrx.org to explore the latest research findings, submit your own work for inclusion on the platform, and share knowledge with others interested in using science and technology to improve the way we govern. (More)”
Patrick Meier: “Clearly, crowdsourcing is not new, only the word is. After all, crowdsourcing is a methodology, not a technology nor an industry. Perhaps my favorite early example of crowdsourcing concerns the invention of the marine chronometer, which completely revolutionized long distance sea travel. Thousands of lives were being lost in shipwrecks because longitude coordinates were virtually impossible to determine in the open seas. Finding a solution to this problem became critical as the Age of Sail dawned on many European empires.
So the Spanish King, Dutch Merchants and others turned to crowdsourcing by offering major prize money for a solution. The British government even launched the “Longitude Prize” which was established through an Act of Parliament in 1714 and administered by the “Board of Longitude.” This board brought together the greatest scientific minds of the time to work on the problem, including Sir Isaac Newton. Galileo was also said to have taken up the challenge.
The main prizes included: “£10,000 for a method that could determine longitude within 60 nautical miles (111 km); £15,000 for a method that could determine longitude within 40 nautical miles (74 km); and £20,000 for a method that could determine longitude within 30 nautical miles (56 km).” Note that £20,000 in 1714 is around $4.7 million today; the $1 million Netflix Prize, launched nearly 300 years later, pales in comparison. In addition, “the Board had the discretion to make awards to persons who were making significant contributions to the effort or to provide financial support to those who were working towards a solution. The Board could also make advances of up to £2,000 for experimental work deemed promising.”
Interestingly, the person who provided the most breakthroughs—and thus received the most prize money—was the son of a carpenter, the self-educated British clockmaker John Harrison. And so, as noted by Peter LaMotte, “by allowing anyone to participate in solving the problem, a solution was found for a puzzle that had baffled some of the brightest minds in history (even Galileo!). In the end, it was found by someone who would never have been tapped to solve it to begin with.”…(More)”
Robert Epstein at Quartz: “Because I conduct research on how the Internet affects elections, journalists have lately been asking me about the primaries. Here are the two most common questions I’ve been getting:
- Do Google’s search rankings affect how people vote?
- How well does Google Trends predict the winner of each primary?
My answer to the first question is: Probably, but no one knows for sure. From research I have been conducting in recent years with Ronald E. Robertson, my associate at the American Institute for Behavioral Research and Technology, on the Search Engine Manipulation Effect (SEME, pronounced “seem”), we know that when higher-ranked search results make one candidate look better than another, an enormous number of votes will be driven toward the higher-ranked candidate—up to 80% of undecided voters in some demographic groups. This is partly because we have all learned to trust high-ranked search results, but it is mainly because we are lazy; search engine users generally click on just the top one or two items.
Because no one actually tracks search rankings, however—they are ephemeral and personalized, after all, which makes them virtually impossible to track—and because no whistleblowers have yet come forward from any of the search engine companies, we cannot know for sure whether search rankings are consistently favoring one candidate or another. This means we also cannot know for sure how search rankings are affecting elections. We know the power they have to do so, but that’s it.
As for the question about Google Trends, for a while I was giving a mindless, common-sense answer: Well, I said, Google Trends tells you about search activity, and if lots more people are searching for “Donald Trump” than for “Ted Cruz” just before a primary, then more people will probably vote for Trump.
When you run the numbers, search activity seems to be a pretty good predictor of voting. On primary day in New Hampshire this year, search traffic on Google Trends was highest for Trump, followed by John Kasich, then Cruz—and so went the vote. But careful studies of the predictive power of search activity have actually gotten mixed results. A 2011 study by researchers at Wellesley College in Massachusetts, for example, found that Google Trends was a poor predictor of the outcomes of the 2008 and 2010 elections.
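The “mindless, common-sense” heuristic described above amounts to: whichever candidate draws the most search interest wins. A minimal sketch (the interest scores below are illustrative made-up numbers, not real Google Trends data):

```python
# Hypothetical Google Trends-style relative interest scores on primary
# day -- illustrative numbers only, not actual Trends data.
search_interest = {"Trump": 62, "Kasich": 21, "Cruz": 17}

def predict_winner(interest: dict) -> str:
    """Naive heuristic: the most-searched candidate is predicted to win."""
    return max(interest, key=interest.get)

print(predict_winner(search_interest))  # -> Trump
```

As the Wellesley study suggests, the weakness of this heuristic is that search volume measures attention, not support: people also search heavily for candidates they oppose or are merely curious about.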
So much for Trends. But then I got to thinking: Why are we struggling so hard to figure out how to use Trends or tweets or shares to predict elections when Google actually knows exactly how we are going to vote? Impossible, you say? Think again….
This leaves us with two questions, one small and practical and the other big and weird.
Data Driven Journalism: “UN-Habitat has launched a new web portal featuring a wealth of city data based on its repository of research on urban trends.
Launched during the 25th Governing Council, the Urban Data Portal allows users to explore data from 741 cities in 220 countries, and compare these for 103 indicators such as slum prevalence and city prosperity.
Image: A comparison of share in national urban population and average annual rate of urban population change for San Salvador, El Salvador, and Asuncion, Paraguay.
The urban indicators data available are analyzed, compiled and published by UN-Habitat’s Global Urban Observatory, which supports governments, local authorities and civil society organizations to develop urban indicators, data and statistics.
Leveraging GIS technology, the Observatory collects data by taking aerial photographs, zooming into particular areas, and then sending in survey teams to answer any remaining questions about the area’s urban development.
The Portal also contains data collected by national statistics authorities, via household surveys and censuses, with analysis conducted by leading urbanists in UN-HABITAT’s State of the World’s Cities and the Global Report on Human Settlements report series.
For the first time, these datasets are available for use under an open licence agreement, and can be downloaded in straightforward database formats like CSV and JSON….(More)
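Because the Portal exposes its datasets in plain formats like CSV, cross-city comparisons of an indicator reduce to a simple group-and-filter. A hedged sketch: the column names and values below are hypothetical stand-ins, not the Portal’s actual schema or figures.

```python
import csv
import io

# Hypothetical extract in a flat city-indicator CSV layout; the real
# Urban Data Portal schema and values may differ.
sample = """city,country,indicator,value
San Salvador,El Salvador,slum_prevalence,28.9
Asuncion,Paraguay,slum_prevalence,17.6
Asuncion,Paraguay,city_prosperity,0.71
"""

def compare_indicator(csv_text: str, indicator: str) -> dict:
    """Map city -> value for one indicator across the dataset."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["city"]: float(row["value"])
            for row in reader if row["indicator"] == indicator}

print(compare_indicator(sample, "slum_prevalence"))
```

Swapping `io.StringIO` for an `open()` call on a downloaded file is all that is needed to run the same comparison on a real Portal export.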
Eitan D. Hersh: “For many citizens, participation in politics is not motivated by civic duty or self-interest, but by hobbyism: the objective is self-gratification. I offer a theory of political hobbyism, situate the theory in existing literature, and define and distinguish the hobbyist motivation from its alternatives. I argue that the prevalence of political hobbyism depends on historical conditions related to the nature of leisure time, the openness of the political process to mass participation, and the level of perceived threat. I articulate an empirical research agenda, highlighting how poli-hobbyism can help explain characteristics of participants, forms of participation, rates of participation, and the nature of partisanship. Political hobbyism presents serious problems for a functioning democracy, including participants confusing high stakes for low stakes, participation too focused on the gratifying aspects of politics, and unnecessarily potent partisan rivalries….(More)”
Countable: “Why does it have to be so hard to understand what our lawmakers are up to?
With Countable, it doesn’t.
Countable makes it quick and easy to understand the laws Congress is considering. We also streamline the process of contacting your lawmaker, so you can tell them how you want them to vote on bills under consideration.
You can use Countable to:
- Read clear and succinct summaries of upcoming and active legislation.
- Directly tell your lawmakers how to vote on those bills by clicking “Yea” or “Nay”.
- Follow up on how your elected officials voted on bills, so you can hold them accountable in the next election cycle….(More)”
Giulio Quaggiotto at Nesta: “Over the past decade we’ve seen an explosion in the amount of data we create, with more being captured about our lives than ever before. As an industry, the public sector creates an enormous amount of information – from census data to tax data to health data. When it comes to use of the data however, despite many initiatives trying to promote open and big data for public policy as well as evidence-based policymaking, we feel there is still a long way to go.
Why is that? Data initiatives are often created under the assumption that if data is available, people (whether citizens or governments) will use it. But this hasn’t necessarily proven to be the case, and this approach neglects analysis of power and an understanding of the political dynamics at play around data (particularly when data is seen as an output rather than input).
Many data activities are also informed by the ‘extractive industry’ paradigm: citizens and frontline workers are seen as passive ‘data producers’ who hand over their information for it to be analysed and mined behind closed doors by ‘the experts’.
Given budget constraints facing many local and central governments, even well intentioned initiatives often take an incremental, passive transparency approach (i.e. let’s open the data first then see what happens), or they adopt a ‘supply/demand’ metaphor to data provision and usage…
As a response to these issues, this blog series will explore the hypothesis that putting the question of citizen and government agency – rather than openness, volume or availability – at the centre of data initiatives has the potential to unleash greater, potentially more disruptive innovation and to focus efforts (ultimately leading to cost savings).
Our argument will be that data innovation initiatives should be informed by the principles that:
- People closer to the problem are the best positioned to provide additional context to the data and potentially act on solutions (hence the importance of “thick data”).
- Citizens are active agents rather than passive providers of ‘digital traces’.
- Governments are both users and providers of data.
- We should ask at every step of the way how we can empower communities and frontline workers to take better decisions over time, and how we can use data to enhance the decision making of every actor in the system (from government to the private sector, from private citizens to social enterprises) in their role of changing things for the better… (More)