Crowdsourcing Solutions and Crisis Information during the Renaissance


Patrick Meier: “Clearly, crowdsourcing is not new, only the word is. After all, crowdsourcing is a methodology, not a technology nor an industry. Perhaps one of my favorite examples of crowdsourcing during the Renaissance surrounds the invention of the marine chronometer, which completely revolutionized long distance sea travel. Thousands of lives were being lost in shipwrecks because longitude coordinates were virtually impossible to determine in the open seas. Finding a solution to this problem became critical as the Age of Sail dawned on many European empires.

So the Spanish King, Dutch Merchants and others turned to crowdsourcing by offering major prize money for a solution. The British government even launched the “Longitude Prize” which was established through an Act of Parliament in 1714 and administered by the “Board of Longitude.” This board brought together the greatest scientific minds of the time to work on the problem, including Sir Isaac Newton. Galileo was also said to have taken up the challenge.

The main prizes included: “£10,000 for a method that could determine longitude within 60 nautical miles (111 km); £15,000 for a method that could determine longitude within 40 nautical miles (74 km); and £20,000 for a method that could determine longitude within 30 nautical miles (56 km).” Note that £20,000 in 1714 is around $4.7 million today. The $1 million Netflix Prize, launched some 300 years later, pales in comparison. In addition, the Board had the discretion to make awards to persons who were making significant contributions to the effort or to provide financial support to those who were working towards a solution. The Board could also make advances of up to £2,000 for experimental work deemed promising.

Interestingly, the person who provided the most breakthroughs—and thus received the most prize money—was the son of a carpenter, the self-educated British clockmaker John Harrison. And so, as noted by Peter LaMotte, “by allowing anyone to participate in solving the problem, a solution was found for a puzzle that had baffled some of the brightest minds in history (even Galileo!). In the end, it was found by someone who would never have been tapped to solve it to begin with.”…(More)”

In the future, Big Data will make actual voting obsolete


Robert Epstein at Quartz: “Because I conduct research on how the Internet affects elections, journalists have lately been asking me about the primaries. Here are the two most common questions I’ve been getting:

  • Do Google’s search rankings affect how people vote?
  • How well does Google Trends predict the winner of each primary?

My answer to the first question is: Probably, but no one knows for sure. From research I have been conducting in recent years with Ronald E. Robertson, my associate at the American Institute for Behavioral Research and Technology, on the Search Engine Manipulation Effect (SEME, pronounced “seem”), we know that when higher search results make one candidate look better than another, an enormous number of votes will be driven toward the higher-ranked candidate—up to 80% of undecided voters in some demographic groups. This is partly because we have all learned to trust high-ranked search results, but it is mainly because we are lazy; search engine users generally click on just the top one or two items.

Because no one actually tracks search rankings, however—they are ephemeral and personalized, after all, which makes them virtually impossible to track—and because no whistleblowers have yet come forward from any of the search engine companies, we cannot know for sure whether search rankings are consistently favoring one candidate or another. This means we also cannot know for sure how search rankings are affecting elections. We know the power they have to do so, but that’s it.

As for the question about Google Trends, for a while I was giving a mindless, common-sense answer: Well, I said, Google Trends tells you about search activity, and if lots more people are searching for “Donald Trump” than for “Ted Cruz” just before a primary, then more people will probably vote for Trump.

When you run the numbers, search activity seems to be a pretty good predictor of voting. On primary day in New Hampshire this year, search traffic on Google Trends was highest for Trump, followed by John Kasich, then Cruz—and so went the vote. But careful studies of the predictive power of search activity have actually gotten mixed results. A 2011 study by researchers at Wellesley College in Massachusetts, for example, found that Google Trends was a poor predictor of the outcomes of the 2008 and 2010 elections.
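A minimal way to “run the numbers” yourself is sketched below, using the unofficial pytrends library to pull relative search interest for the three candidates in the week before the New Hampshire primary. The library, keyword list, time window and geography are illustrative assumptions, not something described in the article.

```python
# Illustrative sketch: comparing candidate search interest with the
# unofficial pytrends library (pip install pytrends). Parameters are
# assumptions, not taken from the article.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)

candidates = ["Donald Trump", "John Kasich", "Ted Cruz"]
# Week leading up to the 2016 New Hampshire primary (9 February 2016),
# restricted to searches made in New Hampshire.
pytrends.build_payload(candidates, timeframe="2016-02-02 2016-02-09", geo="US-NH")

interest = pytrends.interest_over_time()  # pandas DataFrame, values scaled 0-100
if not interest.empty:
    ranking = interest[candidates].mean().sort_values(ascending=False)
    print("Average relative search interest:")
    print(ranking)
```

Whether such a ranking actually predicts the vote is, as the Wellesley study suggests, far from settled.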

So much for Trends. But then I got to thinking: Why are we struggling so hard to figure out how to use Trends or tweets or shares to predict elections when Google actually knows exactly how we are going to vote? Impossible, you say? Think again….

This leaves us with two questions, one small and practical and the other big and weird.

The small, practical question is: How is Google using those numbers? Might they be sharing them with their preferred presidential candidate, for example? That is not unlawful, after all, and Google executives have taken a hands-on role in past presidential campaigns. The Wall Street Journal reported, for example, that Eric Schmidt, head of Google at that time, was personally overseeing Barack Obama’s programming team at his campaign headquarters the night before the 2012 election.

And the big, weird question is: Why are we even bothering to vote? Voting is such a hassle—the parking, the lines, the ID checks. Maybe we should all stay home and just let Google announce the winners….(More)”

UN-Habitat Urban Data Portal


Data Driven Journalism: UN-Habitat has launched a new web portal featuring a wealth of city data based on its repository of research on urban trends.

Launched during the 25th Governing Council, the Urban Data Portal allows users to explore data from 741 cities in 220 countries, and compare these for 103 indicators such as slum prevalence and city prosperity.

Image: A comparison of share in national urban population and average annual rate of urban population change for San Salvador, El Salvador, and Asuncion, Paraguay.

The urban indicators data available are analyzed, compiled and published by UN-Habitat’s Global Urban Observatory, which supports governments, local authorities and civil society organizations in developing urban indicators, data and statistics.

Leveraging GIS technology, the Observatory collects data by taking aerial photographs, zooming into particular areas, and then sending in survey teams to answer any remaining questions about the area’s urban development.

The Portal also contains data collected by national statistics authorities, via household surveys and censuses, with analysis conducted by leading urbanists in UN-HABITAT’s State of the World’s Cities and the Global Report on Human Settlements report series.

For the first time, these datasets are available for use under an open licence agreement, and can be downloaded in straightforward database formats like CSV and JSON….(More)
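To give a sense of what using these downloads looks like in practice, here is a minimal sketch that fetches a CSV export and reads it into memory. The URL and column names are placeholders, not documented endpoints of the Urban Data Portal.

```python
# Illustrative sketch only: the URL and field names are hypothetical
# placeholders, not actual endpoints of the UN-Habitat Urban Data Portal.
import csv
import io
import urllib.request

EXPORT_URL = "https://example.org/urban-data/slum-prevalence.csv"  # placeholder

with urllib.request.urlopen(EXPORT_URL) as response:
    text = response.read().decode("utf-8")

rows = list(csv.DictReader(io.StringIO(text)))
print(f"Loaded {len(rows)} rows")

# The kind of city-to-city comparison the portal supports, assuming the
# export carries 'city' and 'slum_prevalence' columns.
for row in rows[:5]:
    print(row.get("city"), row.get("slum_prevalence"))
```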

Poli-hobbyism: A Theory of Mass Politics


Eitan D. Hersh: “For many citizens, participation in politics is not motivated by civic duty or self-interest, but by hobbyism: the objective is self-gratification. I offer a theory of political hobbyism, situate the theory in existing literature, and define and distinguish the hobbyist motivation from its alternatives. I argue that the prevalence of political hobbyism depends on historical conditions related to the nature of leisure time, the openness of the political process to mass participation, and the level of perceived threat. I articulate an empirical research agenda, highlighting how poli-hobbyism can help explain characteristics of participants, forms of participation, rates of participation, and the nature of partisanship. Political hobbyism presents serious problems for a functioning democracy, including participants confusing high stakes for low stakes, participation too focused on the gratifying aspects of politics, and unnecessarily potent partisan rivalries….(More)”

Countable


Countable: “Why does it have to be so hard to understand what our lawmakers are up to?

With Countable, it doesn’t.

Countable makes it quick and easy to understand the laws Congress is considering. We also streamline the process of contacting your lawmaker, so you can tell them how you want them to vote on bills under consideration.

You can use Countable to:

  • Read clear and succinct summaries of upcoming and active legislation.
  • Directly tell your lawmakers how to vote on those bills by clicking “Yea” or “Nay”.
  • Follow up on how your elected officials voted on bills, so you can hold them accountable in the next election cycle….(More)”

Data innovation: where to start? With the road less taken


Giulio Quaggiotto at Nesta: “Over the past decade we’ve seen an explosion in the amount of data we create, with more being captured about our lives than ever before. As an industry, the public sector creates an enormous amount of information – from census data to tax data to health data. When it comes to use of the data however, despite many initiatives trying to promote open and big data for public policy as well as evidence-based policymaking, we feel there is still a long way to go.

Why is that? Data initiatives are often created under the assumption that if data is available, people (whether citizens or governments) will use it. But this hasn’t necessarily proven to be the case, and this approach neglects analysis of power and an understanding of the political dynamics at play around data (particularly when data is seen as an output rather than input).

Many data activities are also informed by the ‘extractive industry’ paradigm: citizens and frontline workers are seen as passive ‘data producers’ who hand over their information for it to be analysed and mined behind closed doors by ‘the experts’.

Given budget constraints facing many local and central governments, even well-intentioned initiatives often take an incremental, passive transparency approach (i.e. let’s open the data first, then see what happens), or they adopt a ‘supply/demand’ metaphor to data provision and usage…..

As a response to these issues, this blog series will explore the hypothesis that putting the question of citizen and government agency – rather than openness, volume or availability – at the centre of data initiatives has the potential to unleash greater, potentially more disruptive innovation and to focus efforts (ultimately leading to cost savings).

Our argument will be that data innovation initiatives should be informed by the principles that:

  • People closer to the problem are the best positioned to provide additional context to the data and potentially act on solutions (hence the importance of “thick data”).

  • Citizens are active agents rather than passive providers of ‘digital traces’.

  • Governments are both users and providers of data.

  • We should ask at every step of the way how we can empower communities and frontline workers to take better decisions over time, and how we can use data to enhance the decision making of every actor in the system (from government to the private sector, from private citizens to social enterprises) in their role of changing things for the better… (More)

 

The Wisdom of the Many in Global Governance: An Epistemic-Democratic Defence of Diversity and Inclusion


Paper by Stevenson, H.: “Over the past two decades, a growing body of literature has highlighted moral reasons for taking global democracy seriously. This literature justifies democracy on the grounds of its intrinsic value. But democracy also has instrumental value: the rule of the many is epistemically superior to the rule of one or the rule of the few. This paper draws on the tradition of epistemic democracy to develop an instrumentalist justification for democratizing global governance. The tradition of epistemic democracy is enjoying a renaissance within political theory and popular non-fiction, yet its relevance for international relations remains unexplored. I develop an epistemic-democratic framework for evaluating political institutions, which is constituted by three principles. The likelihood of making correct decisions within institutions of global governance will be greater when (1) human development and capacity for participation is maximised; (2) the internal cognitive diversity of global institutions is maximised; and (3) public opportunities for sharing objective and subjective knowledge are maximised. Applying this framework to global governance produces a better understanding of the nature and extent of the ‘democratic deficit’ of global governance, as well as the actions required to address this deficit….(More)”

Ethical Reasoning in Big Data


Book edited by Collmann, Jeff, and Matei, Sorin Adam: “This book springs from a multidisciplinary, multi-organizational, and multi-sector conversation about the privacy and ethical implications of research in human affairs using big data. The need to cultivate and enlist the public’s trust in the abilities of particular scientists and scientific institutions constitutes one of this book’s major themes. The advent of the Internet, the mass digitization of research information, and social media brought about, among many other things, the ability to harvest – sometimes implicitly – a wealth of human genomic, biological, behavioral, economic, political, and social data for the purposes of scientific research as well as commerce, government affairs, and social interaction. What type of ethical dilemmas did such changes generate? How should scientists collect, manipulate, and disseminate this information? The effects of this revolution and its ethical implications are wide-ranging.

This book includes the opinions of myriad investigators, practitioners, and stakeholders in big data on human beings who also routinely reflect on the privacy and ethical issues of this phenomenon. Dedicated to the practice of ethical reasoning and reflection in action, the book offers a range of observations, lessons learned, reasoning tools, and suggestions for institutional practice to promote responsible big data research on human affairs. It caters to a broad audience of educators, researchers, and practitioners. Educators can use the volume in courses related to big data handling and processing. Researchers can use it for designing new methods of collecting, processing, and disseminating big data, whether in raw form or as analysis results. Lastly, practitioners can use it to steer future tools or procedures for handling big data. As this topic represents an area of great interest that still remains largely undeveloped, this book is sure to attract significant interest by filling an obvious gap in currently available literature. …(More)”

Addressing the ‘doctrine gap’: professionalising the use of Information Communication Technologies in humanitarian action


Nathaniel A. Raymond and Casey S. Harrity at HPN: “This generation of humanitarian actors will be defined by the actions they take in response to the challenges and opportunities of the digital revolution. At this critical moment in the history of humanitarian action, success depends on humanitarians recognising that the use of information communication technologies (ICTs) must become a core competency for humanitarian action. Treated in the past as a boutique sub-area of humanitarian practice, the central role that they now play has made the collection, analysis and dissemination of data derived from ICTs and other sources a basic skill required of humanitarians in the twenty-first century. ICT use must now be seen as an essential competence with critical implications for the efficiency and effectiveness of humanitarian response.

Practice in search of a doctrine

ICT use for humanitarian response runs the gamut from satellite imagery to drone deployment; to tablet and smartphone use; to crowd mapping and aggregation of big data. Humanitarian actors applying these technologies include front-line responders in NGOs and the UN but also, increasingly, volunteers and the private sector. The rapid diversification of available technologies as well as the increase in actors utilising them for humanitarian purposes means that the use of these technologies has far outpaced the ethical and technical guidance available to practitioners. Technology adoption by humanitarian actors prior to the creation of standards for how and how not to apply a specific tool has created a largely undiscussed and unaddressed ‘doctrine gap’.

Examples of this gap are, unfortunately, many. One such is the mass collection of personally identifiable cell phone data by humanitarian actors as part of phone surveys and cash transfer programmes. Although initial best practice and lessons learned have been developed for this method of data collection, no common inter-agency standards exist, nor are there comprehensive ethical frameworks for what data should be retained and for how long, and what data should be anonymised or not collected in the first place…(More)”
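One small example of the kind of guidance such a doctrine could codify is sketched below: pseudonymising phone numbers with a keyed hash before retention, so that records can still be linked across surveys without storing the raw identifier. This is an illustrative pattern, not a standard drawn from the article.

```python
# Illustrative pattern: keyed hashing to pseudonymise phone numbers before
# retention. Key management and retention policy are out of scope here.
import hashlib
import hmac
import os

# In practice the key would be generated, stored and rotated under an agreed
# inter-agency policy; here it is created ad hoc for the sketch.
SECRET_KEY = os.urandom(32)

def pseudonymise(phone_number: str) -> str:
    """Return a stable, non-reversible token for a phone number."""
    digest = hmac.new(SECRET_KEY, phone_number.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

record = {"phone": "+254700000000", "response": "received cash transfer"}
record["phone"] = pseudonymise(record["phone"])
print(record)
```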

Open Data Supply: Enriching the usability of information


Report by Phoensight: “With the emergence of increasing computational power, high cloud storage capacity and big data comes an eager anticipation of one of the biggest IT transformations of our society today.

Open data has an instrumental role to play in our digital revolution by creating unprecedented opportunities for governments and businesses to leverage previously unavailable information to strengthen their analytics and decision making for new client experiences. Whilst virtually every business recognises the value of data and the importance of the analytics built on it, the ability to realise the potential for maximising revenue and cost savings is not straightforward. The discovery of valuable insights often involves the acquisition of new data and an understanding of it. As we move towards an increasing supply of open data, technological and other entrepreneurs will look to better utilise government information for improved productivity.

This report uses a data-centric approach to examine the usability of information by considering ways in which open data could better facilitate data-driven innovations and further boost our economy. It assesses the state of open data today and suggests ways in which data providers could supply open data to optimise its use. A number of useful measures of information usability such as accessibility, quantity, quality and openness are presented which together contribute to the Open Data Usability Index (ODUI). For the first time, a comprehensive assessment of open data usability has been developed and is expected to be a critical step in taking the open data agenda to the next level.
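The report’s exact weighting of these measures is not reproduced here, so the sketch below simply assumes an equal-weighted average of normalised component scores; it is meant only to make the idea of a composite usability index concrete, not to restate the ODUI itself.

```python
# Purely illustrative: an equal-weighted composite of usability measures.
# The actual ODUI scoring and weighting are defined in the Phoensight report.
from statistics import mean

def usability_index(scores: dict) -> float:
    """Combine component scores (each normalised to 0-1) into a single index."""
    components = ["accessibility", "quantity", "quality", "openness"]
    return mean(scores[c] for c in components)

example = {"accessibility": 0.8, "quantity": 0.6, "quality": 0.7, "openness": 0.9}
print(f"Illustrative usability index: {usability_index(example):.2f}")
```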

With over two million government datasets assessed against the open data usability framework and models developed to link entire countries’ datasets to key industry sectors, never before has such an extensive analysis been undertaken. Government open data across Australia, Canada, Singapore, the United Kingdom and the United States reveals that most countries have the capacity for improvements in their information usability. It was found that for 2015 the United Kingdom led the way, followed by Canada, Singapore, the United States and Australia. The global potential of government open data is expected to reach 20 exabytes by 2020, provided governments are able to release as much data as possible within legislative constraints….(More)”