Secrecy and Publicity in Votes and Debates


Book edited by Jon Elster: “In the spirit of Jeremy Bentham’s Political Tactics, this volume offers the first comprehensive discussion of the effects of secrecy and publicity on debates and votes in committees and assemblies. The contributors – sociologists, political scientists, historians, and legal scholars – consider the micro-technology of voting (the devil is in the detail), the historical relations between the secret ballot and universal suffrage, the use and abolition of secret voting in parliamentary decisions, and the sometimes perverse effects of the drive for greater openness and transparency in public affairs. The authors also discuss the normative questions of secret versus public voting in national elections and of optimal mixes of secrecy and publicity, as well as the opportunities for strategic behavior created by different voting systems. Together with two previous volumes on Collective Wisdom (Cambridge, 2012) and Majority Decisions (Cambridge, 2014), the book sets a new standard for interdisciplinary work on collective decision-making…(More)”

Forging Trust Communities: How Technology Changes Politics


Book by Irene S. Wu: “Bloggers in India used social media and wikis to broadcast news and bring humanitarian aid to tsunami victims in South Asia. Terrorist groups like ISIS pour out messages and recruit new members on websites. The Internet is the new public square, bringing to politics a platform on which to create community at both the grassroots and bureaucratic level. Drawing on historical and contemporary case studies from more than ten countries, Irene S. Wu’s Forging Trust Communities argues that the Internet, and the technologies that predate it, catalyze political change by creating new opportunities for cooperation. The Internet does not simply enable faster and easier communication, but makes it possible for people around the world to interact closely, reciprocate favors, and build trust. The information and ideas exchanged by members of these cooperative communities become key sources of political power akin to military might and economic strength.

Wu illustrates the rich world history of citizens and leaders exercising political power through communications technology. People in nineteenth-century China, for example, used the telegraph and newspapers to mobilize against the emperor. In 1970, Taiwanese cable television gave voice to a political opposition demanding democracy. Both Qatar (in the 1990s) and Great Britain (in the 1930s) relied on public broadcasters to enhance their influence abroad. Additional case studies from Brazil, Egypt, the United States, Russia, India, the Philippines, and Tunisia reveal how various technologies function to create new political energy, enabling activists to challenge institutions while allowing governments to increase their power at home and abroad.

Forging Trust Communities demonstrates that the way people receive and share information through network communities reveals as much about their political identity as their socioeconomic class, ethnicity, or religion. Scholars and students in political science, public administration, international studies, sociology, and the history of science and technology will find this to be an insightful and indispensable work…(More)”

A computational algorithm for fact-checking


Kurzweil News: “Computers can now do fact-checking for any body of knowledge, according to Indiana University network scientists, writing in an open-access paper published June 17 in PLoS ONE.

Using factual information from summary infoboxes from Wikipedia as a source, they built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”

In the first use of this method, IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia. In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of the humans’ certitude about the accuracy of these statements.
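As a rough illustration of how such a knowledge graph can score a statement, the toy sketch below treats each link as an edge and scores a statement by the shortest path between its two concepts. This is not the authors' actual algorithm (which, among other things, discounts paths that run through very general concepts); the edges and the decay rule here are invented for illustration.

```python
from collections import deque

# Toy knowledge graph: each edge is a factual link between two concepts,
# e.g. ("Paris", "France") for "Paris is the capital of France".
EDGES = [
    ("Socrates", "person"),
    ("Paris", "France"),
    ("France", "Europe"),
    ("Berlin", "Germany"),
    ("Germany", "Europe"),
]

def build_graph(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)  # treat links as undirected
    return graph

def truth_score(graph, subject, obj):
    """Score a statement by the shortest path between its two concepts:
    a direct link scores 1.0, and the score decays with path length."""
    if subject == obj:
        return 1.0
    if subject not in graph or obj not in graph:
        return 0.0
    seen = {subject}
    queue = deque([(subject, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph[node]:
            if nxt == obj:
                return 1.0 / (dist + 1)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return 0.0  # no connecting path at all

graph = build_graph(EDGES)
print(truth_score(graph, "Paris", "France"))   # → 1.0 (direct link)
print(truth_score(graph, "Paris", "Germany"))  # weaker: linked only via Europe
```

Indirect statements (“Paris is in the same continent as Germany”) still get a nonzero score through intermediate concepts, which mirrors the paper's observation that truthfulness assessment relies strongly on indirect paths.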

Dealing with misinformation and disinformation

In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history, and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses, and Oscar-winning film directors with the movie for which they won the Best Picture award. The majority of tests returned highly accurate truth scores.

Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.

Significantly, the IU team found their computational method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it correctly assessed that Steve Tesich — the Serbian-American screenwriter of the classic Hoosier film “Breaking Away” — graduated from IU, even though this information was not specifically included in the infobox about him.

Using multiple sources to improve accuracy and richness of data

“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study….

“These results are encouraging and exciting. We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public. Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”

Expanding the knowledge base

Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open-knowledge base built by Google, and note that multiple information sources could be used together to account for different belief systems….(More)”

This App Lets You See The Tough Choices Needed To Balance Your City’s Budget


Jay Cassano at FastCoExist: “Ask the average person on the street how much money their city spends on education or health care or police. Even the most well-informed probably won’t be able to come up with a dollar amount. That’s because even if you are interested, municipal budgets aren’t presented in a way that makes sense to ordinary people.

Balancing Act is a web app that displays a straightforward pie chart of a city’s budget, broken down into categories like pensions, parks & recreation, police, and education. But it doesn’t just display the current budget breakdown. It invites users to tweak it, expressing their own priorities, all while keeping the city in the black. Do you want your libraries to be better funded? Fine—but you’re going to have to raise property taxes to do it.
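The core constraint the app enforces can be sketched in a few lines. The categories, dollar figures, and function names below are invented for illustration; the real product is an interactive web app, not this script.

```python
# Hypothetical sketch of the balanced-budget rule behind an app like
# Balancing Act: spending increases must be offset by revenue increases
# so the city stays in the black. All figures are in millions, invented.

budget = {"education": 120.0, "police": 90.0, "parks": 25.0, "pensions": 65.0}
revenue = {"property_tax": 210.0, "sales_tax": 90.0}

def is_balanced(budget, revenue):
    """A plan is acceptable only if total revenue covers total spending."""
    return sum(revenue.values()) >= sum(budget.values())

def adjust(budget, revenue, spending_changes, revenue_changes):
    """Apply a citizen's proposed changes; reject any plan that runs a deficit."""
    new_budget = {k: budget[k] + spending_changes.get(k, 0.0) for k in budget}
    new_revenue = {k: revenue[k] + revenue_changes.get(k, 0.0) for k in revenue}
    if not is_balanced(new_budget, new_revenue):
        raise ValueError("Plan runs a deficit: raise revenue or cut spending.")
    return new_budget, new_revenue

# Better-funded schools require a matching property-tax increase:
new_budget, new_revenue = adjust(budget, revenue,
                                 {"education": +15.0},
                                 {"property_tax": +15.0})
```

Trying the same spending increase with no revenue change raises the deficit error — which is exactly the trade-off the app confronts users with.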

“Balancing Act provides a way for people to both understand what public entities are doing and then to weigh that against the other possible things that government can do,” says Chris Adams, president of Engaged Public, a Colorado-based consulting firm that develops technology for government and non-profits. “Especially in this era of information, all of us have a responsibility to spend a bit of time understanding how our government is spending money on our behalf.”

Hartford, Connecticut is the first city in the country that is using Balancing Act. The city was facing a $49 million budget deficit this spring, and Mayor Pedro Segarra says he took input from citizens using Balancing Act. Meanwhile, in Engaged Public’s home state, residents can input their income to generate an itemized tax receipt and then tweak the Colorado state budget as they see fit.

Engaged Public hopes that by making budgets more interactive and accessible, more people will take an interest in them.

“Budget information almost universally exists, but it’s not in accessible formats—mostly they’re in PDF files,” says Adams. “So citizens are invited to pore through tens of thousands of pages of PDFs. But that really doesn’t give you a high-level understanding of what’s at stake in a reasonable amount of time.”

If widely used, Balancing Act could be a useful tool for politicians to check the pulse of their constituents. For example, decreasing funding to parks draws a negative public reaction. But if enough people on Balancing Act experimented with the budget, saw the necessity of it, and submitted their recommendations, then an elected official might be willing to make a decision that would otherwise seem politically risky….(More)”

Harnessing the Crowd to Solve Healthcare


PSFK Labs: “While being sick is never a good situation to be in, the majority of people can still take solace in the fact that modern medicine will be able to diagnose their problem and get them on the path to a quick recovery. For a small percentage of patients, however, simply finding out what ails them can be a challenge. Despite countless visits to specialists and mounting costs, these individuals can struggle for years to find out any reliable information about their illness.

This is only exacerbated by the fact that in a heavily regulated industry like healthcare, words like “personalization,” “transparency” and “collaboration” are near impossibilities, leaving these patients locked into a system that can’t care for them. Enter CrowdMed, an online platform that uses the combined knowledge of its community to overcome these obstacles, getting people the answers and treatment they need.

…we spoke with Jared Heyman, the company’s founder, to understand how the crowd can deliver unprecedented efficiencies to a system sorely in need of them…. “CrowdMed harnesses the wisdom of crowds to solve the world’s most difficult medical cases online. Let’s say that you’ve been bouncing doctor to doctor, but don’t yet have a definitive diagnosis or treatment plan. You can submit your case on our site by answering an in‑depth patient questionnaire, uploading relevant medical records, diagnostic test results or even medical images. We expose your case to our community of currently over 15,000 medical detectives. These are people mostly with medical backgrounds who enjoy solving these challenges.

We have about a 70 percent success rate, bringing patients closer to a correct diagnosis or cure, and we do so in a very small fraction of the time and cost of what it would take through the traditional medical system….

Every entrepreneur builds upon the tools and technologies that preceded them. I think that CrowdMed needed the Internet. It needed Facebook. It needed Wikipedia. It needed Quora, and other companies or products that have proven that you can trust in the wisdom of the crowd. I think we’re built upon the shoulders of these other companies.

We looked at all these other companies that have proven the value of social networks through crowdsourcing, and that’s inspired us to do what we do. It’s been instructive for us in the best way to do it, and it’s also prepared society, psychologically and culturally, for what we’re doing. All these things were important….(More)”

Can We Focus on What Works?


John Kamensky in GovExec: “Can we shift the conversation in Washington from “waste, fraud, and abuse” to “what works and let’s fund it” instead?

I attended a recent Senate hearing on wasteful spending in the federal government, and some of the witnesses pointed to examples such as the legislative requirement that the Defense Department ship coal to Germany to heat American bases there. Others pointed to failures of large-scale computer projects and the dozens of programs on the Government Accountability Office’s High Risk List.

While many of the examples were seen as shocking, there was little conversation about focusing on what works and expanding those programs.

Interestingly, there is a movement underway across the U.S. to do just that. There are advocacy groups, foundations, states and localities promoting the idea of “let’s find out what works and fund it.” Some call this “evidence-based government,” “Moneyball government,” or “pay for success.” The federal government has dipped its toes in the water as well, with several pilot programs in various agencies and bipartisan legislation pending in Congress.

The hot, new thing that has captured the imaginations of many policy wonks is called “Pay for Success,” or in some circles, “social impact bonds.”

In 2010, the British government launched an innovative funding scheme, which it called social impact bonds, where private sector investors committed funding upfront to pay for improved social outcomes that result in public sector savings. The investors were repaid by the government only when the outcomes were determined to have been achieved.

This funding scheme has attracted substantial attention in the U.S. where it and many variations are being piloted.

What is “Pay for Success?” According to the Urban Institute, PFS is a type of performance-based contracting used to support the delivery of targeted, high-impact preventive social services, in which intervention at an early stage can reduce the need for higher-cost services in the future.

For example, experts believe that preventing asthma attacks among at-risk children reduces emergency room visits and hospitalization, which are more costly than preventive services. When the government pays for preventive services, it hopes to lower its costs….(More)”
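The underlying arithmetic of such a contract can be sketched as follows. The numbers, the success premium, and the savings cap are hypothetical design choices for illustration, not terms from any actual PFS deal.

```python
# Illustrative Pay for Success arithmetic (all figures hypothetical):
# investors fund a preventive program upfront and are repaid by the
# government only if the measured outcome target is met.

def pfs_repayment(upfront_investment, target_reduction, achieved_reduction,
                  savings_per_case, success_premium=0.05):
    """Return what the government owes investors: principal plus a premium
    if the outcome target is met, zero otherwise. Capping repayment at the
    realized savings (a hypothetical design choice) keeps the deal
    cost-neutral for the government."""
    if achieved_reduction < target_reduction:
        return 0.0  # outcomes not achieved; investors absorb the loss
    repayment = upfront_investment * (1 + success_premium)
    savings = achieved_reduction * savings_per_case
    return min(repayment, savings)

# e.g. a $1M asthma-prevention program that averts 400 ER visits,
# each of which would have cost the government $3,000:
print(pfs_repayment(1_000_000, 300, 400, 3_000))  # → 1050000.0
```

If the program had averted only 200 visits, below the 300-visit target, the function returns 0.0: the financial risk of an ineffective program sits with the investors, not the taxpayer, which is the central appeal of the model.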

When Guarding Student Data Endangers Valuable Research


Susan M. Dynarski  in the New York Times: “There is widespread concern over threats to privacy posed by the extensive personal data collected by private companies and public agencies.

Some of the potential danger comes from the government: The National Security Agency has swept up the telephone records of millions of people, in what it describes as a search for terrorists. Other threats are posed by hackers, who have exploited security gaps to steal data from retail giants like Target and from the federal Office of Personnel Management.

Resistance to data collection was inevitable — and it has been particularly intense in education.

Privacy laws have already been strengthened in some states, and multiple bills now pending in state legislatures and in Congress would tighten the security and privacy of student data. Some of this proposed legislation is so broadly written, however, that it could unintentionally choke off the use of student data for its original purpose: assessing and improving education. This data has already exposed inequities, allowing researchers and advocates to pinpoint where poor, nonwhite and non-English-speaking children have been educated inadequately by their schools.

Data gathering in education is indeed extensive: Across the United States, large, comprehensive administrative data sets now track the academic progress of tens of millions of students. Educators parse this data to understand what is working in their schools. Advocates plumb the data to expose unfair disparities in test scores and graduation rates, building cases to target more resources for the poor. Researchers rely on this data when measuring the effectiveness of education interventions.

To my knowledge there has been no large-scale, Target-like theft of private student records — probably because students’ test scores don’t have the market value of consumers’ credit card numbers. Parents’ concerns have mainly centered not on theft, but on the sharing of student data with third parties, including education technology companies. Last year, parents resisted efforts by the tech start-up InBloom to draw data on millions of students into the cloud and return it to schools as teacher-friendly “data dashboards.” Parents were deeply uncomfortable with a third party receiving and analyzing data about their children.

In response to such concerns, some pending legislation would scale back the authority of schools, districts and states to share student data with third parties, including researchers. Perhaps the most stringent of these proposals, sponsored by Senator David Vitter, a Louisiana Republican, would effectively end the analysis of student data by outside social scientists. This legislation would have banned recent prominent research documenting the benefits of smaller classes, the value of excellent teachers and the varied performance of charter schools.

Under current law, education agencies can share data with outside researchers only to benefit students and improve education. Collaborations with researchers allow districts and states to tap specialized expertise that they otherwise couldn’t afford. The Boston public school district, for example, has teamed up with early-childhood experts at Harvard to plan and evaluate its universal prekindergarten program.

In one of the longest-standing research partnerships, the University of Chicago works with the Chicago Public Schools to improve education. Partnerships like Chicago’s exist across the nation, funded by foundations and the United States Department of Education. In one initiative, a Chicago research consortium compiled reports showing high school principals that many of the seniors they had sent off to college swiftly dropped out without earning a degree. This information spurred efforts to improve high school counseling and college placement.

Specific, tailored information in the hands of teachers, principals or superintendents empowers them to do better by their students. No national survey could have told Chicago’s principals how their students were doing in college. Administrative data can provide this information, cheaply and accurately…(More)”

‘Beating the news’ with EMBERS: Forecasting Civil Unrest using Open Source Indicators


Paper by Naren Ramakrishnan et al: “We describe the design, implementation, and evaluation of EMBERS, an automated, 24×7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012, which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the uptick and downtick of incidents during the June 2013 protests in Brazil. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over base-rate methods and its capability to forecast significant societal happenings…(More)”
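A minimal sketch of the fusion-and-suppression idea described in the abstract: several source-specific models each emit a probability that unrest will occur, a fusion step combines them, and a suppression threshold trades precision against recall. The weights, thresholds, and event names below are invented; the real EMBERS pipeline is far more elaborate.

```python
# Hedged sketch of fusing source-specific unrest forecasts.
# All model names, weights, and probabilities are illustrative.

def fuse(model_probs, weights):
    """Weighted average of per-model probabilities for one candidate event."""
    total = sum(weights.values())
    return sum(model_probs[m] * w for m, w in weights.items()) / total

def forecast(candidates, weights, threshold=0.6):
    """Emit only candidate alerts whose fused confidence clears the threshold.
    Raising the threshold suppresses more alerts (favoring precision);
    lowering it emits more (favoring recall)."""
    alerts = []
    for event, model_probs in candidates.items():
        confidence = fuse(model_probs, weights)
        if confidence >= threshold:
            alerts.append((event, round(confidence, 3)))
    return alerts

weights = {"tweets": 3.0, "news": 2.0, "economic": 1.0}
candidates = {
    "protest_sao_paulo": {"tweets": 0.9, "news": 0.7, "economic": 0.4},
    "protest_caracas":   {"tweets": 0.3, "news": 0.2, "economic": 0.5},
}
print(forecast(candidates, weights))  # → [('protest_sao_paulo', 0.75)]
```

The second candidate fuses to a confidence of 0.3 and is suppressed, illustrating how a single tunable threshold lets the system trade off the evaluation criteria the abstract mentions.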

Introducing the Governance Data Alliance


“The overall assumption of the Governance Data Alliance is that governance data can contribute to improved sustainable economic and human development outcomes and democratic accountability in all countries. The contribution that governance data will make to those outcomes will of course depend on a whole range of issues that will vary across contexts; development processes, policy processes, and the role that data plays vary considerably. Nevertheless, there are some core requirements that need to be met if data is to make a difference, and articulating them can provide a framework to help us understand and improve the impact that data has on development and accountability across different contexts.

We also collectively make another implicit (and important) assumption: that the current state of affairs is vastly insufficient when it comes to the production and usage of high-quality governance data. In other words, the status quo needs to be significantly improved upon. Data gathered from participants in the April 2014 design session help to paint that picture in granular terms. Data production remains highly irregular and ad hoc; data usage does not match data production in many cases (e.g. users want data that don’t exist and do not use data that is currently produced); production costs remain high and inconsistent across producers despite possibilities for economies of scale; and feedback loops between governance data producers and governance data users are either non-existent or rarely employed. We direct readers to http://dataalliance.globalintegrity.org for a fuller treatment of those findings.

Three requirements need to be met if governance data is to lead to better development and accountability outcomes, whether those outcomes are about core “governance” issues such as levels of inclusion, or about service delivery and human development outcomes that may be shaped by the quality of governance. Those requirements are:

  • The availability of governance data.
  • The quality of governance data, including its usability and salience.
  • The informed use of governance data.

(Or to use the metaphor of markets, we face a series of market failures: supply of data is inconsistent and not uniform; user demand cannot be efficiently channeled to suppliers to redirect their production to address those deficiencies; and transaction costs abound through non-existent data standards and lack of predictability.)

If data are not available about those aspects of governance that are expected to have an impact on development outcomes and democratic accountability, no progress will be made. The risk is that data about key issues will be lacking, or that there will be gaps in coverage, whether country coverage, time periods covered, or sectors, or that data sets produced by different actors may not be comparable. This might come about for reasons including the following: a lack of knowledge – amongst producers, and amongst producers and users – about what data is needed and what data is available; high costs, and limited resources to invest in generating data; and, institutional incentives and structures (e.g. lack of autonomy, inappropriate mandate, political suppression of sensitive data, organizational dysfunction – relating, for instance, to National Statistical Offices) that limit the production of governance data….

What A Governance Data Alliance Should Do (Or, Making the Market Work)

During the several months of creative exploration around possibilities for a Governance Data Alliance, dozens of activities were identified as possible solutions (in whole or in part) to the challenges identified above. This note identifies what we believe to be the most important and immediate activities that an Alliance should undertake, knowing that other activities can and should be rolled into an Alliance work plan in the out years as the initiative matures and early successes (and failures) are achieved and digested.

A brief summary of the proposals that follow:

  1. Design and implement a peer-to-peer training program between governance data producers to improve the quality and salience of existing data.
  2. Develop a lightweight data standard to be adopted by producer organizations to make it easier for users to consume governance data.
  3. Mine the 2014 Reform Efforts Survey to understand who actually uses which governance data, currently, around the world.
  4. Leverage the 2014 Reform Efforts Survey “plumbing” to field customized follow-up surveys to better assess what data users seek in future governance data.
  5. Pilot (on a regional basis) coordinated data production amongst producer organizations to fill coverage gaps, reduce redundancies, and respond to actual usage and user preferences…(More)”