Wikipedia’s not as biased as you might think


Ananya Bhattacharya in Quartz: “The internet is as open as people make it. Often, people limit their Facebook and Twitter circles to like-minded people and only follow certain subreddits, blogs, and news sites, creating an echo chamber of sorts. In a sea of biased content, Wikipedia is one of the few online outlets that strives for neutrality. After 15 years in operation, it’s starting to see results.

Researchers at Harvard Business School evaluated almost 4,000 articles in Wikipedia’s online database against the same entries in Encyclopaedia Britannica to compare their biases. They focused on English-language articles about US politics, especially controversial topics, that appeared in both outlets in 2012.

“That is just not a recipe for coming to a conclusion,” Shane Greenstein, one of the study’s authors, said in an interview. “We were surprised that Wikipedia had not failed, had not fallen apart in the last several years.”

Greenstein and his co-author Feng Zhu categorized each article as “blue” or “red.” Drawing from research in political science, they identified terms that are idiosyncratic to each party. For instance, political scientists have identified that Democrats were more likely to use phrases such as “war in Iraq,” “civil rights,” and “trade deficit,” while Republicans used phrases such as “economic growth,” “illegal immigration,” and “border security.”…
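To make the classification concrete, here is a minimal sketch of a phrase-count slant score in that spirit. It is an illustration only, not the authors’ code: the three phrases per party are the examples quoted above, whereas the underlying political-science work derives hundreds of such phrases from congressional speech.

```python
# Illustrative phrase-based slant score (hypothetical, not the study's code).

# Party-typical phrases, taken from the examples in the article; real slant
# indices are built from hundreds of phrases mined from congressional speech.
DEMOCRAT_PHRASES = ["war in iraq", "civil rights", "trade deficit"]
REPUBLICAN_PHRASES = ["economic growth", "illegal immigration", "border security"]

def slant_score(text: str) -> float:
    """Score in [-1, 1]: negative leans 'blue' (Democrat), positive leans 'red'."""
    text = text.lower()
    blue = sum(text.count(p) for p in DEMOCRAT_PHRASES)
    red = sum(text.count(p) for p in REPUBLICAN_PHRASES)
    total = blue + red
    if total == 0:
        return 0.0  # no partisan phrases at all; treat the article as neutral
    return (red - blue) / total

sample = "Coverage of the war in Iraq reignited debate over border security."
print(slant_score(sample))  # 0.0: one phrase from each list, so they cancel
```

On a measure like this, an article that is substantially revised by editors from both camps accumulates phrases from both lists and drifts toward zero, which is consistent with the study’s finding that revision reduces bias.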

“In comparison to expert-based knowledge, collective intelligence does not aggravate the bias of online content when articles are substantially revised,” the authors wrote in the paper. “This is consistent with a best-case scenario in which contributors with different ideologies appear to engage in fruitful online conversations with each other, in contrast to findings from offline settings.”

More surprisingly, the authors found that the 2.8 million registered volunteer editors who were reviewing the articles also became less biased over time. “You can ask questions like ‘do editors with red tendencies tend to go to red articles or blue articles?’” Greenstein said. “You find a prevalence of opposites attract, and that was striking.” The researchers even identified the political stance of a number of anonymous editors based on their IP locations, and the trend held steadfast….(More)”

The Risk to Civil Liberties of Fighting Crime With Big Data


In the New York Times: “…Sharing data, both among the parts of a big police department and between the police and the private sector, “is a force multiplier,” he said.

Companies working with the military and intelligence agencies have long practiced these kinds of techniques, which the companies are bringing to domestic policing, in much the way surplus military gear has beefed up American SWAT teams.

Palantir first built up its business by offering products like maps of social networks of extremist bombers and terrorist money launderers, and figuring out efficient driving routes to avoid improvised explosive devices.

Palantir used similar data-sifting techniques in New Orleans to spot individuals most associated with murders. Law enforcement departments around Salt Lake City used Palantir to allow common access to 40,000 arrest photos, 520,000 case reports and information like highway and airport data — building human maps of suspected criminal networks.

People in the predictive business sometimes compare what they do to controlling the other side’s “OODA loop,” a term first developed by a fighter pilot and military strategist named John Boyd.

OODA stands for “observe, orient, decide, act” and is a means of managing information in battle.

“Whether it’s war or crime, you have to get inside the other side’s decision cycle and control their environment,” said Robert Stasio, a project manager for cyberanalysis at IBM, and a former United States government intelligence official. “Criminals can learn to anticipate what you’re going to do and shift where they’re working, employ more lookouts.”

IBM sells tools that also enable police to become less predictable, for example, by taking different routes into an area identified as a crime hotspot. It has also conducted studies that show changing tastes among online criminals — for example, a move from hacking retailers’ computers to stealing health care data, which can be used to file for federal tax refunds.

But there are worries about what military-type data analysis means for civil liberties, even among the companies that get rich on it.

“It definitely presents challenges to the less sophisticated type of criminal, but it’s creating a lot of what is called ‘Big Brother’s little helpers,’” Mr. Bowman said. For now, he added, much of the data abundance problem is that “most police aren’t very good at this.”…(More)”

Data Ethics – The New Competitive Advantage


Book by Gry Hasselbalch and Pernille Tranberg: “…describes over 50 cases of mainly private companies working with data ethics to varying degrees…

Respect for privacy and the right to control one’s own data are becoming key parameters to gain a competitive edge in today’s business world. Companies, organisations and authorities which view data ethics as a social responsibility, giving it the same importance as environmental awareness and respect for human rights, are tomorrow’s winners. Digital trust is paramount to digital growth and prosperity.
This book combines broad trend analyses with case studies to examine companies which use data ethics to varying degrees. The authors make the case that citizens and consumers are no longer just concerned about a lack of control over their data, but they also have begun to act. In addition, they describe alternative business models, advances in technology and a new European data protection regulation, all of which combine to foster a growing market for data-ethical products and services…(More)”.

Overcoming the Public-Sector Coordination Problem


Ricardo Hausmann at Project Syndicate: “Public-private cooperation or coordination is receiving considerable attention nowadays. A plethora of centers for the study of business and government relations have been created, and researchers have produced a large literature on the design, analysis, and evaluation of public-private partnerships. Even the World Economic Forum has been transformed into “an international organization for public-private cooperation.”

Of course, private-private coordination has been the essence of economics for the past 250 years. While Adam Smith started us on the optimistic belief that an invisible hand would take care of most coordination issues, in the intervening period economists discovered all sorts of market failures, informational imperfections, and incentive problems, which have given rise to rules, regulations, and other forms of government and societal intervention. This year’s Nobel Prize in Economic Sciences was granted to Oliver Hart and Bengt Holmström for their contribution to understanding contracts, a fundamental device for private-private coordination.

But much less attention has been devoted to public-public coordination. This is surprising, because anyone who has worked in government knows that coordinating the public and private sectors to address a particular issue, while often complicated, is a cakewalk compared to the problem of herding the cats that constitute the panoply of government agencies.

The reason for this difficulty is the other side of Smith’s invisible hand. In the private sector, the market mechanism provides the elements of a self-organizing system, thanks to three interconnected structures: the price system, the profit motive, and capital markets. In the public sector, this mechanism is either non-existent or significantly different and less efficient.

The price system is a decentralized information system that reveals people’s willingness to buy or sell and the wisdom of buying some inputs in order to produce a certain output at the going market price. The profit motive provides an incentive system to respond to the information that prices contain. And capital markets mobilize resources for activities that are expected to be profitable, that is, those that adequately respond to prices.

By contrast, most public services have no prices, there is not supposed to be a profit motive in their provision, and capital markets are not supposed to choose what to fund: the money funds whatever is in the budget.

…addressing most problems in government involves multiple agencies….

One solution is to create a market-like mechanism within the government. The idea is to assign a portion of the budget, say 3-5%, to a central pool of funds to be requested by one ministry but to be executed by another, as if one were buying services from the other. These resources would allow the demand for public goods to permeate the allocation of budgetary resources across ministries….

The central pool of resources is designed to increase the responsiveness of one ministry’s back end to the demands of society as identified by another ministry’s front end, without these resources competing with the priorities that each ministry has for its “own” budget.
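As a rough illustration of how such a pool might work in practice, consider the sketch below. The ministry names, budget figures, and the 4% rate are hypothetical; the article specifies only a central pool of some 3-5% of the budget, requested by one ministry and executed by another.

```python
# Hypothetical sketch of Hausmann's central budget pool; all names and
# numbers are invented for illustration.

POOL_RATE = 0.04  # within the 3-5% range suggested in the article

budgets = {"health": 100.0, "education": 80.0, "infrastructure": 120.0}

# Each ministry cedes a fixed share of its budget to the central pool...
pool = sum(b * POOL_RATE for b in budgets.values())          # 12.0
own = {m: b * (1 - POOL_RATE) for m, b in budgets.items()}   # "own" budgets

# ...which one ministry can then spend on services executed by another,
# e.g. health (front end, facing citizen demand) pays infrastructure
# (back end) to build rural clinics.
requests = [("health", "infrastructure", 5.0)]

for requester, executor, amount in requests:
    if amount <= pool:
        pool -= amount
        own[executor] += amount  # the executor is paid as if selling a service
        print(f"{requester} buys {amount:.1f} of services from {executor}")

print(f"pool remaining: {pool:.1f}")  # 7.0
```

The point of the sketch is the flow of funds: the pool sits outside any ministry’s “own” budget, so demand surfaced by a front-end ministry can buy capacity from a back-end ministry without crowding out either one’s existing priorities.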

By allocating a small proportion of each year’s budget to priorities identified in this way, we may find that, over time, budgets become more responsive and better reflect society’s evolving needs. And public-private coordination may flourish once the public-public bottlenecks are removed….(More)”

What We Should Mean When We Talk About Citizen Engagement


Eric Gordon in Governing: “…But here’s the problem: The institutional language of engagement has been defined by its measurement. Chief engagement officers in corporations are measuring milliseconds on web pages, and clicks on ads, and not relations among people. This is disproportionately influencing the values of democracy and the responsibility of public institutions to protect them.

Too often, when government talks about engagement, it is talking about those things that are measurable, but it is handing employees mandates imbued with ambiguity. For example, the executive order issued by Mayor Murray in Seattle is a bold directive for the “timely implementation by all City departments of equitable outreach and engagement practices that reaffirm the City’s commitment to inclusive participation.”

This extraordinary mayoral mandate reflects clear democratic values, but it lacks clarity of methods. It reflects a need to use digital technology to enhance process, but it doesn’t explain why. This in no way is meant as a criticism of Seattle’s effort; rather, it is simply meant to illustrate the complexity of engagement in practice. Departments are rewarded for quantifiable efficiency, not relationships. Just because something is called engagement, this fundamental truth won’t change.

Government needs to be much more clear about what it really means when it talks about engagement. In 2015, Living Cities and the Citi Foundation launched the City Accelerator on Public Engagement, which was an effort to source and support effective practices of public engagement in city government. This 18-month project, based on a cohort of five cities throughout the United States, is just now coming to an end. Out of it came several lasting insights, one of which I will share here. City governments are institutions in transition that need to ask why people should care.

After the election, who is going to care about government? How do you get people to care about the services that government provides? How do you get people to care about the health outcomes in their neighborhoods? How do you get people to care about ensuring accessible, high-quality public education?

I want to propose that when government talks about civic engagement, it is really talking about caring. When you care about something, you make a decision to be attentive to that thing. But “caring about” is one end of what I’ll call a spectrum of caring. On the other end, there is “caring for,” when, as described by philosopher Nel Noddings, “what we do depends not upon rules, or at least not wholly on rules — not upon a prior determination of what is fair or equitable — but upon a constellation of conditions that is viewed through both the eyes of the one-caring and the eyes of the cared-for.”

In short, caring-for is relational. When one cares for another, the outcomes of an encounter are not predetermined, but arise through relation….(More)”.

Why citizen input is crucial to the government design process


Mark Forman in NextGov: “…Whether agencies are implementing an application or enterprisewide solution, end-user input (from both citizens and government workers) is a requirement for success. In fact, the only path to success in digital government is the “moment of truth,” the point of interaction when a government delivers a service or solves a problem for its citizens.

A recent example illustrates this challenge. A national government recently deployed a new application that enables citizens to submit questions to agency offices using their mobile devices. The mobile application, while functional and working to specifications, failed to address the core issue: Most citizens prefer asking questions via email, an option that was terminated when the new app was deployed.

Digital technologies offer government agencies numerous opportunities to cut costs and improve citizen services. But in the rush to implement new capabilities, IT professionals often neglect to consider fully their users’ preferences, knowledge, limitations and goals.

When developing new ways to deliver services, designers must expand their focus beyond the agency’s own operating interests to ensure they also create a satisfying experience for citizens. If not, the applications will likely be underutilized or even ignored, thus undermining the anticipated cost-savings and performance gains that set the project in motion.

Government executives also must recognize that merely relying on user input creates a risk of “paving the cowpath”: innovations cannot significantly improve the customer experience if users do not recognize the value of new technologies in simplifying a task, making it more worthwhile, or eliminating it altogether.

Many digital government playbooks and guidance direct IT organizations to create a satisfying citizen experience by incorporating user-centered design methodology into their projects. UCD is a process for ensuring a new solution or tool is designed from the perspective of users. Rather than forcing government workers or the public to adapt to the new solution, UCD helps create a solution tailored to their abilities, preferences and needs….effective UCD is built upon four primary principles or guidelines:

  • Focus on the moment of truth. A new application or service must actually be something that citizens want and need via the channel used, and not just easy to use.
  • Optimize outcomes, not just processes. True transformation occurs when citizens’ expectations and needs remain the constant center of focus. Merely overlaying new technology on business as usual may provide a prettier interface, but success requires a clear benefit for the public at the moment of truth in the interaction with government.
  • Evolve processes over time to help citizens adapt to new applications. In most instances, citizens will make a smoother transition to new services when processes are changed gradually to be more intuitive rather than with an abrupt, flip-of-the-switch approach.
  • Combine UCD with robust DevOps. Agencies need a strong DevOps process to incorporate what they learn about citizens’ preferences and needs as they develop, test and deploy new citizen services….(More)”

Big Data Is Not a Monolith


Book edited by Cassidy R. Sugimoto, Hamid R. Ekbia and Michael Mattioli: “Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies.

The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data’s ramifications. The contributors look at big data’s effect on individuals as it exerts social control through monitoring, mining, and manipulation; big data and society, examining both its empowering and its constraining effects; big data and science, considering issues of data governance, provenance, reuse, and trust; and big data and organizations, discussing data responsibility, “data harm,” and decision making….(More)”

Crowdsourcing campaign rectifies translation errors


Springwise: “A few months ago, Seoul City launched a month-long campaign during September and October asking people to help correct poorly translated street signs. For example, one sign incorrectly abbreviated “Bridge,” which should be corrected to “Brg.” Those who find mistakes can submit them via email, including a picture of the sign and location details. The initiative is targeting signs in English, Chinese and Japanese in public places such as subway stations, bus stops and tourist information sites. Seoul City is offering prizes to those who successfully spot mistakes. Top spotters receive a reward of KRW 200,000 (around USD 180).


The scheme comes as part of a drive to improve the experience of tourists travelling to the South Korean capital. According to a Seoul city official, “Multilingual signs are important standards to assess a country’s competitiveness in the tourism business. We want to make sure that foreigners in Seoul suffer no inconvenience.”…(More)”

Ten Actions to Implement Big Data Initiatives: A Study of 65 Cities


IBM Center for the Business of Government: “Professor Ho conducted a survey and phone interviews with city officials responsible for Big Data initiatives. Based on his research, the report presents a framework for Big Data initiatives which consists of two major cycles: the data cycle and the decision-making cycle. Each cycle is described in the report.

The trend toward Big Data initiatives is likely to accelerate in future years. In anticipation of the increased use of Big Data, Professor Ho identified factors that are likely to influence its adoption by local governments. He identified three organizational factors that influence adoption: leadership attention, adequate staff capacity, and pursuit of partners. In addition, he identified four organizational strategies that influence adoption: governance structures, team approach, incremental initiatives, and Big Data policies.

Based on his research findings, Professor Ho sets forth 10 recommendations for those responsible for implementing cities’ Big Data initiatives—five recommendations are directed to city leaders and five to city executives. A key recommendation is that city leaders should think about a “smart city system,” not just data. Another key recommendation is that city executives should develop a multi-year strategic data plan to enhance the effectiveness of Big Data initiatives….(More)”

The power of prediction markets


Adam Mann in Nature: “It was a great way to mix science with gambling, says Anna Dreber. The year was 2012, and an international group of psychologists had just launched the ‘Reproducibility Project’ — an effort to repeat dozens of psychology experiments to see which held up [1]. “So we thought it would be fantastic to bet on the outcome,” says Dreber, who leads a team of behavioural economists at the Stockholm School of Economics.

In particular, her team wanted to see whether scientists could make good use of prediction markets: mini Wall Streets in which participants buy and sell ‘shares’ in a future event at a price that reflects their collective wisdom about the chance of the event happening. As a control, Dreber and her colleagues first asked a group of psychologists to estimate the odds of replication for each study on the project’s list. Then the researchers set up a prediction market for each study, and gave the same psychologists US$100 apiece to invest.
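The article does not describe the market mechanism behind these experiments, but a common choice for small research markets is Hanson’s logarithmic market scoring rule (LMSR), an automated market maker whose share price can be read directly as a collective probability estimate. The sketch below is illustrative only; the liquidity parameter and the trade are invented.

```python
import math

class LMSRMarket:
    """Binary prediction market run by an LMSR automated market maker."""

    def __init__(self, b: float = 10.0):
        self.b = b        # liquidity: larger b means prices move less per trade
        self.q_yes = 0.0  # outstanding "will replicate" shares
        self.q_no = 0.0   # outstanding "will not replicate" shares

    def _cost(self, q_yes: float, q_no: float) -> float:
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        """Current YES price, interpretable as the market's probability."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares: float) -> float:
        """Buy YES shares from the market maker; returns what the trader pays."""
        cost = self._cost(self.q_yes + shares, self.q_no) - self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return cost

market = LMSRMarket(b=10.0)
print(f"P(replicates) = {market.price_yes():.2f}")  # 0.50 before any trading
paid = market.buy_yes(5.0)                          # an optimistic trader buys in
print(f"paid {paid:.2f}; P(replicates) now {market.price_yes():.2f}")  # ~0.62
```

Because each purchase raises the price of the side being bought, the closing price summarizes what traders were collectively willing to stake, which is how $100 apiece of play money can be distilled into a probability of replication.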

When the Reproducibility Project revealed last year that it had been able to replicate fewer than half of the studies examined [2], Dreber found that her experts hadn’t done much better than chance with their individual predictions. But working collectively through the markets, they had correctly guessed the outcome 71% of the time [3].

Experiments such as this are a testament to the power of prediction markets to turn individuals’ guesses into forecasts of sometimes startling accuracy. That uncanny ability ensures that during every US presidential election, voters avidly follow the standings for their favoured candidates on exchanges such as Betfair and the Iowa Electronic Markets (IEM). But prediction markets are increasingly being used to make forecasts of all kinds, on everything from the outcomes of sporting events to the results of business decisions. Advocates maintain that they allow people to aggregate information without the biases that plague traditional forecasting methods, such as polls or expert analysis….

Prediction markets have also had some high-profile misfires, however — such as giving the odds of a Brexit ‘stay’ vote as 85% on the day of the referendum, 23 June. (UK citizens in fact narrowly voted to leave the European Union.) And prediction markets lagged well behind conventional polls in predicting that Donald Trump would become the 2016 Republican nominee for US president.

Such examples have inspired academics to probe prediction markets. Why do they work as well as they do? What are their limits, and why do their predictions sometimes fail?…(More)”