Wikipedia vandalism could thwart hoax-busting on Google, YouTube and Facebook


Daniel Funke at Poynter: “For a brief moment, the California Republican Party supported Nazism. At least, that’s what Google said.

That’s because someone vandalized the Wikipedia page for the party on May 31 to list “Nazism” alongside ideologies like “Conservatism,” “Market liberalism” and “Fiscal conservatism.” The mistake was removed from search results, with Google clarifying to Vice News that the search engine had failed to catch the vandalism in the Wikipedia entry….

Google has long drawn upon the online encyclopedia for appending basic information to search results. According to the edit log for the California GOP page, someone added “Nazism” to the party’s ideology section around 7:40 UTC on May 31. The edit was removed within a minute, but it appears Google’s algorithm scraped the page just in time for the fake.

“Sometimes people vandalize public information sources, like Wikipedia, which can impact the information that appears in search,” a Google spokesperson told Poynter in an email. “We have systems in place that catch vandalism before it impacts search results, but occasionally errors get through, and that’s what happened here.”…

According to Google, more than 99.9 percent of Wikipedia edits that show up in Knowledge Panels, which display basic information about searchable keywords at the top of results, aren’t vandalism. The user who authored the original edit to the California GOP’s page did not use a user profile, making them hard to track down.

That’s a common tactic among people who vandalize Wikipedia pages, a practice the nonprofit has documented extensively. But given the volume of edits made on Wikipedia — about 10 per second, with 600 new pages per day — and the fact that Facebook and YouTube are now pulling from it to provide more context to posts, the potential for and effect of abuse are high….(More)”.

Research Shows Political Acumen, Not Just Analytical Skills, is Key to Evidence-Informed Policymaking


Press Release: “Results for Development (R4D) has released a new study unpacking how evidence translators play a key and somewhat surprising role in ensuring policymakers have the evidence they need to make informed decisions. Translators — who can be evidence producers, policymakers, or intermediaries such as journalists, advocates and expert advisors — identify, filter, interpret, adapt, contextualize and communicate data and evidence for the purposes of policymaking.

The study, Translators’ Role in Evidence-Informed Policymaking, provides a better understanding of who translators are and how different factors influence translators’ ability to promote the use of evidence in policymaking. This research shows translation is an essential function and that, absent individuals or organizations taking up the translator role, evidence translation and evidence-informed policymaking often do not take place.

“We began this research assuming that translators’ technical skills and analytical prowess would prove to be among the most important factors in predicting when and how evidence made its way into public sector decision making,” Nathaniel Heller, executive vice president for integrated strategies at Results for Development, said. “Surprisingly, that turned out not to be the case, and other ‘soft’ skills play a far larger role in translators’ efficacy than we had imagined.”

Key findings include:

  • Translator credibility and reputation are crucial to the ability to gain access to policymakers and to promote the uptake of evidence.
  • Political savvy and stakeholder engagement are among the most critical skills for effective translators.
  • Conversely, analytical skills and the ability to adapt, transform and communicate evidence were identified as being less important stand-alone translator skills.
  • Evidence translation is most effective when initiated by those in power or when translators place those in power at the center of their efforts.

The study includes a definitional and theoretical framework as well as a set of research questions about key enabling and constraining factors that might affect evidence translators’ influence. It also focuses on two cases in Ghana and Argentina to validate and debunk some of the intellectual frameworks around policy translators that R4D and others in the field have already developed. The first case focuses on Ghana’s blue-ribbon commission formed by the country’s president in 2015, which was tasked with reviewing Ghana’s national health insurance scheme. The second case looks at Buenos Aires’ 2016 government-led review of the city’s right to information regime….(More)”.

Ontario is trying a wild experiment: Opening access to its residents’ health data


Dave Gershgorn at Quartz: “The world’s most powerful technology companies have a vision for the future of healthcare. You’ll still go to your doctor’s office, sit in a waiting room, and explain your problem to someone in a white coat. But instead of relying solely on their own experience and knowledge, your doctor will consult an algorithm that’s been trained on the symptoms, diagnoses, and outcomes of millions of other patients. Instead of a radiologist reading your x-ray, a computer will be able to detect minute differences and instantly identify a tumor or lesion. Or at least that’s the goal.

AI systems like these, currently under development by companies including Google and IBM, can’t read textbooks and journals, attend lectures, and do rounds—they need millions of real life examples to understand all the different variations between one patient and another. In general, AI is only as good as the data it’s trained on, but medical data is exceedingly private—most developed countries have strict health data protection laws, such as HIPAA in the United States….

These approaches, which favor companies with considerable resources, are pretty much the only way to get large troves of health data in the US because the American health system is so disparate. Healthcare providers keep personal files on each of their patients, and can only transmit them to other accredited healthcare workers at the patient’s request. There’s no single place where all health data exists. It’s more secure, but less efficient for analysis and research.

Ontario, Canada, might have a solution, thanks to its single-payer healthcare system. All of Ontario’s health data exists in a few enormous caches under government control. (After all, the government needs to keep track of all the bills it’s paying.) Similar structures exist elsewhere in Canada, such as Quebec, but Toronto, which has become a major hub for AI research, wants to lead the charge in providing this data to businesses.

Until now, the only people allowed to study this data were government organizations or researchers who partnered with the government to study disease. But Ontario has now entrusted the MaRS Discovery District—a cross between a tech incubator and WeWork—to build a platform for approved companies and researchers to access this data, dubbed Project Spark. The project, initiated by MaRS and Canada’s University Health Network, began exploring how to share this data after both organizations expressed interest to the government about giving broader health data access to researchers and companies looking to build healthcare-related tools.

Project Spark’s goal is to create an API, or a way for developers to request information from the government’s data cache. This could be used to create an app for doctors to access the full medical history of a new patient. Ontarians could access their health records at any time through similar software, and catalog health issues as they occur. Or researchers, like the ones trying to build AI to assist doctors, could request a different level of access that provides anonymized data on Ontarians who meet certain criteria. If you wanted to study every Ontarian who had Alzheimer’s disease over the last 40 years, that data would only be authorization and a few lines of code away.
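The article does not say what Project Spark’s API will actually look like, but the kind of researcher request it describes might be assembled roughly as follows. This is a speculative sketch: the endpoint URL, parameter names and access tiers are invented for illustration, not drawn from any published Project Spark documentation.

```python
# Hypothetical sketch of the researcher-level query described above.
# The endpoint, parameter names, and access tiers are illustrative
# assumptions; Project Spark's actual API had not been published.

def build_cohort_query(condition, start_year, end_year, anonymized=True):
    """Assemble request parameters for an anonymized cohort pull."""
    return {
        "endpoint": "https://api.projectspark.example/v1/cohorts",  # placeholder
        "params": {
            "condition": condition,     # e.g. a diagnosis keyword
            "from": start_year,
            "to": end_year,
            "anonymized": anonymized,   # researcher tier: no identifiers
        },
    }

# "Every Ontarian who had Alzheimer's disease over the last 40 years":
query = build_cohort_query("alzheimers", 1978, 2018)
print(query["params"])
```

The “authorization” the article mentions would sit in front of a call like this: the platform, not the researcher, decides which access level a given set of credentials unlocks.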

There are currently 100 companies lined up to get access to the data, which comprises health records from Ontario’s 14 million residents. (MaRS won’t say who the companies are.) …(More)”

Data Stewards: Data Leadership to Address 21st Century Challenges


Post by Stefaan Verhulst: “…Over the last two years, we have focused on the opportunities (and challenges) surrounding what we call “data collaboratives.” Data collaboratives are an emerging form of public-private partnership, in which information held by companies (or other entities) is shared with the public sector, civil society groups, research institutes and international organizations. …

For all its promise, the practice of data collaboratives remains ad hoc and limited. In part, this is a result of the lack of a well-defined, professionalized concept of data stewardship within corporations that has a mandate to explore ways to harness the potential of their data towards positive public ends.

Today, each attempt to establish a cross-sector partnership built on the analysis of private-sector data requires significant and time-consuming efforts, and businesses rarely have personnel tasked with undertaking such efforts and making relevant decisions.

As a consequence, the process of establishing data collaboratives and leveraging privately held data for evidence-based policy making and service delivery is onerous, generally one-off, not informed by best practices or any shared knowledge base, and prone to dissolution when the champions involved move on to other functions.

By establishing data stewardship as a corporate function, recognized and trusted within corporations as a valued responsibility, and by creating the methods and tools needed for responsible data-sharing, the practice of data collaboratives can become regularized, predictable, and de-risked….

To take stock of current practice and scope needs and opportunities, we held a small yet in-depth kick-off event at the offices of the Cloudera Foundation in San Francisco on May 8, 2018, attended by representatives from LinkedIn, Facebook, Uber, Mastercard, DigitalGlobe, Cognizant, Streetlight Data, the World Economic Forum, and NetHope — among others.

Four Key Takeaways

The discussions were varied and wide-ranging.

Several reflected on the risks involved — including the risks of NOT sharing or collaborating on privately held data that could improve people’s lives (and on some occasions save lives).

Others warned that the window of opportunity to increase the practice of data collaboratives may be closing — given new regulatory requirements and other barriers that may disincentivize corporations from engaging with third parties around their data.

Ultimately four key takeaways emerged. These areas — at the nexus of opportunities and challenges — are worth considering further, because they help us better understand both the potential and limitations of data collaboratives….(More)”

Latin America is fighting corruption by opening up government data


Anoush Darabi in apolitical: “Hardly a country in Latin America has been untouched by corruption scandals; this was just one of the more bizarre episodes. In response, using a variety of open online platforms, both city and national governments are working to lift the lid on government activity, finding new ways to tackle corruption with technology….

In Buenos Aires, the government is dealing with the problem by making the details of all its public works projects completely transparent. With BA Obras, an online platform, the city maps its public works projects and lists detailed information on their cost, progress towards completion and the names of the contractors.

“We allocate an enormous amount of money,” said Alvaro Herrero, Under Secretary for Strategic Management and Institutional Quality for the government of Buenos Aires, who helped to build the tool. “We need to be accountable to citizens in terms of what are we doing with that money.”

The portal is designed to be accessible to the average user. Citizens can filter the map to focus on their neighbourhood, revealing information on existing projects with the click of a mouse.

“A journalist called our communications team a couple of weeks ago,” said Herrero. “He said: ‘I want all the information on all the infrastructure projects that the government has, and I want the documentation.’ Our guy’s answer was, ‘OK, I will send you all the information in ten seconds.’ All he had to do was send a link to the platform.”

Since launching in October 2017 with 80 public works projects, the platform now features over 850. It has had 75,000 unique views, the majority coming in the month after launching.

Making people aware and encouraging them to use it is key. “The main challenge is not the platform itself, but getting residents to use it,” said Herrero. “We’re still in that process.”

Brazil’s public spending checkers

Brazil is using big data analysis to scrutinise its spending via its Public Expenditure Observatory (ODP).

The ODP was founded in 2008 to help monitor spending across government departments systematically. In such a large country, spending data is difficult to pull together, and its volume makes it difficult to analyse. The ODP pulls disparate information from government databases across the country into a central location, puts it into a consistent format and analyses it for inconsistencies. Alongside analysis, the ODP also makes the data public.

For example, in 2010 the ODP analysed expenses made on credit cards by federal government officers. They discovered that 11% of all transactions that year were suspicious, requiring further investigation. After the data was published, credit card expenditure dropped by 25%….(More)”.
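The article does not describe the ODP’s actual detection methods, but the kind of screening involved can be illustrated with a minimal sketch: flag transactions that look cash-like (large round amounts) or that are extreme statistical outliers relative to the rest of the data. The rules and thresholds below are invented for illustration only.

```python
# Illustrative only -- not the ODP's actual method. Flags card
# transactions that are large round amounts (cash-like) or extreme
# statistical outliers relative to the dataset.
from statistics import mean, stdev

def flag_suspicious(transactions, z_threshold=3.0):
    """Return the subset of transactions worth a closer look."""
    amounts = [t["amount"] for t in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for t in transactions:
        cash_like = t["amount"] >= 500 and t["amount"] % 100 == 0
        outlier = sigma > 0 and (t["amount"] - mu) / sigma > z_threshold
        if cash_like or outlier:
            flagged.append(t)
    return flagged

txns = [{"id": i, "amount": a} for i, a in enumerate([40, 55, 62, 48, 51, 1000])]
print([t["amount"] for t in flag_suspicious(txns)])  # [1000]
```

A real system would score many more features (merchant category, timing, split purchases) and, as the ODP’s “further investigation” step suggests, route flags to human auditors rather than treat them as findings.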

Activating Agency or Nudging?


Article by Michael Walton: “Two ideas in development – activating agency of citizens and using “nudges” to change their behavior – seem diametrically opposed in spirit: activating latent agency at the ground level versus top-down designs that exploit people’s behavioral responses. Yet both start from a psychological focus and a belief that changes in people’s behavior can lead to “better” outcomes, for the individuals involved and for society. So how should we think of these contrasting sets of ideas? When should each approach be used?…

Let’s compare the two approaches with respect to diagnostic frame, practice and ethics.

Diagnostic frame.  

The common ground is recognition that people use shortcuts for decision-making, in ways that can hurt their own interests. In both approaches, there is an emphasis that decision-making is particularly tough for poor people, given the sheer weight of daily problem-solving. In behavioral economics one core idea is that we have limited mental “bandwidth” and this form of scarcity hampers decision-making. However, in the “agency” tradition, there is much more emphasis on unearthing and working with the origins of the prevailing mental models, with respect to social exclusion, stigmatization, and the typically unequal economic and cultural relations with more powerful groups in a society. One approach works more with symptoms, the other with root causes.

Implications for practice.  

The two approaches on display in Cerrito both concern social gains, and both involve a role for an external actor. But here the contrast is sharp. In the “nudge” approach the external actor is a beneficent technocrat, trying out alternative offers to poor (or non-poor) people to improve outcomes. A vivid example is alternative messages to taxpayers in Guatemala, which induced varying improvements in tax payments. In the “agency” approach the essence of the interaction is between a front-line worker and an individual or family, with a co-created diagnosis and plan, designed around goals and specific actions that the poor person chooses. This is akin to what anthropologist Arjun Appadurai termed increasing the “capacity to aspire,” and can extend to greater engagement in civic and political life.

Ethics.

In both approaches, ethics is central. As the distinction between nudging for social good and nudging for electoral gain implies, some form of ethical regulation is surely needed. In “action to activate agency,” the central ethical issue is maintaining equality in design between activist and citizen, and explicit ownership of any decisions.

What does this imply?

To some degree this is a question of domain of action. Nudging is most appropriate where there is a fully supported political and social program and the issue is how to make it work (as in paying taxes). The agency approach has a broader ambition, but starts from domains that are potentially within an individual’s control once the sources of “ineffective” or inhibited behavior are tackled, including via front-line interactions with public or private actors….(More)”.

Do Delivery Units Deliver?: Assessing Government Innovations


Technical note by Mariano Lafuente and Sebastián González, prepared as part of the Inter-American Development Bank’s (IDB) agenda on the Center of Government: “… analyzes how delivery units (DUs) have been adapted by Latin American and Caribbean governments, the degree to which they have contributed to meeting governments’ priority goals between 2007 and 2018, and the lessons learned along the way. The analysis, which draws lessons from 14 governments in the region, shows that the implementation of the DU model has varied as it has been tailored to each country’s context and that, under certain preconditions, it has contributed to: (i) improved management using specific tools in contexts where institutional development is low; and (ii) attaining results that have a direct impact on citizens. The objective of this document is to serve as a guide for governments interested in applying similar management models as well as to set out an agenda for the future of DUs in the region….(More)“.

Technology and satellite companies open up a world of data


Gabriel Popkin at Nature: “In the past few years, technology and satellite companies’ offerings to scientists have increased dramatically. Thousands of researchers now use high-resolution data from commercial satellites for their work. Thousands more use cloud-computing resources provided by big Internet companies to crunch data sets that would overwhelm most university computing clusters. Researchers use the new capabilities to track and visualize forest and coral-reef loss; monitor farm crops to boost yields; and predict glacier melt and disease outbreaks. Often, they are analysing much larger areas than has ever been possible — sometimes even encompassing the entire globe. Such studies are landing in leading journals and grabbing media attention.

Commercial data and cloud computing are not panaceas for all research questions. NASA and the European Space Agency carefully calibrate the spectral quality of their imagers and test them with particular types of scientific analysis in mind, whereas the aim of many commercial satellites is to take good-quality, high-resolution pictures for governments and private customers. And no company can compete with Landsat’s free, publicly available, 46-year archive of images of Earth’s surface. For commercial data, scientists must often request images of specific regions taken at specific times, and agree not to publish raw data. Some companies reserve cloud-computing assets for researchers with aligned interests such as artificial intelligence or geospatial-data analysis. And although companies publicly make some funding and other resources available for scientists, getting access to commercial data and resources often requires personal connections. Still, by choosing the right data sources and partners, scientists can explore new approaches to research problems.

Mapping poverty

Joshua Blumenstock, an information scientist at the University of California, Berkeley (UCB), is always on the hunt for data he can use to map wealth and poverty, especially in countries that do not conduct regular censuses. “If you’re trying to design policy or do anything to improve living conditions, you generally need data to figure out where to go, to figure out who to help, even to figure out if the things you’re doing are making a difference.”

In a 2015 study, he used records from mobile-phone companies to map Rwanda’s wealth distribution (J. Blumenstock et al. Science 350, 1073–1076; 2015). But to track wealth distribution worldwide, patching together data-sharing agreements with hundreds of these companies would have been impractical. Another potential information source — high-resolution commercial satellite imagery — could have cost him upwards of US$10,000 for data from just one country….

Use of commercial images can also be restricted. Scientists are free to share or publish most government data or data they have collected themselves. But they are typically limited to publishing only the results of studies of commercial data, and at most a limited number of illustrative images.

Many researchers are moving towards a hybrid approach, combining public and commercial data, and running analyses locally or in the cloud, depending on need. Weiss still uses his tried-and-tested ArcGIS software from Esri for studies of small regions, and jumps to Earth Engine for global analyses.

The new offerings herald a shift from an era when scientists had to spend much of their time gathering and preparing data to one in which they’re thinking about how to use them. “Data isn’t an issue any more,” says Roy. “The next generation is going to be about what kinds of questions are we going to be able to ask?”…(More)”.

Democracy doomsday prophets are missing this critical shift


Bruno Kaufmann and Joe Mathews in the Washington Post: “The new conventional wisdom seems to be that electoral democracy is in decline. But this ignores another widespread trend: direct democracy at the local and regional level is booming, even as disillusion with representative government at the national level grows.

Today, 113 of the world’s 117 democratic countries offer their citizens legally or constitutionally established rights to bring forward a citizens’ initiative, referendum or both. And since 1980, roughly 80 percent of countries worldwide have had at least one nationwide referendum or popular vote on a legislative or constitutional issue.

Of all the nationwide popular votes in the history of the world, more than half have taken place in the past 30 years. As of May 2018, almost 2,000 nationwide popular votes on substantive issues have taken place, with 1,059 in Europe, 191 in Africa, 189 in Asia, 181 in the Americas and 115 in Oceania, based on our research.

That is just at the national level. Other major democracies — Germany, the United States and India — do not permit popular votes on substantive issues nationally but support robust direct democracy at the local and regional levels. The number of local votes on issues has so far defied all attempts to count them — they run into the tens of thousands.

This robust democratization, at least when it comes to direct legislation, provides a context that’s generally missing when doomsday prophets suggest that democracy is dying by pointing to authoritarian-leaning leaders like Turkish President Recep Tayyip Erdogan, Russian President Vladimir Putin, Hungarian Prime Minister Viktor Orbán, Philippine President Rodrigo Duterte and U.S. President Donald Trump.

Indeed, the two trends — the rise of populist authoritarianism in some nations and the rise of local and direct democracy in some areas — are related. Frustration is growing with democratic systems at national levels, and yes, some people become more attracted to populism. But some of that frustration is channeled into positive energy — into making local democracy more democratic and direct.

Cities from Seoul to San Francisco are hungry for new and innovative tools that bring citizens into processes of deliberation that allow the people themselves to make decisions and feel invested in government actions. We’ve seen local governments embrace participatory budgeting, participatory planning, citizens’ juries and a host of experimental digital tools in service of that desired mix of greater public deliberation and more direct public action….(More).”

How Citizens Can Hack EU Democracy


Stephen Boucher at Carnegie Europe: “…To connect citizens with the EU’s decision-making center, European politicians will need to provide ways to effectively hack this complex system. These democratic hacks need to be visible and accessible, easily and immediately implementable, viable without requiring changes to existing European treaties, and capable of having a traceable impact on policy. Many such devices could be imagined around these principles. Here are three ideas to spur debate.

Hack 1: A Citizens’ Committee for the Future in the European Parliament

The European Parliament has proposed that twenty-seven of the seventy-three seats left vacant by Brexit should be redistributed among the remaining member states. According to one concept, the other forty-six unassigned seats could be used to recruit a contingent of ordinary citizens from around the EU to examine legislation from the long-term perspective of future generations. Such a “Committee for the Future” could be given the power to draft a response to a yearly report on the future produced by the president of the European Parliament, initiate debates on important political themes of their own choosing, make submissions on future-related issues to other committees, and be consulted by members of the European Parliament (MEPs) on longer-term matters.

MEPs could decide to use these forty-six vacant seats to invite this Committee for the Future to sit, at least on a trial basis, with yearly evaluations. This arrangement would have real benefits for EU politics, acting as an antidote to the union’s existential angst and helping the EU think systemically and for the longer term on matters such as artificial intelligence, biodiversity, climate concerns, demography, mobility, and energy.

Hack 2: An EU Participatory Budget

In 1989, the city of Porto Alegre, Brazil, decided to cede control of a share of its annual budget to its citizens. This practice, known as participatory budgeting, has since spread globally. As of 2015, over 1,500 participatory budgets had been implemented across five continents. These processes generally have had a positive impact, with people proving that they take public spending matters seriously.

To replicate these experiences at the European level, the complex realities of EU budgeting would require specific features. First, participative spending probably would need to be both local and related to wider EU priorities in order to ensure that citizens see its relevance and its wider European implications. Second, significant resources would need to be allocated to help citizens come up with and promote projects. For instance, the city of Paris has ensured that each suggested project that meets the eligibility requirements has a desk officer within its administration to liaise with the idea’s promoters. It dedicates significant resources to reach out to citizens, in particular in the poorer neighborhoods of Paris, both online and face-to-face. Similar efforts would need to be deployed across Europe. And third, in order to overcome institutional complexities, the European Parliament would need to work with citizens as part of its role in negotiating the budget with the European Council.

Hack 3: An EU Collective Intelligence Forum

Many ideas have been put forward to address popular dissatisfaction with representative democracy by developing new forums such as policy labs, consensus conferences, and stakeholder facilitation groups. Yet many citizens still feel disenchanted with representative democracy, including at the EU level, where they also strongly distrust lobby groups. They need to be involved more purposefully in policy discussions.

A yearly Deliberative Poll could be run on a matter of significance, ahead of key EU summits and possibly around the Commission president’s State of the Union address. On the model of the first EU-wide Deliberative Poll, Tomorrow’s Europe, this event would bring together in Brussels a random sample of citizens from all twenty-seven EU member states and enable them to discuss various social, economic, and foreign policy issues affecting the EU and its member states. This concept would have a number of advantages in terms of promoting democratic participation in EU affairs. By inviting a truly representative sample of citizens to deliberate on complex EU matters over a weekend within its premises, the European Parliament would become the focus of a high-profile event that would draw media attention. This would be especially beneficial if—unlike Tomorrow’s Europe—the poll were not held at arm’s length by EU policymakers, but with high-level national officials attending to witness good-quality deliberation remolding citizens’ views….(More)”.
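One mechanical question behind such a poll is how to spread a fixed number of seats across member states in proportion to population. A minimal sketch using the largest-remainder method is below; the population figures (in millions) are rough illustrative values, not official statistics.

```python
# Largest-remainder allocation of a fixed sample across countries.
# Populations (in millions) are rough illustrative figures.

def allocate_sample(populations, sample_size):
    """Allocate sample_size slots proportionally to population."""
    total = sum(populations.values())
    quotas = {c: sample_size * p / total for c, p in populations.items()}
    seats = {c: int(q) for c, q in quotas.items()}   # integer floors first
    leftover = sample_size - sum(seats.values())
    # Remaining slots go to the largest fractional remainders.
    for c in sorted(quotas, key=lambda c: quotas[c] - seats[c],
                    reverse=True)[:leftover]:
        seats[c] += 1
    return seats

seats = allocate_sample({"DE": 83, "FR": 67, "PL": 38, "IE": 5, "MT": 0.5}, 100)
print(sum(seats.values()))  # 100
```

Note that a purely proportional rule can leave the smallest states with no seats at all (Malta gets zero here); a real deliberative poll would likely guarantee a minimum per country, a design choice this sketch deliberately omits.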