Political Corruption in a World in Transition


Book edited by Jonathan Mendilow and Éric Phélippeau: “This book argues that the mainstream definitions of corruption, and the key expectations they embed concerning the relationship between corruption, democracy, and the process of democratization, require reexamination. Even critics who did not consider the stable institutions and legal clarity of veteran democracies a cure-all assumed that the process of widening the influence on government decision making and implementation allows non-elites to defend their interests, define the acceptable sources and uses of wealth, and demand government accountability. This has proved correct, especially insofar as ‘petty corruption’ is involved. But the assumption that corruption necessarily involves the evasion of democratic principles and a ‘market approach’ in which the corrupt seek to maximize profit does not exhaust the possible incentives for corruption, the types of behaviors involved (for obvious reasons, the tendency in the literature is to focus on bribery), or the range of situations that ‘permit’ corruption in democracies. In the effort to identify some of the problems that require recognition, and to offer a more exhaustive alternative, the chapters in this book focus on corruption in democratic settings (including NGOs and the United Nations, which have so far been largely ignored), while focusing mainly on behaviors other than bribery….(More)”.

The Age of Digital Interdependence


Report of the High-level Panel on Digital Cooperation: “The immense power and value of data in the modern economy can and must be harnessed to meet the SDGs, but this will require new models of collaboration. The Panel discussed potential pooling of data in areas such as health, agriculture and the environment to enable scientists and thought leaders to use data and artificial intelligence to better understand issues and find new ways to make progress on the SDGs. Such data commons would require criteria for establishing relevance to the SDGs, standards for interoperability, rules on access and safeguards to ensure privacy and security.

Anonymised data – information that is rendered anonymous in such a way that the data subject is not or no longer identifiable – about progress toward the SDGs is generally less sensitive and controversial than the use of personal data of the kind companies such as Facebook, Twitter or Google may collect to drive their business models, or facial and gait data that could be used for surveillance. However, personal data can also serve development goals, if handled with proper oversight to ensure its security and privacy.

For example, individual health data is extremely sensitive – but many people’s health data, taken together, can allow researchers to map disease outbreaks, compare the effectiveness of treatments and improve understanding of conditions. Aggregated data from individual patient cases was crucial to containing the Ebola outbreak in West Africa. Private and public sector healthcare providers around the world are now using various forms of electronic medical records. These help individual patients by making it easier to personalise health services, but the public health benefits require these records to be interoperable.
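The aggregation step described above can be sketched in a few lines: individual records, each carrying a sensitive identifier, are reduced to per-region case counts before anyone downstream sees them. This is a toy illustration under assumed field names and made-up regions, not the actual pipeline used in the Ebola response:

```python
from collections import Counter

def aggregate_cases(records, region_key="region"):
    """Reduce individual-level case records to per-region counts,
    discarding the identifying fields in the process."""
    return Counter(r[region_key] for r in records)

# Hypothetical individual-level records; patient_id never leaves this step.
records = [
    {"patient_id": "a1", "region": "Montserrado"},
    {"patient_id": "b2", "region": "Montserrado"},
    {"patient_id": "c3", "region": "Margibi"},
]

counts = aggregate_cases(records)
print(dict(counts))  # {'Montserrado': 2, 'Margibi': 1}
```

Note that simple aggregation is only a first step: with small counts, individuals can still be re-identified, which is why the report stresses oversight and safeguards alongside techniques like this.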

There is scope to launch collaborative projects to test the interoperability of data, standards and safeguards across the globe. The World Health Assembly’s consideration of a global strategy for digital health in 2020 presents an opportunity to launch such projects, which could initially be aimed at global health challenges such as Alzheimer’s and hypertension.

Improved digital cooperation on a data-driven approach to public health has the potential to lower costs, build new partnerships among hospitals, technology companies, insurance providers and research institutes and support the shift from treating diseases to improving wellness. Appropriate safeguards are needed to ensure the focus remains on improving health care outcomes. With testing, experience and necessary protective measures as well as guidelines for the responsible use of data, similar cooperation could emerge in many other fields related to the SDGs, from education to urban planning to agriculture…(More)”.

100 Radical Innovation Breakthroughs for the future


The Radical Innovation Breakthrough Inquirer for the European Commission: “This report provides insights on 100 emerging developments that may exert a strong impact on global value creation and offer important solutions to societal needs. We identified this set of emerging developments through a carefully designed procedure that combined machine learning algorithms and human evaluation. After successive waves of selection and refinement, the resulting 100 emerging topics were subjected to several assessment procedures, including expert consultation and analysis of related patents and publications.

Having analysed the potential importance of each of these innovations for Europe, their current maturity and the relative strength of Europe in related R&D, we can make the general policy recommendations that follow.

However, it is important to note that our recommendations are based on the extremes of the distributions, and thus not all RIBs are named under the recommendations. Yet, the totality of the set of Radical Innovation Breakthrough (RIB) and Radical Societal Breakthrough (RSB) descriptions and their recent progress directions constitutes an important collection of intelligence material that can inform strategic planning in research and innovation policy, industry and enterprise policy, and local development policy….(More)”.

Bringing Truth to the Internet


Article by Karen Kornbluh and Ellen P. Goodman: “The first volume of Special Counsel Robert Mueller’s report notes that “sweeping” and “systemic” social media disinformation was a key element of Russian interference in the 2016 election. No sooner were Mueller’s findings public than Twitter suspended a host of bots that had been promoting a “Russiagate hoax.”

Since at least 2016, conspiracy theories like Pizzagate and QAnon have flourished online and bled into mainstream debate. Earlier this year, a British member of Parliament called social media companies “accessories to radicalization” for their role in hosting and amplifying radical hate groups after the New Zealand mosque shooter cited such groups and attempted to fuel more of them. In Myanmar, anti-Rohingya forces used Facebook to spread rumors that spurred ethnic cleansing, according to a UN special rapporteur. These platforms are vulnerable to those who aim to prey on intolerance, peer pressure, and social disaffection. Our democracies are being compromised. They work only if the information ecosystem has integrity—if it privileges truth and channels difference into nonviolent discourse. But the ecosystem is increasingly polluted.

Around the world, a growing sense of urgency about the need to address online radicalization is leading countries to embrace ever more draconian solutions: After the Easter bombings in Sri Lanka, the government shut down access to Facebook, WhatsApp, and other social media platforms. And a number of countries are considering adopting laws requiring social media companies to remove unlawful hate speech or face hefty penalties. According to Freedom House, “In the past year, at least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’ and online manipulation.”

The flaw with these censorious remedies is this: They focus on the content that the user sees—hate speech, violent videos, conspiracy theories—and not on the structural characteristics of social media design that create vulnerabilities. Content moderation requirements that cannot scale are not only doomed to be ineffective exercises in whack-a-mole, but they also create free expression concerns, by turning either governments or platforms into arbiters of acceptable speech. In some countries, such as Saudi Arabia, content moderation has become justification for shutting down dissident speech.

When countries pressure platforms to root out vaguely defined harmful content and disregard the design vulnerabilities that promote that content’s amplification, they are treating a symptom and ignoring the disease. The question isn’t “How do we moderate?” Instead, it is “How do we promote design change that optimizes for citizen control, transparency, and privacy online?”—exactly the values that the early Internet promised to embody….(More)”.

From Planning to Prototypes: New Ways of Seeing Like a State


Fleur Johns at Modern Law Review: “All states have pursued what James C. Scott characterised as modernist projects of legibility and simplification: maps, censuses, national economic plans and related legislative programs. Many, including Scott, have pointed out blindspots embedded in these tools. As such criticism persists, however, the synoptic style of law and development has changed. Governments, NGOs and international agencies now aspire to draw upon immense repositories of digital data. Modes of analysis too have changed. No longer is legibility a precondition for action. Law‐ and policy‐making are being informed by business development methods that prefer prototypes over plans. States and international institutions continue to plan, but also seek insight from the release of minimally viable policy mock‐ups. Familiar critiques of law and development work, and arguments for its reform, have limited purchase on these practices, Scott’s included. Effective critical intervention in this field today requires careful attention to be paid to these emergent patterns of practice…(More)”.

Introducing ‘AI Commons’: A framework for collaboration to achieve global impact


Press Release: “Last week’s 3rd annual AI for Good Global Summit once again showcased the growing number of Artificial Intelligence (AI) projects with promise to advance the United Nations Sustainable Development Goals (SDGs).

Now, using the Summit’s momentum, AI innovators and humanitarian leaders are prepared to take the ‘AI for Good’ movement to the next level.

They are working together to launch an ‘AI Commons’ that aims to scale AI for Good projects and maximize their impact across the world.

The AI Commons will enable AI adopters to connect with AI specialists and data owners to align incentives for innovation and develop AI solutions to precisely defined problems.

“The concept of AI Commons has developed over three editions of the Summit and is now motivating implementation,” said ITU Secretary-General Houlin Zhao in closing remarks to the summit. “AI and data need to be a shared resource if we are serious about scaling AI for good. The community supporting the Summit is creating infrastructure to scale-up their collaboration − to convert the principles underlying the Summit into global impact.”…

The AI Commons will provide an open framework for collaboration, a decentralized system to democratize problem solving with AI.

It aims to be a “knowledge space”, says Banifatemi, answering a key question: “How can problem solving with AI become common knowledge?”

“The goal is to be an open initiative, like a Linux effort, like an open-source network, where everyone can participate and we jointly share and we create an abundance of knowledge, knowledge of how we can solve problems with AI,” said Banifatemi.

AI development and application will build on the state of the art, enabling AI solutions to scale with the help of shared datasets, testing and simulation environments, AI models and associated software, and storage and computing resources….(More)”.

AI and the Global South: Designing for Other Worlds


Chapter by Chinmayi Arun in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), The Oxford Handbook of Ethics of AI: “This chapter is about the ways in which AI affects, and will continue to affect, the Global South. It highlights why the design and deployment of AI in the South should concern us. 

Towards this, it discusses what is meant by the South. The term has a history connected with the ‘Third World’ and has referred to countries that share post-colonial history and certain development goals. However, scholars have expanded and refined it to include different kinds of marginal, disenfranchised populations, such that the South is now a plural concept – there are Souths. 

The risks of the ways in which AI affects Southern populations include concerns of discrimination, bias, oppression, exclusion and bad design. These can be exacerbated in the context of vulnerable populations, especially those without access to human rights law or institutional remedies. This chapter outlines these risks as well as the international human rights law that is applicable. It argues that a human rights-centric, inclusive, empowering, context-driven approach is necessary….(More)”.

Number of fact-checking outlets surges to 188 in more than 60 countries


Mark Stencel at Poynter: “The number of fact-checking outlets around the world has grown to 188 in more than 60 countries amid global concerns about the spread of misinformation, according to the latest tally by the Duke Reporters’ Lab.

Since the last annual fact-checking census in February 2018, we’ve added 39 more outlets that actively assess claims from politicians and social media, a 26% increase. The new total is also more than four times the 44 fact-checkers we counted when we launched our global database and map in 2014.

Globally, the largest growth came in Asia, which went from 22 to 35 outlets in the past year. Nine of the 27 fact-checking outlets that launched since the start of 2018 were in Asia, including six in India. Latin American fact-checking also saw a growth spurt in that same period, with two new outlets in Costa Rica, and others in Mexico, Panama and Venezuela.

The actual worldwide total is likely much higher than our current tally. That’s because more than a half-dozen of the fact-checkers we’ve added to the database since the start of 2018 began as election-related partnerships that involved the collaboration of multiple organizations. And some of those election partners are discussing ways to continue or reactivate that work — either together or on their own.

Over the past 12 months, five separate multimedia partnerships enlisted more than 60 different fact-checking organizations and other news companies to help debunk claims and verify information for voters in Mexico, Brazil, Sweden, Nigeria and the Philippines. And the Poynter Institute’s International Fact-Checking Network assembled a separate team of 19 media outlets from 13 countries to consolidate and share their reporting during the run-up to last month’s elections for the European Parliament. Our database includes each of these partnerships, along with several others — but not each of the individual partners. And because they were intentionally short-run projects, three of these big partnerships appear among the 74 inactive projects we also document in our database.

Politics isn’t the only driver for fact-checkers. Many outlets in our database are concentrating efforts on viral hoaxes and other forms of online misinformation — often in coordination with the big digital platforms on which that misinformation spreads.

We also continue to see new topic-specific fact-checkers such as Metafact in Australia and Health Feedback in France — both of which launched in 2018 to focus on claims about health and medicine for a worldwide audience….(More)”.

The Tricky Ethics of Using YouTube Videos for Academic Research


Jane C. Hu in P/S Magazine: “…But just because something is legal doesn’t mean it’s ethical. That doesn’t mean it’s necessarily unethical, either, but it’s worth asking questions about how and why researchers use social media posts, and whether those uses could be harmful. I was once a researcher who had to obtain human-subjects approval from a university institutional review board, and I know it can be a painstaking application process with long wait times. Collecting data from individuals takes a long time too. If you could just sub in YouTube videos in place of collecting your own data, that saves time, money, and effort. But that could be at the expense of the people whose data you’re scraping.

But, you might say, if people don’t want to be studied online, then they shouldn’t post anything. But most people don’t fully understand what “publicly available” really means or its ramifications. “You might know intellectually that technically anyone can see a tweet, but you still conceptualize your audience as being your 200 Twitter followers,” Fiesler says. In her research, she’s found that the majority of people she’s polled have no clue that researchers study public tweets.

Some may disagree that it’s researchers’ responsibility to work around social media users’ ignorance, but Fiesler and others are calling for their colleagues to be more mindful about any work that uses publicly available data. For instance, Ashley Patterson, an assistant professor of language and literacy at Penn State University, ultimately decided to use YouTube videos in her dissertation work on biracial individuals’ educational experiences. That’s a decision she arrived at after carefully considering her options each step of the way. “I had to set my own levels of ethical standards and hold myself to it, because I knew no one else would,” she says. One of Patterson’s first steps was to ask herself what YouTube videos would add to her work, and whether there were any other ways to collect her data. “It’s not a matter of whether it makes my life easier, or whether it’s ‘just data out there’ that would otherwise go to waste. The nature of my question and the response I was looking for made this an appropriate piece [of my work],” she says.

Researchers may also want to consider qualitative, hard-to-quantify contextual cues when weighing ethical decisions. What kind of data is being used? Fiesler points out that tweets about, say, a television show are way less personal than ones about a sensitive medical condition. Anonymized written materials, like Facebook posts, could be less invasive than using someone’s face and voice from a YouTube video. And the potential consequences of the research project are worth considering too. For instance, Fiesler and other critics have pointed out that researchers who used YouTube videos of people documenting their experience undergoing hormone replacement therapy to train an artificial intelligence to identify trans people could be putting their unwitting participants in danger. It’s not obvious how the results of Speech2Face will be used, and, when asked for comment, the paper’s researchers said they’d prefer to quote from their paper, which pointed to a helpful purpose: providing a “representative face” based on the speaker’s voice on a phone call. But one can also imagine dangerous applications, like doxing anonymous YouTubers.

One way to get ahead of this, perhaps, is to take steps to explicitly inform participants their data is being used. Fiesler says that, when her team asked people how they’d feel after learning their tweets had been used for research, “not everyone was necessarily super upset, but most people were surprised.” They also seemed curious; 85 percent of participants said that, if their tweet were included in research, they’d want to read the resulting paper. “In human-subjects research, the ethical standard is informed consent, but inform and consent can be pulled apart; you could potentially inform people without getting their consent,” Fiesler suggests….(More)”.

How to use data for good — 5 priorities and a roadmap


Stefaan Verhulst at apolitical: “…While the overarching message emerging from these case studies was promising, several barriers were identified that, if not addressed systematically, could undermine the potential of data science to address critical public needs and limit the opportunity to scale the practice more broadly.

Below we summarise the five priorities that emerged through the workshop for the field moving forward.

1. Become People-Centric

Much of the data currently used for drawing insights involves or is generated by people.

These insights have the potential to impact people’s lives in many positive and negative ways. Yet, the people and the communities represented in this data are largely absent when practitioners design and develop data for social good initiatives.

To ensure data is a force for positive social transformation (i.e., that it addresses real people’s needs and impacts lives in a beneficial way), we need to experiment with new ways to engage people at the design, implementation, and review stage of data initiatives beyond simply asking for their consent.


As we explain in our People-Led Innovation methodology, different segments of people can play multiple roles ranging from co-creation to commenting, reviewing and providing additional datasets.

The key is to ensure their needs are front and center, and that data science for social good initiatives seek to address questions related to real problems that matter to society-at-large (a key concern that led The GovLab to instigate the 100 Questions Initiative).

2. Establish Data About the Use of Data (for Social Good)

Many data for social good initiatives remain fledgling.

As currently designed, the field often struggles with translating sound data projects into positive change. As a result, many potential stakeholders—private sector and government “owners” of data as well as public beneficiaries—remain unsure about the value of using data for social good, especially against the background of high risks and transaction costs.

The field needs to overcome such limitations if data insights and their benefits are to spread. For that, we need hard evidence about data’s positive impact. Ironically, the field is held back by an absence of good data on the use of data—a lack of reliable empirical evidence that could guide new initiatives.

The field needs to prioritise developing a far more solid evidence base and “business case” to move data for social good from a good idea to reality.

3. Develop End-to-End Data Initiatives

Too often, data for social good initiatives focus on the “data-to-knowledge” pipeline without focusing on how to move “knowledge into action.”

As such, the impact remains limited and many efforts never reach an audience that can actually act upon the insights generated. Without becoming more sophisticated in our efforts to provide end-to-end projects and taking “data from knowledge to action,” the positive impact of data will be limited….

4. Invest in Common Trust and Data Steward Mechanisms 

For data for social good initiatives (including data collaboratives) to flourish and scale, there must be substantial trust among all parties involved, and with the public at large.

Establishing such a platform of trust requires each actor to invest in developing essential trust mechanisms such as data governance structures, contracts, and dispute resolution methods. Today, designing and establishing these mechanisms take tremendous time, energy, and expertise. These high transaction costs result from the lack of common templates and the need to design governance structures from scratch each time…

5. Build Bridges Across Cultures

As C.P. Snow famously described in his lecture on “Two Cultures and the Scientific Revolution,” we must bridge the “two cultures” of science and humanism if we are to solve the world’s problems….

To implement these five priorities we will need experimentation at the operational but also the institutional level. This involves the establishment of “data stewards” within organisations that can accelerate data for social good initiatives in a responsible manner, integrating the five priorities above….(More)”