Reclaiming Civic Spaces


Special edition of Sur International Journal on Human Rights on crackdowns on civil society around the world: “As shown by both the geographic reach of the contributions (authors from 16 countries) and the infographics in this edition, the issue is clearly of global concern. The first section of the journal seeks to address why this crackdown is happening, who is driving it and whether there is cross-fertilisation of ideas between actors.

The edition then focuses on the strategies that activists are implementing to combat the crackdown. A summary of these strategies can be seen in a video which captures a number of the author activists’ perspectives, shared when they gathered in São Paulo in October 2017 for a writers’ retreat….

The role of new media and online spaces in combatting the crackdown is prevalent in the contributions. The ease and speed with which information can be passed on platforms such as Facebook, Twitter, WhatsApp and Telegram was cited as being important in mobilising support rapidly as well as helping reach previously untapped constituents (Sara Alsherif, Zoya Rehman, Raull Santiago, Victoria Ohaeri, Valerie Msoka, and Denise Dora, Ravindran Daniel and Barbara Klugman). Despite the opportunities, Bondita Acharya, Helen Kezie-Nwoha, Sondos Shabayek, Shalini Eddens and Susan Jessop, and Sara Alsherif and Zoya Rehman all note the challenges that digital tools present. Harassment of activists online is becoming increasingly common, particularly towards women. In addition, authorities are constantly developing new ways of monitoring these platforms. To combat this, Sara Alsherif describes how developing relationships with tech companies can help activists stay one step ahead.

The use of video is explored by Hagai El-Ad and Raull Santiago, both of whom describe how the medium is an important tool in capturing the restrictions being inflicted on civil society in their respective contexts. Moreover, Raull Santiago describes how his collective is trying to use these video images, captured by members of his community, in legal processes against the police force….(More)”.

The Follower Factory


Nicholas Confessore, Gabriel J.X. Dance, Richard Harris and Mark Hansen in The New York Times: “…Fake accounts, deployed by governments, criminals and entrepreneurs, now infest social media networks. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15 percent — are automated accounts designed to simulate real people, though the company claims that number is far lower.

In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform. These fake accounts, known as bots, can help sway advertising audiences and reshape political debates. They can defraud businesses and ruin reputations. Yet their creation and sale fall into a legal gray zone.

“The continued viability of fraudulent accounts and interactions on social media platforms — and the professionalization of these fraudulent services — is an indication that there’s still much work to do,” said Senator Mark Warner, the Virginia Democrat and ranking member of the Senate Intelligence Committee, which has been investigating the spread of fake accounts on Facebook, Twitter and other platforms.

Despite rising criticism of social media companies and growing scrutiny by elected officials, the trade in fake followers has remained largely opaque. While Twitter and other platforms prohibit buying followers, Devumi and dozens of other sites openly sell them. And social media companies, whose market value is closely tied to the number of people using their services, make their own rules about detecting and eliminating fake accounts.

Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users. “The allegations are false, and we do not have knowledge of any such activity,” Mr. Calas said in an email exchange in November.

The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models. In most cases, the records show, they purchased their own followers. In others, their employees, agents, public relations companies, family members or friends did the buying. For just pennies each — sometimes even less — Devumi offers Twitter followers, views on YouTube, plays on SoundCloud, the music-hosting site, and endorsements on LinkedIn, the professional-networking site….(More)”.

A Roadmap to a Nationwide Data Infrastructure for Evidence-Based Policymaking


Introduction by Julia Lane and Andrew Reamer of a Special Issue of the Annals of the American Academy of Political and Social Science: “Throughout the United States, there is broad interest in expanding the nation’s capacity to design and implement public policy based on solid evidence. That interest has been stimulated by new types of data now available that can transform the way in which policy is designed and implemented. Yet progress in making use of sensitive data has been hindered by the legal, technical, and operational obstacles to access for research and evaluation. Progress has also been hindered by an almost exclusive focus on the interests and needs of the data users, rather than the interests and needs of the data providers. In addition, data stewardship is largely artisanal in nature.

There are very real consequences that result from lack of action. State and local governments are often hampered in their capacity to effectively mount and learn from innovative efforts. Although jurisdictions often have treasure troves of data from existing programs, the data are stove-piped, underused, and poorly maintained. The experience reported by one large city public health commissioner is too common: “We commissioners meet periodically to discuss specific childhood deaths in the city. In most cases, we each have a thick file on the child or family. But the only time we compare notes is after the child is dead.”1 In reality, most localities lack the technical, analytical, staffing, and legal capacity to make effective use of existing and emerging resources.

It is our sense that fundamental changes are necessary and a new approach must be taken to building data infrastructures. In particular,

  1. Privacy and confidentiality issues must be addressed at the beginning—not added as an afterthought.
  2. Data providers must be involved as key stakeholders throughout the design process.
  3. Workforce capacity must be developed at all levels.
  4. The scholarly community must be engaged to identify the value to research and policy….

To develop a roadmap for the creation of such an infrastructure, the Bill and Melinda Gates Foundation, together with the Laura and John Arnold Foundation, hosted a day-long workshop of more than sixty experts to discuss the findings of twelve commissioned papers and their implications for action. This volume of The ANNALS showcases those twelve articles. The workshop papers were grouped into three thematic areas: privacy and confidentiality, the views of data producers, and comprehensive strategies that have been used to build data infrastructures in other contexts. The authors and the attendees included computer scientists, social scientists, practitioners, and data producers.

This introductory article places the research in both an historical and a current context. It also provides a framework for understanding the contribution of the twelve articles….(More)”.

Can scientists learn to make ‘nature forecasts’ just as we forecast the weather?


 at The Conversation: “We all take weather forecasts for granted, so why isn’t there a ‘nature forecast’ to answer these questions? Enter the new scientific field of ecological forecasting. Ecologists have long sought to understand the natural world, but only recently have they begun to think systematically about forecasting.

Much of the current research in ecological forecasting is focused on long-term projections. It considers questions that play out over decades to centuries, such as how species may shift their ranges in response to climate change, or whether forests will continue to take up carbon dioxide from the atmosphere.

However, in a new article that I co-authored with 18 other scientists from universities, private research institutes and the U.S. Geological Survey, we argue that focusing on near-term forecasts over spans of days, seasons and years will help us better understand, manage and conserve ecosystems. Developing this ability would be a win-win for both science and society….
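As a rough illustration of what a near-term, iterative forecast involves (this sketch is not from the article, and all parameters are hypothetical), a simple autoregressive model can be stepped forward while tracking how predictive uncertainty grows with lead time:

```python
import math

def iterate_forecast(last_obs, phi, mean, proc_sd, steps):
    """Propagate a toy AR(1) forecast forward, recording how
    predictive uncertainty widens at longer lead times."""
    forecasts = []
    m, v = last_obs, 0.0
    for _ in range(steps):
        m = mean + phi * (m - mean)       # mean reverts toward the long-run average
        v = phi ** 2 * v + proc_sd ** 2   # process noise accumulates each step
        forecasts.append((m, math.sqrt(v)))
    return forecasts

# Hypothetical example: forecasting a vegetation index from today's observation
for lead, (m, sd) in enumerate(iterate_forecast(0.8, phi=0.9, mean=0.5,
                                                proc_sd=0.05, steps=4), 1):
    print(f"day {lead}: mean {m:.3f}, sd {sd:.3f}")
```

The widening standard deviation is the point: a useful near-term forecast reports not just a number but how quickly confidence decays with lead time.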

Big data is driving many of the advances in ecological forecasting. Today ecologists have orders of magnitude more data compared to just a decade ago, thanks to sustained public funding for basic science and environmental monitoring. This investment has given us better sensors, satellites and organizations such as the National Ecological Observatory Network, which collects high-quality data from 81 field sites across the United States and Puerto Rico. At the same time, cultural shifts across funding agencies, research networks and journals have made that data more open and available.

Digital technologies make it possible to access this information more quickly than in the past. Field notebooks have given way to tablets and cell networks that can stream new data into supercomputers in real time. Computing advances allow us to build better models and use more sophisticated statistical methods to produce forecasts….(More)”.

Studying Migrant Assimilation Through Facebook Interests


Antoine Dubois, Emilio Zagheni, Kiran Garimella, and Ingmar Weber at arXiv: “Migrants’ assimilation is a major challenge for European societies, in part because of the sudden surge of refugees in recent years and in part because of long-term demographic trends. In this paper, we use Facebook’s data for advertisers to study the levels of assimilation of Arabic-speaking migrants in Germany, as seen through the interests they express online. Our results indicate a gradient of assimilation along demographic lines, language spoken and country of origin. Given the difficulty of collecting timely migration data, in particular for traits related to cultural assimilation, the methods that we develop and the results that we provide open new lines of research that computational social scientists are well-positioned to address….(More)”.
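The paper's measures derive from Facebook's advertising audience estimates; as a loose illustration of the general idea (entirely hypothetical interests and numbers, not the authors' actual method), one could compare the interest profiles of two groups with a similarity score:

```python
from math import sqrt

def interest_share(audience_counts):
    """Normalize raw audience-size estimates into an interest distribution."""
    total = sum(audience_counts.values())
    return {k: v / total for k, v in audience_counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two interest distributions (1.0 = identical mix)."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Hypothetical audience estimates for two groups in the same country
migrants = interest_share({"football": 120_000, "cooking": 80_000, "oktoberfest": 5_000})
host_pop = interest_share({"football": 900_000, "cooking": 400_000, "oktoberfest": 300_000})

score = cosine_similarity(migrants, host_pop)
print(f"interest-based similarity: {score:.2f}")
```

A score near 1 would suggest the two groups express a similar mix of interests; tracking it across age, gender, or language subgroups is the kind of gradient the authors report.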

Rights-Based and Tech-Driven: Open Data, Freedom of Information, and the Future of Government Transparency


Beth Noveck at the Yale Human Rights and Development Journal: “Open data policy mandates that government proactively publish its data online for the public to reuse. It is a radically different approach to transparency than traditional right-to-know strategies as embodied in Freedom of Information Act (FOIA) legislation in that it involves ex ante rather than ex post disclosure of whole datasets. Although both open data and FOIA deal with information sharing, the normative essence of open data is participation rather than litigation. By fostering public engagement, open data shifts the relationship between state and citizen from a monitorial to a collaborative one, centered around using information to solve problems together. This Essay explores the theory and practice of open data in comparison to FOIA and highlights its uses as a tool for advancing human rights, saving lives, and strengthening democracy. Although open data undoubtedly builds upon the fifty-year legal tradition of the right to know about the workings of one’s government, open data does more than advance government accountability. Rather, it is a distinctly twenty-first century governing practice borne out of the potential of big data to help solve society’s biggest problems. Thus, this Essay charts a thoughtful path toward a twenty-first century transparency regime that takes advantage of and blends the strengths of open data’s collaborative and innovation-centric approach and the adversarial and monitorial tactics of freedom of information regimes….(More)”.

How AI Could Help the Public Sector


Emma Martinho-Truswell in the Harvard Business Review: “A public school teacher grading papers faster is a small example of the wide-ranging benefits that artificial intelligence could bring to the public sector. AI could be used to make government agencies more efficient, to improve the job satisfaction of public servants, and to increase the quality of services offered. Talent and motivation are wasted on routine tasks when they could be applied to more creative ones.

Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world. In addition to education, public servants are using AI to help them make welfare payments and immigration decisions, detect fraud, plan new infrastructure projects, answer citizen queries, adjudicate bail hearings, triage health care cases, and establish drone paths.  The decisions we are making now will shape the impact of artificial intelligence on these and other government functions. Which tasks will be handed over to machines? And how should governments spend the labor time saved by artificial intelligence?

So far, the most promising applications of artificial intelligence use machine learning, in which a computer program learns and improves its own answers to a question by creating and iterating algorithms from a collection of data. This data is often in enormous quantities and from many sources, and a machine learning algorithm can find new connections among data that humans might not have expected. IBM’s Watson, for example, is a treatment recommendation-bot, sometimes finding treatments that human doctors might not have considered or known about.

A machine learning program may be better, cheaper, faster, or more accurate than humans at tasks that involve lots of data, complicated calculations, or repetitive work with clear rules. Those in public service, and in many other big organizations, may recognize part of their job in that description. The very fact that government workers are often following a set of rules — a policy or set of procedures — already presents many opportunities for automation.
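As a toy illustration of a program whose answers come from data rather than hand-written rules (the features and labels below are hypothetical, not from the article), a nearest-neighbour classifier is about the simplest possible example:

```python
def nearest_neighbour(train, query):
    """Classify a query point by the label of its closest training example:
    the answer is looked up from data, not derived from explicit rules."""
    best = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    return best[1]

# Hypothetical training data: (feature vector, label), e.g. claim attributes -> flag
train = [
    ((0.1, 0.2), "legitimate"),
    ((0.9, 0.8), "fraud"),
    ((0.2, 0.1), "legitimate"),
]
print(nearest_neighbour(train, (0.85, 0.9)))  # closest example is labelled "fraud"
```

Real systems use far richer models, but the principle is the same: the program's behaviour is induced from a collection of examples rather than programmed case by case.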

To be useful, a machine learning program does not need to be better than a human in every case. In my work, we expect that much of the “low hanging fruit” of government use of machine learning will be as a first line of analysis or decision-making. Human judgment will then be critical to interpret results, manage harder cases, or hear appeals.
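That division of labour, in which the model decides confident cases and humans take the rest, can be sketched in a few lines (the case IDs, labels, and threshold below are all hypothetical):

```python
def triage(predictions, confidence_threshold=0.9):
    """Route each case: auto-decide when the model is confident,
    otherwise queue it for human review or appeal."""
    auto, human_review = [], []
    for case_id, label, confidence in predictions:
        if confidence >= confidence_threshold:
            auto.append((case_id, label))
        else:
            human_review.append((case_id, label, confidence))
    return auto, human_review

# Hypothetical model outputs: (case id, predicted decision, confidence)
predictions = [
    ("A-101", "approve", 0.97),
    ("A-102", "deny",    0.62),   # ambiguous: goes to a caseworker
    ("A-103", "approve", 0.91),
]
auto, review = triage(predictions)
print(f"{len(auto)} auto-decided, {len(review)} sent to human review")
# -> 2 auto-decided, 1 sent to human review
```

Where the threshold sits is a policy choice, not a technical one: lowering it saves more staff time but routes more borderline cases past human judgment.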

When the work of public servants can be done in less time, a government might reduce its staff numbers, and return money saved to taxpayers — and I am sure that some governments will pursue that option. But it’s not necessarily the one I would recommend. Governments could instead choose to invest in the quality of their services. They can redirect workers’ time toward more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program….(More)”.

Algorithms show potential in measuring diagnostic errors using big data


Greg Slabodkin at Information Management: “While the problem of diagnostic errors is widespread in medicine, with an estimated 12 million Americans affected annually, a new approach to quantifying and monitoring these errors has the potential to prevent serious patient injuries, including disability or death.

“The single biggest impediment to making progress is the lack of operational measures of diagnostic errors,” says David Newman-Toker, MD, director of the Johns Hopkins Armstrong Institute Center for Diagnostic Excellence. “It’s very difficult to measure because we haven’t had the tools to look for it in a systematic way. And most of the methods that look for diagnostics errors involve training people to do labor-intensive chart reviews.”

However, a new method—called the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE)—uncovers misdiagnosis-related harms using specific algorithms and big data. The automated approach could replace labor-intensive reviews of medical records by hospital staff, which researchers contend are limited by poor clinical documentation, low reliability and inherent bias.

According to Newman-Toker, SPADE utilizes statistical analyses to identify critical patterns that measure the rate of diagnostic error by analyzing large, existing clinical and claims datasets containing hundreds of thousands of patient visits. Specifically, algorithms are leveraged to look for common symptoms prompting a physician visit and then pairing them with one or more diseases that could be misdiagnosed in those clinical contexts….(More)”.
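As a schematic sketch of that pairing logic (toy records and a made-up 30-day window; this is an illustration of the general idea, not the SPADE team's actual algorithm), one could count how often an index visit for a symptom is followed shortly afterward by a diagnosis of a related disease:

```python
from datetime import date

# Hypothetical visit records: (patient_id, visit date, diagnosis)
visits = [
    ("p1", date(2017, 3, 1), "dizziness"),
    ("p1", date(2017, 3, 9), "stroke"),     # stroke within the look-back window
    ("p2", date(2017, 3, 2), "dizziness"),
    ("p3", date(2017, 5, 1), "dizziness"),
    ("p3", date(2017, 7, 1), "stroke"),     # outside the window: not counted
]

def symptom_disease_pair_rate(visits, symptom, disease, window_days=30):
    """Share of index visits for `symptom` followed by `disease` within
    `window_days` -- a proxy signal for possible missed diagnoses."""
    index_visits = [(p, d) for p, d, dx in visits if dx == symptom]
    flagged = 0
    for patient, visit_date in index_visits:
        for p, d, dx in visits:
            if p == patient and dx == disease and 0 < (d - visit_date).days <= window_days:
                flagged += 1
                break
    return flagged / len(index_visits)

rate = symptom_disease_pair_rate(visits, "dizziness", "stroke")
print(f"flagged follow-up rate: {rate:.2f}")  # 1 of 3 index visits
```

Run over hundreds of thousands of real claims records, a rate like this can be compared across hospitals or over time without any manual chart review.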

Is Social Media Good or Bad for Democracy?


Essay by Cass R. Sunstein, as part of a series by Facebook on social media and democracy: “On balance, the question of whether social media platforms are good for democracy is easy. On balance, they are not merely good; they are terrific. For people to govern themselves, they need to have information. They also need to be able to convey it to others. Social media platforms make that tons easier.

There is a subtler point as well. When democracies are functioning properly, people’s sufferings and challenges are not entirely private matters. Social media platforms help us alert one another to a million and one different problems. In the process, the existence of social media can prod citizens to seek solutions.

Consider the remarkable finding, by the economist Amartya Sen, that in the history of the world, there has never been a famine in a system with a democratic press and free elections. A central reason is that famines are a product not only of a scarcity of food, but also of a nation’s failure to provide solutions. When the press is free, and when leaders are elected, leaders have a strong incentive to help.

Mental illness, chronic pain, loss of employment, vulnerability to crime, drugs in the family – information about all these spread via social media, and they can be reduced with sensible policies. When people can talk to each other, and disclose what they know to public officials, the whole world might change in a hurry.

But celebrations can be awfully boring, so let’s hold the applause. Are automobiles good for transportation? Absolutely, but in the United States alone, over 35,000 people died in crashes in 2016.

Social media platforms are terrific for democracy in many ways, but pretty bad in others. And they remain a work-in-progress, not only because of new entrants, but also because the not-so-new ones (including Facebook) continue to evolve. What John Dewey said about my beloved country is true for social media as well: “The United States are not yet made; they are not a finished fact to be categorically assessed.”

For social media and democracy, the equivalents of car crashes include false reports (“fake news”) and the proliferation of information cocoons — and as a result, an increase in fragmentation, polarization and extremism. If you live in an information cocoon, you will believe many things that are false, and you will fail to learn countless things that are true. That’s awful for democracy. And as we have seen, those with specific interests — including politicians and nations, such as Russia, seeking to disrupt democratic processes — can use social media to promote those interests.

This problem is linked to the phenomenon of group polarization — which takes hold when like-minded people talk to one another and end up thinking a more extreme version of what they thought before they started to talk. In fact, that’s a common outcome. At best, it’s a problem. At worst, it’s dangerous….(More)”.
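The dynamic Sunstein describes can be caricatured in a toy simulation (a deliberately simplistic model for intuition only, not anything from the essay): a group that starts with a mild shared lean drifts further in that direction with each round of discussion:

```python
import statistics

def discuss(opinions, rounds=10, shift=0.1):
    """Toy model of group polarization: each round, everyone moves a bit
    further in the direction the group already leans, clamped to [-1, 1]."""
    for _ in range(rounds):
        mean = statistics.mean(opinions)
        direction = 1 if mean > 0 else -1
        opinions = [min(1.0, max(-1.0, o + shift * direction * abs(mean)))
                    for o in opinions]
    return opinions

# Like-minded group: all mildly positive to start (opinion scale -1 to 1)
before = [0.2, 0.3, 0.4]
after = discuss(before)
print(f"mean before: {statistics.mean(before):.2f}, after: {statistics.mean(after):.2f}")
```

No member started extreme, yet the group ends up further out than any individual began: talking only to people who already agree amplifies the shared tilt.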

How the Data That Internet Companies Collect Can Be Used for the Public Good


Stefaan G. Verhulst and Andrew Young at Harvard Business Review: “…In particular, the vast streams of data generated through social media platforms, when analyzed responsibly, can offer insights into societal patterns and behaviors. Such insights are hard to generate with existing social science methods. All this information poses its own problems, of complexity and noise, of risks to privacy and security, but it also represents tremendous potential for mobilizing new forms of intelligence.

In a recent report, we examine ways to harness this potential while limiting and addressing the challenges. Developed in collaboration with Facebook, the report seeks to understand how public and private organizations can join forces to use social media data — through data collaboratives — to mitigate and perhaps solve some of our most intractable policy dilemmas.

Data Collaboratives: Public-Private Partnerships for Our Data Age 

For all of data’s potential to address public challenges, most data generated today is collected by the private sector. Typically ensconced in corporate databases, and tightly held in order to maintain competitive advantage, this data contains tremendous possible insights and avenues for policy innovation. But because the analytical expertise brought to bear on it is narrow, and limited by private ownership and access restrictions, its vast potential often goes untapped.

Data collaboratives offer a way around this limitation. They represent an emerging public-private partnership model, in which participants from different areas, including the private sector, government, and civil society, can come together to exchange data and pool analytical expertise in order to create new public value. While still an emerging practice, examples of such partnerships now exist around the world, across sectors and public policy domains….

Professionalizing the Responsible Use of Private Data for Public Good

For all its promise, the practice of data collaboratives remains ad hoc and limited. In part, this is a result of the lack of a well-defined, professionalized concept of data stewardship within corporations. Today, each attempt to establish a cross-sector partnership built on the analysis of social media data requires significant and time-consuming efforts, and businesses rarely have personnel tasked with undertaking such efforts and making relevant decisions.

As a consequence, the process of establishing data collaboratives and leveraging privately held data for evidence-based policy making and service delivery is onerous, generally one-off, not informed by best practices or any shared knowledge base, and prone to dissolution when the champions involved move on to other functions.

By establishing data stewardship as a corporate function, recognized within corporations as a valued responsibility, and by creating the methods and tools needed for responsible data-sharing, the practice of data collaboratives can become regularized, predictable, and de-risked.

If early efforts toward this end — from initiatives such as Facebook’s Data for Good efforts in the social media space and MasterCard’s Data Philanthropy approach around finance data — are meaningfully scaled and expanded, data stewards across the private sector can act as change agents responsible for determining what data to share and when, how to protect data, and how to act on insights gathered from the data.

Still, many companies (and others) continue to balk at the prospect of sharing “their” data, which is an understandable response given the reflex to guard corporate interests. But our research has indicated that many benefits can accrue not only to data recipients but also to those who share it. Data collaboration is not a zero-sum game.

With support from the Hewlett Foundation, we are embarking on a two-year project toward professionalizing data stewardship (and the use of data collaboratives) and establishing well-defined data responsibility approaches. We invite others to join us in working to transform this practice into a widespread, impactful means of leveraging private-sector assets, including social media data, to create positive public-sector outcomes around the world….(More)”.