The promise and peril of a digital ecosystem for the planet


Blog post by Jillian Campbell and David E Jensen: “A range of frontier and digital technologies have dramatically boosted the ways in which we can monitor the health of our planet and sustain our future on it (Figure 1).

Figure 1. A range of frontier and digital technologies can be combined to monitor our planet and the sustainable use of natural resources (1)

If we can leverage this technology effectively, we will be able to assess and predict risks, increase transparency and accountability in the management of natural resources and inform markets as well as consumer choice. These actions are all required if we are to stand a better chance of achieving the Sustainable Development Goals (SDGs).

However, for this vision to become a reality, public and private sector actors must take deliberate action and collaborate to build a global digital ecosystem for the planet — one consisting of data, infrastructure, rapid analytics, and real-time insights. We are now at a pivotal moment in the history of our stewardship of this planet. A “tipping point” of sorts. And to guide the political action required to counter the speed, scope and severity of the environmental and climate crises, we must acquire and deploy these data sets and frontier technologies. Doing so can fundamentally change our economic trajectory and underpin a sustainable future.

This article shows how such a global digital ecosystem for the planet can be achieved — as well as what we risk if we do not take decisive action within the next 12 months….(More)”.

Guide to Mobile Data Analytics in Refugee Scenarios


Book edited by Albert Ali Salah, Alex Pentland, Bruno Lepri and Emmanuel Letouzé: “After the start of the Syrian Civil War in 2011–12, increasing numbers of civilians sought refuge in neighboring countries. By May 2017, Turkey had received over 3 million refugees — the largest refugee population in the world. Some lived in government-run camps near the Syrian border, but many have moved to cities looking for work and better living conditions. They faced problems of integration, income, welfare, employment, health, education, language, social tension, and discrimination. In order to develop sound policies to solve these interlinked problems, a good understanding of refugee dynamics is necessary.

This book summarizes the most important findings of the Data for Refugees (D4R) Challenge, which was a non-profit project initiated to improve the conditions of the Syrian refugees in Turkey by providing a database for the scientific community to enable research on urgent problems concerning refugees. The database, based on anonymized mobile call detail records (CDRs) of phone calls and SMS messages of one million Turk Telekom customers, indicates the broad activity and mobility patterns of refugees and citizens in Turkey for the period 1 January to 31 December 2017. Over 100 teams from around the globe applied to take part in the challenge, and 61 teams were granted access to the data.
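To make concrete the kind of analysis such a CDR database enables, here is a minimal Python sketch that aggregates call records into per-user daily mobility summaries. The schema (user_id, timestamp, site_id) and the sample records are hypothetical stand-ins for illustration, not the actual D4R data format.

```python
# Minimal sketch of CDR-based mobility aggregation. The schema
# (user_id, timestamp, site_id) is a hypothetical stand-in, not
# the actual D4R data format.
import pandas as pd

def daily_mobility_summary(cdr: pd.DataFrame) -> pd.DataFrame:
    """Per user and day: number of call/SMS events and number of
    distinct cell sites seen (a rough proxy for movement)."""
    cdr = cdr.copy()
    cdr["day"] = pd.to_datetime(cdr["timestamp"]).dt.date
    return (
        cdr.groupby(["user_id", "day"])
           .agg(n_events=("site_id", "size"),
                n_sites=("site_id", "nunique"))
           .reset_index()
    )

records = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "timestamp": ["2017-03-01 08:00", "2017-03-01 12:30",
                  "2017-03-02 09:15", "2017-03-01 10:00"],
    "site_id": ["A12", "B07", "A12", "C03"],
})
print(daily_mobility_summary(records))
```

Aggregates of this sort, rather than raw individual records, are what allow research on activity and mobility patterns while limiting the exposure of any single subscriber.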

This book describes the challenge, and presents selected and revised project reports on its five major themes: unemployment, health, education, social integration, and safety. These are complemented by additional invited chapters describing related projects from international governmental organizations, the underlying technological infrastructure, and ethical aspects. The last chapter includes policy recommendations, based on the lessons learned.

The book will serve as a guideline for creating innovative data-centered collaborations between industry, academia, government, and non-profit humanitarian agencies to deal with complex problems in refugee scenarios. It illustrates the possibilities of big data analytics in coping with refugee crises and humanitarian responses, by showcasing innovative approaches drawing on multiple data sources, information visualization, pattern analysis, and statistical analysis. It will also provide researchers and students working with mobility data with excellent coverage of topics across data science, economics, sociology, urban computing, education, migration studies, and more….(More)”.

Weaponized Interdependence: How Global Economic Networks Shape State Coercion


Henry Farrell and Abraham L. Newman in International Security: “Liberals claim that globalization has led to fragmentation and decentralized networks of power relations. This does not explain how states increasingly “weaponize interdependence” by leveraging global networks of informational and financial exchange for strategic advantage. The theoretical literature on network topography shows how standard models predict that many networks grow asymmetrically so that some nodes are far more connected than others. This model nicely describes several key global economic networks, centering on the United States and a few other states. Highly asymmetric networks allow states with (1) effective jurisdiction over the central economic nodes and (2) appropriate domestic institutions and norms to weaponize these structural advantages for coercive ends. In particular, two mechanisms can be identified. First, states can employ the “panopticon effect” to gather strategically valuable information. Second, they can employ the “chokepoint effect” to deny network access to adversaries. Tests of the plausibility of these arguments across two extended case studies that provide variation both in the extent of U.S. jurisdiction and in the presence of domestic institutions—the SWIFT financial messaging system and the internet—confirm the framework’s expectations. A better understanding of the policy implications of the use and potential overuse of these tools, as well as the response strategies of targeted states, will recast scholarly debates on the relationship between economic globalization and state coercion….(More)”
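The network-topography claim at the heart of this argument is easy to reproduce in simulation: in a standard preferential-attachment model, a small set of hubs ends up carrying a disproportionate share of all connections. A minimal sketch, using the Barabási–Albert generator in networkx as a stand-in for the "standard models" the abstract invokes:

```python
# Sketch: hub concentration in a preferential-attachment network,
# illustrating the asymmetry the weaponized-interdependence
# argument rests on.
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
degrees = sorted((d for _, d in G.degree()), reverse=True)

top = degrees[:100]  # the 1% most-connected nodes
share = sum(top) / sum(degrees)
print(f"Top 1% of nodes account for {share:.0%} of all edge endpoints")
# Typically a double-digit share, versus ~1% if connections
# were spread evenly across nodes.
```

That structural concentration is what, on the authors' account, gives states with jurisdiction over the hub nodes their panopticon and chokepoint leverage.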

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter


Paper by Sarah Oates and John Gray: “Reports of Russian interference in U.S. elections have raised grave concerns about the spread of foreign disinformation on social media sites, but there is little detailed analysis that links traditional political communication theory to social media analytics. As a result, it is difficult for researchers and analysts to gauge the nature or level of the threat that is disseminated via social media. This paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018.

This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians and the relative success (or failure) of particular hashtags in spreading those messages effectively. This research uses Mentionmapp, a system co-developed by one of the authors (Gray) that employs network analytics and machine intelligence to identify the behavior of Twitter users as well as generate profiles of users via posting history and connections. This study demonstrates how political communication theory can be used to frame the study of social media; how to relate knowledge of Russian strategic priorities to labels on social media such as Twitter hashtags; and how to test this approach by examining a set of Russian propaganda narratives as they are represented by hashtags. Our research finds that some Twitter users are consistently active across multiple Kremlin-linked hashtags, suggesting that knowledge of these hashtags is an important way to identify Russian propaganda online influencers. More broadly, we suggest that Twitter dichotomies such as bot/human or troll/citizen should be used with caution and that analysis should instead address the nuances in Twitter use that reflect varying levels of engagement or even awareness in spreading foreign disinformation online….(More)”.
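The paper's finding that some users are consistently active across multiple Kremlin-linked hashtags suggests a simple screening heuristic, sketched below in Python. The flat (user, hashtag) input format is an assumption made for illustration; it is not Mentionmapp's actual data model.

```python
# Sketch: flag accounts active across several monitored hashtags.
# The (user, hashtag) pair format is an illustrative assumption,
# not Mentionmapp's actual data model.
from collections import defaultdict

TRACKED = {"#skripal", "#mh17", "#Donetsk", "#russophobia"}

def cross_hashtag_users(posts, min_tags=3):
    """Return users who posted under at least `min_tags` of the
    tracked hashtags."""
    seen = defaultdict(set)
    for user, hashtag in posts:
        if hashtag in TRACKED:
            seen[user].add(hashtag)
    return {u: tags for u, tags in seen.items() if len(tags) >= min_tags}

posts = [
    ("@acct1", "#skripal"), ("@acct1", "#mh17"), ("@acct1", "#Donetsk"),
    ("@acct2", "#skripal"), ("@acct3", "#mh17"), ("@acct3", "#russophobia"),
]
print(cross_hashtag_users(posts))  # only @acct1 crosses the threshold
```

Consistent with the authors' caution about bot/human dichotomies, a heuristic like this surfaces candidates for closer qualitative analysis rather than delivering verdicts.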

Is Privacy and Personal Data Set to Become the New Intellectual Property?


Paper by Leon Trakman, Robert Walters, and Bruno Zeller: “A pressing concern today is whether the rationale underlying the protection of personal data is itself a meaningful foundation for according intellectual property (IP) rights in personal data to data subjects. In particular, are there specific technological attributes of the collection, use and processing of personal data on the Internet, and of global access to that data, that provide a strong justification for extending IP rights to data subjects? A central issue in so determining is whether data subjects need the protection of such rights in a technological revolution in which they are increasingly exposed to the use and abuse of their personal data. A further question is how IP law can provide them with the requisite protection of their private space, or whether other means of protecting personal data, such as through general contract rights, render IP protections redundant, or at least, less necessary. This paper maintains that lawmakers often fail to distinguish between general property and IP protection of personal data; that IP protection encompasses important attributes of both property and contract law; and that laws that implement IP protection in light of its sui generis attributes are more fitting means of protecting personal data than the alternatives. The paper demonstrates that providing IP rights in personal data goes some way toward strengthening data subjects’ control and protection over their personal data and strengthening data protection law more generally. It also argues for greater harmonization of IP law across jurisdictions to ensure that the protection of personal data becomes more coherent and internationally sustainable….(More)”.

Raw data won’t solve our problems — asking the right questions will


Stefaan G. Verhulst in apolitical: “If I had only one hour to save the world, I would spend fifty-five minutes defining the questions, and only five minutes finding the answers,” is a famous aphorism attributed to Albert Einstein.

Behind this quote is an important insight about human nature: Too often, we leap to answers without first pausing to examine our questions. We tout solutions without considering whether we are addressing real or relevant challenges or priorities. We advocate fixes for problems, or for aspects of society, that may not be broken at all.

This misordering of priorities is especially acute — and represents a missed opportunity — in our era of big data. Today’s data has enormous potential to solve important public challenges.

However, policymakers often fail to invest in defining the questions that matter, focusing mainly on the supply side of the data equation (“What data do we have or must have access to?”) rather than the demand side (“What is the core question and what data do we really need to answer it?” or “What data can or should we actually use to solve those problems that matter?”).

As such, data initiatives often provide marginal insights while at the same time generating unnecessary privacy risks, by accessing and exploring data that may not in fact be needed to address the root of our most important societal problems.

A new science of questions

So what are the truly vexing questions that deserve attention and investment today? Toward what end should we strategically seek to leverage data and AI?

The truth is that policymakers and other stakeholders currently don’t have a good way of defining questions or identifying priorities, nor a clear framework to help us leverage the potential of data and data science toward the public good.

This is a situation we seek to remedy at The GovLab, an action research center based at New York University.

Our most recent project, the 100 Questions Initiative, seeks to begin developing a new science and practice of questions — one that identifies the most urgent questions in a participatory manner. Launched last month, the project aims to develop a process that takes advantage of distributed and diverse expertise on a range of given topics or domains so as to identify and prioritize those questions that are high impact, novel and feasible.

Because we live in an age of data and much of our work focuses on the promises and perils of data, we seek to identify the 100 most pressing problems confronting the world that could be addressed by greater use of existing, often inaccessible, datasets through data collaboratives – new forms of cross-disciplinary collaboration beyond public-private partnerships focused on leveraging data for good….(More)”.
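By way of illustration only, the prioritization step such a process implies might look like the toy scoring sketch below. The three criteria come from the project description (impact, novelty, feasibility), but the weights and scores are invented assumptions, not the 100 Questions Initiative's actual methodology.

```python
# Toy sketch of ranking candidate questions on the three criteria
# the initiative names. Weights and scores are invented assumptions,
# not the 100 Questions Initiative's actual methodology.
WEIGHTS = {"impact": 0.5, "novelty": 0.25, "feasibility": 0.25}

candidates = {
    "Which regions face the highest climate-migration risk?":
        {"impact": 9, "novelty": 6, "feasibility": 7},
    "How does air quality affect school attendance?":
        {"impact": 7, "novelty": 8, "feasibility": 8},
}

def priority(scores):
    """Weighted sum of criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for question, scores in sorted(candidates.items(),
                               key=lambda kv: priority(kv[1]),
                               reverse=True):
    print(f"{priority(scores):.2f}  {question}")
```

In the initiative itself, of course, the judgments come from distributed domain experts rather than fixed weights; the sketch only shows where such judgments would plug in.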

Could footnotes be the key to winning the disinformation wars?


Karin Wulf at the Washington Post: “We are at a distinctive point in the relationship between information and democracy: As the volume of information dissemination has grown, so too have attempts by individuals and groups to weaponize disinformation for commercial and political purposes. This has contributed to fragmentation, political polarization, cynicism, and distrust in institutions and expertise, as a recent Pew Research Center report found. So what is the solution?

Footnotes.

Outside of academics and lawyers, few people may think about footnotes once they leave school. Indeed, there is a hackneyed caricature about footnotes as pedantry, the purview of tweedy scholars blinking as we emerge from fluorescent-lit libraries into the sun — not the concern of regular folks. A recent essay in the Economist even laid some of Britain’s recent woes at the feet of historians who spend too much time “fiddling with footnotes.”

But nothing could be further from the truth. More than ever, we need what this tool provides: accountability and transparency. “Fiddling with footnotes” is the kind of hygienic practice that our era of information pollution needs — and needs to be shared as widely as possible. Footnotes are for everyone.

Though they began as an elite practice, footnotes became aligned historically with modern democracy itself. Citation is rooted in the 17th-century emergence of enlightenment science, which asked for evidence rather than faith as the key to supporting a conclusion. It was an era when scientific empiricism threatened the authority of government and religious institutions, and newly developing institutional science publications, such as the Philosophical Transactions of the Royal Society, began to use citations for evidence and reference. In one of Isaac Newton’s contributions to the journal in 1673, a reply to queries about his work on light and the color spectrum, he used citations to his initial publication on the subject (“see no. 80. Page 3075”).

By the 18th century, and with more agile printing, the majority of scientific publications included citations, and the bottom of the page was emerging as the preferred placement. Where scientific scholarship traveled, humanists were not far behind. The disdain of French philosopher and mathematician René Descartes for any discipline without rigorous methods was part of the prompt for historians to embrace citations….(More)”.

Misinformation Has Created a New World Disorder


Claire Wardle at Scientific American: “…Online misinformation has been around since the mid-1990s. But in 2016 several events made it broadly clear that darker forces had emerged: automation, microtargeting and coordination were fueling information campaigns designed to manipulate public opinion at scale. Journalists in the Philippines started raising flags as Rodrigo Duterte rose to power, buoyed by intensive Facebook activity. This was followed by unexpected results in the Brexit referendum in June and then the U.S. presidential election in November—all of which sparked researchers to systematically investigate the ways in which information was being used as a weapon.

During the past three years the discussion around the causes of our polluted information ecosystem has focused almost entirely on actions taken (or not taken) by the technology companies. But this fixation is too simplistic. A complex web of societal shifts is making people more susceptible to misinformation and conspiracy. Trust in institutions is falling because of political and economic upheaval, most notably through ever widening income inequality. The effects of climate change are becoming more pronounced. Global migration trends spark concern that communities will change irrevocably. The rise of automation makes people fear for their jobs and their privacy.

Bad actors who want to deepen existing tensions understand these societal trends, designing content that they hope will so anger or excite targeted users that the audience will become the messenger. The goal is that users will use their own social capital to reinforce and give credibility to that original message.

Most of this content is designed not to persuade people in any particular direction but to cause confusion, to overwhelm and to undermine trust in democratic institutions from the electoral system to journalism. And although much is being made about preparing the U.S. electorate for the 2020 election, misleading and conspiratorial content did not begin with the 2016 presidential race, and it will not end after this one. As tools designed to manipulate and amplify content become cheaper and more accessible, it will be even easier to weaponize users as unwitting agents of disinformation….(More)”.


Fostering an Enabling Policy and Regulatory Environment in APEC for Data-Utilizing Businesses


APEC: “The objectives of this study are to better understand: 1) how firms from different sectors use data in their business models; and, given the significant increase in data-related policies and regulations enacted by governments across the world, 2) how such policies and regulations are affecting their use of data and hence their business models. The study also tries 3) to identify some of the middle-ground approaches that would enable governments to achieve public policy objectives, such as data security and privacy, while at the same time promoting the growth of data-utilizing businesses. Thirty-nine firms from 12 economies participated in this project, drawn from a diverse group of industries, including aviation, logistics, shipping, payment services, encryption services, and manufacturing. The synthesis report can be found in Chapter 1, while the case study chapters can be found in Chapters 2 to 10….(More)”.

Governance sinkholes


Blog post by Geoff Mulgan: “Governance sinkholes appear when shifts in technology, society and the economy throw up the need for new arrangements. Each industrial revolution has created many governance sinkholes – and prompted furious innovation to fill them. The fourth industrial revolution will be no different. But most governments are too distracted to think about what to do to fill these holes, let alone to act. This blog sets out my diagnosis – and where I think the most work is needed to design new institutions….

It’s not too hard to get a map of the fissures and gaps – and to see where governance is needed but is missing. There are all too many of these now.

Here are a few examples. One is long-term care, currently missing adequate financing, regulation, information and navigation tools, despite its huge and growing significance. The obvious contrast is with acute healthcare, which, for all its problems, is rich in institutions and governance.

A second example is lifelong learning and training. Again, there is a conspicuous absence of effective institutions to provide funding, navigation, policy and problem solving, and again the contrast with the institution-rich fields of primary, secondary and tertiary education is striking. The position on welfare is not so different; the same goes for the absence of institutions fit for purpose to support people in precarious work.

I’m particularly interested in another kind of sinkhole: the absence of the right institutions to handle data and knowledge – at global, national and local levels – now that these dominate the economy, and much of daily life. In field after field, there are huge potential benefits to linking data sets and connecting artificial and human intelligence to spot patterns or prevent problems. But we lack any institutions with either the skills or the authority to do this well, and in particular to think through the trade-offs between the potential benefits and the potential risks….(More)”.