AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. In the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Our Infant Information Revolution


Joseph Nye at Project Syndicate: “…When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.

During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut American credibility and reduce its soft power.

The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is interesting to note that polls and the Portland index of the Soft Power 30 show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.

Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated….(More)”.

On Preferring A to B, While Also Preferring B to A


Paper by Cass R. Sunstein: “In important contexts, people prefer option A to option B when they evaluate the two separately, but prefer option B to option A when they evaluate the two jointly. In consumer behavior, politics, and law, such preference reversals present serious puzzles about rationality and behavioral biases.

They are often a product of the pervasive problem of “evaluability.” Some important characteristics of options are difficult or impossible to assess in separate evaluation, and hence choosers disregard or downplay them; those characteristics are much easier to assess in joint evaluation, where they might be decisive. But in joint evaluation, certain characteristics of options may receive excessive weight, because they do not much affect people’s actual experience or because the particular contrast between joint options distorts people’s judgments. In joint as well as separate evaluation, people are subject to manipulation, though for different reasons.

It follows that neither mode of evaluation is reliable. The appropriate approach will vary depending on the goal of the task – increasing consumer welfare, preventing discrimination, achieving optimal deterrence, or something else. Under appropriate circumstances, global evaluation would be much better, but it is often not feasible. These conclusions bear on preference reversals in law and policy, where joint evaluation is often better, but where separate evaluation might ensure that certain characteristics or features of situations do not receive excessive weight…(More)”.

Technology and satellite companies open up a world of data


Gabriel Popkin at Nature: “In the past few years, technology and satellite companies’ offerings to scientists have increased dramatically. Thousands of researchers now use high-resolution data from commercial satellites for their work. Thousands more use cloud-computing resources provided by big Internet companies to crunch data sets that would overwhelm most university computing clusters. Researchers use the new capabilities to track and visualize forest and coral-reef loss; monitor farm crops to boost yields; and predict glacier melt and disease outbreaks. Often, they are analysing much larger areas than has ever been possible — sometimes even encompassing the entire globe. Such studies are landing in leading journals and grabbing media attention.

Commercial data and cloud computing are not panaceas for all research questions. NASA and the European Space Agency carefully calibrate the spectral quality of their imagers and test them with particular types of scientific analysis in mind, whereas the aim of many commercial satellites is to take good-quality, high-resolution pictures for governments and private customers. And no company can compete with Landsat’s free, publicly available, 46-year archive of images of Earth’s surface. For commercial data, scientists must often request images of specific regions taken at specific times, and agree not to publish raw data. Some companies reserve cloud-computing assets for researchers with aligned interests such as artificial intelligence or geospatial-data analysis. And although companies publicly make some funding and other resources available for scientists, getting access to commercial data and resources often requires personal connections. Still, by choosing the right data sources and partners, scientists can explore new approaches to research problems.

Mapping poverty

Joshua Blumenstock, an information scientist at the University of California, Berkeley (UCB), is always on the hunt for data he can use to map wealth and poverty, especially in countries that do not conduct regular censuses. “If you’re trying to design policy or do anything to improve living conditions, you generally need data to figure out where to go, to figure out who to help, even to figure out if the things you’re doing are making a difference.”

In a 2015 study, he used records from mobile-phone companies to map Rwanda’s wealth distribution (J. Blumenstock et al. Science 350, 1073–1076; 2015). But to track wealth distribution worldwide, patching together data-sharing agreements with hundreds of these companies would have been impractical. Another potential information source — high-resolution commercial satellite imagery — could have cost him upwards of US$10,000 for data from just one country….

Use of commercial images can also be restricted. Scientists are free to share or publish most government data or data they have collected themselves. But they are typically limited to publishing only the results of studies of commercial data, and at most a limited number of illustrative images.

Many researchers are moving towards a hybrid approach, combining public and commercial data, and running analyses locally or in the cloud, depending on need. Weiss still uses his tried-and-tested ArcGIS software from Esri for studies of small regions, and jumps to Earth Engine for global analyses.

The new offerings herald a shift from an era when scientists had to spend much of their time gathering and preparing data to one in which they’re thinking about how to use them. “Data isn’t an issue any more,” says Roy. “The next generation is going to be about what kinds of questions are we going to be able to ask?”…(More)”.

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts driven primarily by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e., a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies such as 3D printing and CNC machines, or with traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local,” following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. Design global, manufacture local (DGML) demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Local manufacturing also makes maintenance easier and encourages manufacturers to design products to last as long as possible. Last, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay….(More)”

Crowdsourcing as a Platform for Digital Labor Unions


Paper by Payal Arora and Linnea Holter Thompson in the International Journal of Communication: “Global complex supply chains have made it difficult to know the realities in factories. This structure obfuscates the networks, channels, and flows of communication between employers, workers, nongovernmental organizations and other vested intermediaries, creating a lack of transparency. Factories operate far from the brands themselves, often in developing countries where labor is cheap and regulations are weak. However, the emergence of social media and mobile technology has drawn the world closer together. Specifically, crowdsourcing is being used in an innovative way to gather feedback from outsourced laborers with access to digital platforms. This article examines how crowdsourcing platforms are used for both gathering and sharing information to foster accountability. We critically assess how these tools enable dialogue between brands and factory workers, making workers part of the greater conversation. We argue that although there are challenges in designing and implementing these new monitoring systems, these platforms can pave the path for new forms of unionization and corporate social responsibility beyond just rebranding…(More)”

Big Data against Child Obesity


European Commission: “Childhood and adolescent obesity is a major global and European public health problem. Currently, public actions are detached from local needs, relying mostly on indiscriminate blanket policies and single-element strategies, which limits their efficacy and effectiveness. The need for community-targeted actions has long been obvious, but the lack of a monitoring and evaluation framework and the methodological inability to objectively quantify local community characteristics, in a reasonable timeframe, have hindered such actions.

[Figure: BigO policy planner]

Big Data based Platform

Technological achievements in mobile and wearable electronics and Big Data infrastructures allow the engagement of European citizens in the data-collection process, enabling us to reshape policies at a regional, national and European level. In BigO, this will be facilitated through the development of a platform that quantifies behavioural community patterns through Big Data provided by wearables and eHealth devices.

Estimate child obesity through community data

BigO has set detailed scientific, technological, validation and business objectives in order to be able to build a system that collects Big Data on children’s behaviour and helps plan health policies against obesity. In addition, during the project, BigO will reach out to more than 25,000 school and age-matched obese children and adolescents as sources for community data. Comprehensive models of the obesity prevalence dependence matrix will be created, allowing data-driven predictions about the effectiveness of specific policies on a community and real-time monitoring of the population response, supported by powerful real-time data visualisations….(More)

Data Governance in the Digital Age


Centre for International Governance Innovation: “Data is being hailed as “the new oil.” The analogy seems appropriate given the growing amount of data being collected, and the advances made in its gathering, storage, manipulation and use for commercial, social and political purposes.

Big data and its application in artificial intelligence, for example, promises to transform the way we live and work — and will generate considerable wealth in the process. But data’s transformative nature also raises important questions around how the benefits are shared, privacy, public security, openness and democracy, and the institutions that will govern the data revolution.

The delicate interplay between these considerations means that they have to be treated jointly, and at every level of the governance process, from local communities to the international arena. This series of essays by leading scholars and practitioners, which is also published as a special report, will explore topics including the rationale for a data strategy, the role of a data strategy for Canadian industries, and policy considerations for domestic and international data governance…

Rationale of a Data Strategy

The Role of a Data Strategy for Canadian Industries

Balancing Privacy and Commercial Values

Domestic Policy for Data Governance

International Policy Considerations

Epilogue

Ten Reasons Not to Measure Impact—and What to Do Instead


Essay by Mary Kay Gugerty & Dean Karlan in the Stanford Social Innovation Review: “Good impact evaluations—those that answer policy-relevant questions with rigor—have improved development knowledge, policy, and practice. For example, the NGO Living Goods conducted a rigorous evaluation to measure the impact of its community health model based on door-to-door sales and promotions. The evidence of impact was strong: Their model generated a 27-percent reduction in child mortality. This evidence subsequently persuaded policy makers, replication partners, and major funders to support the rapid expansion of Living Goods’ reach to five million people. Meanwhile, rigorous evidence continues to further validate the model and help to make it work even better.

Of course, not all rigorous research offers such quick and rosy results. Consider the many studies required to discover a successful drug and the lengthy process of seeking regulatory approval and adoption by the healthcare system. The same holds true for fighting poverty: Innovations for Poverty Action (IPA), a research and policy nonprofit that promotes impact evaluations for finding solutions to global poverty, has conducted more than 650 randomized controlled trials (RCTs) since its inception in 2002. These studies have sometimes provided evidence about how best to use scarce resources (e.g., give away bed nets for free to fight malaria), as well as how to avoid wasting them (e.g., don’t expand traditional microcredit). But the vast majority of studies did not paint a clear picture that led to immediate policy changes. Developing an evidence base is more like building a mosaic: Each individual piece does not make the picture, but bit by bit a picture becomes clearer and clearer.

How do these investments in evidence pay off? IPA estimated the benefits of its research by looking at its return on investment—the ratio of the benefit from the scale-up of the demonstrated large-scale successes divided by the total costs since IPA’s founding. The ratio was 74x—a huge result. But this is far from a precise measure of impact, since IPA cannot establish what would have happened had IPA never existed. (Yes, IPA recognizes the irony of advocating for RCTs while being unable to subject its own operations to that standard. Yet IPA’s approach is intellectually consistent: Many questions and circumstances do not call for RCTs.)

Even so, a simple thought exercise helps to demonstrate the potential payoff. IPA never works alone—all evaluations and policy engagements are conducted in partnership with academics and implementing organizations, and increasingly with governments. Moving from an idea to the research phase to policy takes multiple steps and actors, often over many years. But even if IPA deserves only 10 percent of the credit for the policy changes behind the benefits calculated above, the ratio of benefits to costs is still 7.4x. That is a solid return on investment.
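As a quick check on that arithmetic, here is a minimal sketch of the back-of-the-envelope calculation; the figures (74x, 10 percent of the credit) come from the essay, while the variable names are ours:

```python
# Back-of-the-envelope return-on-investment calculation from the essay.
# Ratio of benefits (from scaled-up, demonstrated successes) to total costs.
benefit_cost_ratio = 74.0  # IPA's estimated ratio, i.e. 74x

# Conservative attribution: suppose IPA deserves only 10 percent of the
# credit for the policy changes behind those benefits.
credit_share = 0.10

attributed_ratio = benefit_cost_ratio * credit_share
print(f"Attributed benefit-cost ratio: {attributed_ratio:.1f}x")  # prints 7.4x
```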

Despite the demonstrated value of high-quality impact evaluations, a great deal of money and time has been wasted on poorly designed, poorly implemented, and poorly conceived impact evaluations. Perhaps some studies had too small a sample or paid insufficient attention to establishing causality and quality data, and hence any results should be ignored; others perhaps failed to engage stakeholders appropriately, and as a consequence useful results were never put to use.

The push for more and more impact measurement can not only lead to poor studies and wasted money, but also distract and take resources from collecting data that can actually help improve the performance of an effort. To address these difficulties, we wrote a book, The Goldilocks Challenge, to help guide organizations in designing “right-fit” evidence strategies. The struggle to find the right fit in evidence resembles the predicament that Goldilocks faces in the classic children’s fable. Goldilocks, lost in the forest, finds an empty house with a large number of options: chairs, bowls of porridge, and beds of all sizes. She tries each but finds that most do not suit her: The porridge is too hot or too cold, the bed too hard or too soft—she struggles to find options that are “just right.” Like Goldilocks, the social sector has to navigate many choices and challenges to build monitoring and evaluation systems that fit its needs. Some will push for more and more data; others will not push for enough….(More)”.

The 2018 Atlas of Sustainable Development Goals: an all-new visual guide to data and development


World Bank Data Team: “We’re pleased to release the 2018 Atlas of Sustainable Development Goals. With over 180 maps and charts, the new publication shows the progress societies are making towards the 17 SDGs.

It’s filled with annotated data visualizations, which can be reproducibly built from source code and data. You can view the SDG Atlas online, download the PDF publication (30 MB), and access the data and source code behind the figures.

This Atlas would not be possible without the efforts of statisticians and data scientists working in national and international agencies around the world. It is produced in collaboration with professionals across the World Bank’s data and research groups, and our sectoral global practices.

Trends and analysis for the 17 SDGs

The Atlas draws on World Development Indicators, a database of over 1,400 indicators for more than 220 economies, many going back over 50 years. For example, the chapter on SDG4 includes data from the UNESCO Institute for Statistics on education and its impact around the world.
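For readers who want to pull such indicators directly, here is a minimal sketch using the World Bank’s public API; the indicator code (SP.POP.TOTL, total population) and query parameters are illustrative choices of ours, not part of the Atlas itself:

```python
import requests

# Query the public World Bank API (v2) for one World Development Indicator.
# SP.POP.TOTL (total population) is used here purely for illustration.
URL = "https://api.worldbank.org/v2/country/all/indicator/SP.POP.TOTL"
params = {"format": "json", "date": "2000:2016", "per_page": 500}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
meta, records = response.json()  # first element holds paging metadata

# Print a few sample rows: country name, year, value.
for rec in records[:5]:
    if rec["value"] is not None:
        print(rec["country"]["value"], rec["date"], rec["value"])
```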

Throughout the Atlas, data are presented by country, region and income group and often disaggregated by sex, wealth and geography.

The Atlas also explores new data from scientists and researchers where standards for measuring SDG targets are still being developed. For example, the chapter on SDG14 features research led by Global Fishing Watch, published this year in Science. Their team tracked over 70,000 industrial fishing vessels from 2012 to 2016, processing 22 billion automatic identification system messages to map and quantify fishing around the world….(More)”.