Toolkit to Help Community Leaders Drive Sustainable, Inclusive Growth


The Mastercard Center for Inclusive Growth: “… is unveiling a groundbreaking suite of tools that will provide local leaders with timely data-driven insights on the current state of and potential for inclusive growth in their communities. The announcement comes as private and public sector leaders gather in Washington for the inaugural Global Inclusive Growth Summit.

For the first time, the new Inclusive Growth Toolkit brings together a clear, simple view of social and economic growth in underserved communities across the U.S., at the census-tract level. It was created in response to growing demand from community leaders for more evidence-based insights to help them steer impact investment dollars to locally led economic development initiatives, unlock the potential of neighborhoods, and improve quality of life for all.

The initial design of the toolkit is focused on driving sustainable growth for the 37+ million people living in the 8,700+ Qualified Opportunity Zones (QOZs) throughout the United States. This comprehensive picture reveals that neighborhoods can look very different and may require different types of interventions to achieve successful and sustainable growth.

The Inclusive Growth Toolkit includes:

  • The Inclusive Growth Score – an interactive online map where users can view measures of inclusion and growth and then download a PDF Scorecard for any of the QOZs at census tract level.

  • A deep-dive analytics consultancy service that provides community leaders with customized insights to inform policy decisions, prospectus development, and impact investor discussions….(More)”.

The Economics of Artificial Intelligence


Book edited by Ajay Agrawal, Joshua Gans and Avi Goldfarb: “Advances in artificial intelligence (AI) highlight the potential of this technology to affect productivity, growth, inequality, market power, innovation, and employment. This volume seeks to set the agenda for economic research on the impact of AI.

It covers four broad themes: AI as a general purpose technology; the relationships between AI, growth, jobs, and inequality; regulatory responses to changes brought on by AI; and the effects of AI on the way economic research is conducted. It explores the economic influence of machine learning, the branch of computational statistics that has driven much of the recent excitement around AI, as well as the economic impact of robotics and automation and the potential economic consequences of a still-hypothetical artificial general intelligence. The volume provides frameworks for understanding the economic impact of AI and identifies a number of open research questions…. (More)”

Digital dystopia: how algorithms punish the poor


Ed Pilkington at The Guardian: “All around the world, from small-town Illinois in the US to Rochdale in England, from Perth, Australia, to Dumka in northern India, a revolution is under way in how governments treat the poor.

You can’t see it happening, and may have heard nothing about it. It’s being planned by engineers and coders behind closed doors, in secure government locations far from public view.

Only mathematicians and computer scientists fully understand the sea change, powered as it is by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics. But if you are one of the millions of vulnerable people at the receiving end of the radical reshaping of welfare benefits, you know it is real and that its consequences can be serious – even deadly.

The Guardian has spent the past three months investigating how billions are being poured into AI innovations that are explosively recasting how low-income people interact with the state. Together, our reporters in the US, Britain, India and Australia have explored what amounts to the birth of the digital welfare state.

Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialized and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.

At its most forbidding, Guardian reporters paint a picture of a 21st-century Dickensian dystopia that is taking shape with breakneck speed…(More)”.

The Economics of Social Data: An Introduction


Paper by Dirk Bergemann and Alessandro Bonatti: “Large internet platforms collect data from individual users in almost every interaction on the internet. Whenever an individual browses a news website, searches for a medical term or for a travel recommendation, or simply checks the weather forecast on an app, that individual generates data. A central feature of the data collected from individuals is its social aspect. Namely, the data captured from an individual user is informative not only about that specific individual, but also about other users who are, by some metric, similar to that individual. Thus, individual data is really social data. The social nature of the data generates an informational externality that we investigate in this note….(More)”.
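
The abstract describes this externality only verbally; as a purely stylized illustration (the model and notation below are assumptions for exposition, not taken from the paper), the correlation across users can be captured by a common component in each user's characteristic:

```latex
% Stylized illustration only; the notation is assumed, not from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Each user $i$'s characteristic shares a common component with other users:
\[
  \theta_i = \bar{\theta} + \varepsilon_i,
  \qquad
  s_i = \theta_i + e_i ,
\]
where $s_i$ is the signal (data) the platform observes about user $i$.
Because every $\theta_j$ also contains $\bar{\theta}$,
\[
  \operatorname{Cov}(s_i, \theta_j) = \operatorname{Var}(\bar{\theta}) > 0
  \qquad (i \neq j),
\]
so user $i$'s data is informative about user $j$ as well: sharing one's own
data changes what can be inferred about everyone similar to oneself.
\end{document}
```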

Digital Media and Wireless Communication in Developing Nations: Agriculture, Education, and the Economic Sector


Book by Megh R. Goyal and Emmanuel Eilu: “… explores how digital media and wireless communication, especially mobile phones and social media platforms, offer concrete opportunities for developing countries to transform different sectors of their economies. The volume focuses on the agricultural, economic, and education sectors. The chapter authors, mostly from Africa and India, provide a wealth of information on recent innovations, the opportunities they provide, challenges faced, and the direction of future research in digital media and wireless communication to leverage transformation in developing countries….(More)”.

Data-Sharing in IoT Ecosystems From a Competition Law Perspective: The Example of Connected Cars


Paper by Wolfgang Kerber: “…analyses whether competition law can help to solve problems of access to data and interoperability in IoT ecosystems, where often one firm has exclusive control of the data produced by a smart device (and of the technical access to this device). Such a gatekeeper position can lead to the elimination of competition for aftermarket and other complementary services in such IoT ecosystems. This problem is analysed both from an economic and a legal perspective, and also generally for IoT ecosystems as well as for the much-discussed problems of “access to in-vehicle data and resources” in connected cars, where the “extended vehicle” concept of the car manufacturers leads to such positions of exclusive control. The paper analyses, in particular, the competition rules about abusive behavior of dominant firms (Art. 102 TFEU) and of firms with “relative market power” (§ 20 (1) GWB) in German competition law. These provisions might offer (if appropriately applied and amended) at least some solutions for these data access problems. Competition law, however, might not be sufficient for dealing with all or most of these problems, i.e., additional solutions might also be needed (data portability, direct data (access) rights, or sector-specific regulation)….(More)”.

JPMorgan Creates ‘Volfefe’ Index to Track Trump Tweet Impact


Tracy Alloway at Bloomberg: “Two of the largest Wall Street banks are trying to measure the market impact of Donald Trump’s tweets.

Analysts at JPMorgan Chase & Co. have created an index to quantify what they say are the growing effects on U.S. bond yields. Citigroup Inc.’s foreign exchange team, meanwhile, report that these micro-blogging missives are also becoming “increasingly relevant” to foreign-exchange moves.

JPMorgan’s “Volfefe Index,” named after Trump’s mysterious covfefe tweet from May 2017, suggests that the president’s electronic musings are having a statistically significant impact on Treasury yields. The number of market-moving Trump tweets has ballooned in the past month, with those including words such as “China,” “billion,” “products,” “Democrats” and “great” most likely to affect prices, the analysts found….

JPMorgan’s analysis looked at Treasury yields in the five minutes after a Trump tweet, and the index shows the rolling one-month probability that each missive is market-moving.
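
The excerpt gives only a high-level description of the mechanics, so here is a minimal, hedged sketch of how such a rolling “probability that a tweet is market-moving” could be computed in Python; the function name, the inputs, and the one-basis-point threshold are illustrative assumptions, not details of JPMorgan’s actual model.

```python
# Illustrative sketch only -- not JPMorgan's published methodology.
# Assumed inputs: `tweet_times` (iterable of timestamps) and `yields`,
# a minutely pandas Series of a Treasury yield in percent, indexed by time.
import pandas as pd


def volfefe_style_index(tweet_times, yields, move_bp=1.0, window="30D"):
    """Rolling share of tweets followed by a 'market-moving' yield change.

    A tweet counts as market-moving here if the yield changes by more than
    `move_bp` basis points in the five minutes after it -- an assumed
    threshold chosen purely for illustration.
    """
    yields = yields.sort_index()
    flags = {}
    for t in pd.to_datetime(list(tweet_times)):
        after = yields.loc[t : t + pd.Timedelta(minutes=5)]
        if len(after) < 2:
            continue  # no market data immediately after the tweet
        observed_bp = abs(after.iloc[-1] - after.iloc[0]) * 100  # percent -> bp
        flags[t] = float(observed_bp > move_bp)
    # Rolling probability that a tweet moved the market over the trailing window
    return pd.Series(flags).sort_index().rolling(window).mean()
```

Calling `volfefe_style_index(tweets, ten_year_yields)` on such hypothetical inputs would return a time series whose value at any date is the trailing 30-day share of tweets followed by a move larger than the threshold.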

They found that the Volfefe Index can account for a “measurable fraction” of moves in implied volatility, seen in interest rate derivatives known as swaptions. That’s particularly apparent at the shorter end of the curve, with two- and five-year rates more impacted than 10-year securities.

Meanwhile, Citi’s work shows that the president’s tweets are generally followed by a stretch of higher volatility across global currency markets. And there’s little sign traders are growing numb to these messages….(More)”

Fostering an Enabling Policy and Regulatory Environment in APEC for Data-Utilizing Businesses


APEC: “The objectives of this study are to better understand: 1) how firms from different sectors use data in their business models; and, considering the significant increase in data-related policies and regulations enacted by governments across the world, 2) how such policies and regulations are affecting their use of data and hence their business models. The study also tries: 3) to identify some of the middle-ground approaches that would enable governments to achieve public policy objectives, such as data security and privacy, and at the same time promote the growth of data-utilizing businesses. Thirty-nine firms from 12 economies participated in this project; they come from a diverse group of industries, including aviation, logistics, shipping, payment services, encryption services, and manufacturing. The synthesis report can be found in Chapter 1, while the case study chapters can be found in Chapters 2 to 10….(More)”.

Sharing Private Data for Public Good


Stefaan G. Verhulst at Project Syndicate: “After Hurricane Katrina struck New Orleans in 2005, the direct-mail marketing company Valassis shared its database with emergency agencies and volunteers to help improve aid delivery. In Santiago, Chile, analysts from Universidad del Desarrollo, ISI Foundation, UNICEF, and the GovLab collaborated with Telefónica, the city’s largest mobile operator, to study gender-based mobility patterns in order to design a more equitable transportation policy. And as part of the Yale University Open Data Access project, health-care companies Johnson & Johnson, Medtronic, and SI-BONE give researchers access to previously walled-off data from 333 clinical trials, opening the door to possible new innovations in medicine.

These are just three examples of “data collaboratives,” an emerging form of partnership in which participants exchange data for the public good. Such tie-ups typically involve public bodies using data from corporations and other private-sector entities to benefit society. But data collaboratives can help companies, too – pharmaceutical firms share data on biomarkers to accelerate their own drug-research efforts, for example. Data-sharing initiatives also have huge potential to improve artificial intelligence (AI). But they must be designed responsibly and take data-privacy concerns into account.

Understanding the societal and business case for data collaboratives, as well as the forms they can take, is critical to gaining a deeper appreciation of the potential and limitations of such ventures. The GovLab has identified over 150 data collaboratives spanning continents and sectors; they include companies such as Air France, Zillow, and Facebook. Our research suggests that such partnerships can create value in three main ways….(More)”.

Companies Collect a Lot of Data, But How Much Do They Actually Use?


Article by Priceonomics Data Studio: “For all the talk of how data is the new oil and the most valuable resource of any enterprise, there is a deep dark secret companies are reluctant to share — most of the data collected by businesses simply goes unused.

This unknown and unused data, known as dark data, comprises more than half the data collected by companies. Given that some estimates indicate that 7.5 septillion (7,700,000,000,000,000,000,000) gigabytes of data are generated every single day, not using most of it is a considerable issue.

In this article, we’ll look at this dark data: just how much of it is created by companies, why this data isn’t being analyzed, and what the costs and implications are of companies not using the majority of the data they collect.

Before diving into the analysis, it’s worth spending a moment clarifying what we mean by the term “dark data.” Gartner defines dark data as:

“The information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing).”

To learn more about this phenomenon, Splunk commissioned a global survey of 1,300+ business leaders to better understand how much data they collect, and how much is dark. Respondents were from IT and business roles across various industries, and were located in Australia, China, France, Germany, Japan, the United States, and the United Kingdom. For the report, Splunk defines dark data as: “all the unknown and untapped data across an organization, generated by systems, devices and interactions.”

While the cost of storing data has decreased over time, the cost of saving septillions of gigabytes of wasted data is still significant. What’s more, during this time the strategic importance of data has increased as companies have found more and more uses for it. Given the cost of storage and the value of data, why does so much of it go unused?

The following chart shows the reasons why dark data isn’t currently being harnessed:

By a large margin, the number one reason given for not using dark data is that companies lack a tool to capture or analyze the data. Companies accumulate data from server logs, GPS networks, security tools, call records, web traffic and more. Companies track everything from digital transactions to the temperature of their server rooms to the contents of retail shelves. Most of this data lies in separate systems, is unstructured, and cannot be connected or analyzed.

Second, the data captured just isn’t good enough. You might have important customer information about a transaction, but it’s missing location or other important metadata because that information sits somewhere else or was never captured in a usable format.

Additionally, dark data exists because there is simply too much data out there, and a lot of it is unstructured. The larger the dataset (or the less structured it is), the more sophisticated the tool required for analysis. These kinds of datasets also often require analysis by individuals with significant data science expertise, who are often in short supply.

The implications of the prevalence of dark data are vast. As a result of the data deluge, companies often don’t know where all the sensitive data is stored and can’t be confident they are complying with consumer data protection measures like GDPR. …(More)”.