Ed Pilkington at The Guardian: “All around the world, from small-town Illinois in the US to Rochdale in England, from Perth, Australia, to Dumka in northern India, a revolution is under way in how governments treat the poor.
You can’t see it happening, and may have heard nothing about it. It’s being planned by engineers and coders behind closed doors, in secure government locations far from public view.
Only mathematicians and computer scientists fully understand the sea change, powered as it is by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics. But if you are one of the millions of vulnerable people at the receiving end of the radical reshaping of welfare benefits, you know it is real and that its consequences can be serious – even deadly.
The Guardian has spent the past three months investigating how billions are being poured into AI innovations that are explosively recasting how low-income people interact with the state. Together, our reporters in the US, Britain, India and Australia have explored what amounts to the birth of the digital welfare state.
Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialized and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.
At its most forbidding, Guardian reporters paint a picture of a 21st-century Dickensian dystopia that is taking shape with breakneck speed…(More)”.
Paper by Dirk Bergemann and Alessandro Bonatti: “Large internet platforms collect data from individual users in almost every interaction on the internet. Whenever an individual browses a news website, searches for a medical term or for a travel recommendation, or simply checks the weather forecast on an app, that individual generates data. A central feature of the data collected from individuals is its social aspect. Namely, the data captured from an individual user is informative not only about that specific individual, but also about other users who are similar to the individual in some metric. Thus, individual data is really social data. The social nature of the data generates an informational externality that we investigate in this note….(More)”.
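The paper formalizes this carefully; as a minimal sketch of why correlated data creates an externality (our illustration, with assumed notation, not necessarily the authors’ exact model):

```latex
% Stylized signal structure (illustrative): \theta is a common component
% shared across users; \theta_i and \varepsilon_i are independent
% idiosyncratic terms.
w_i = \theta + \theta_i, \qquad s_i = w_i + \varepsilon_i
% For any other user j \neq i, user i's signal is informative about w_j:
\operatorname{Cov}(s_i, w_j) = \operatorname{Var}(\theta) > 0
```

Because the covariance is strictly positive whenever types share a common component, data shared by user i reveals information about user j, who never agreed to share anything: the informational externality in a nutshell.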
Book by Megh R. Goyal and Emmanuel Eilu: “… explores how digital media and wireless communication, especially mobile phones and social media platforms, offer concrete opportunities for developing countries to transform different sectors of their economies. The volume focuses on the agricultural, economic, and education sectors. The chapter authors, mostly from Africa and India, provide a wealth of information on recent innovations, the opportunities they provide, challenges faced, and the direction of future research in digital media and wireless communication to leverage transformation in developing countries….(More)”.
Paper by Wolfgang Kerber: “…analyses whether competition law can help to solve problems of access to data and interoperability in IoT ecosystems, where often one firm has exclusive control of the data produced by a smart device (and of the technical access to this device). Such a gatekeeper position can lead to the elimination of competition for aftermarket and other complementary services in such IoT ecosystems. This problem is analysed both from an economic and a legal perspective, both for IoT ecosystems generally and for the much-discussed problems of “access to in-vehicle data and resources” in connected cars, where the “extended vehicle” concept of the car manufacturers leads to such positions of exclusive control. The paper analyses, in particular, the competition rules about abusive behavior of dominant firms (Art. 102 TFEU) and of firms with “relative market power” (§ 20 (1) GWB) in German competition law. These provisions might offer (if appropriately applied and amended) at least some solutions for these data access problems. Competition law, however, might not be sufficient for dealing with all or most of these problems, i.e., additional solutions might also be needed (data portability, direct data (access) rights, or sector-specific regulation)….(More)”.
Tracy Alloway at Bloomberg: “Two of the largest Wall Street banks are trying to measure the market impact of Donald Trump’s tweets.
Analysts at JPMorgan Chase & Co. have created an index to quantify what they say are the growing effects on U.S. bond yields. Citigroup Inc.’s foreign-exchange team, meanwhile, reports that these micro-blogging missives are also becoming “increasingly relevant” to foreign-exchange moves.
JPMorgan’s “Volfefe Index,” named after Trump’s mysterious “covfefe” tweet from May 2017, suggests that the president’s electronic musings are having a statistically significant impact on Treasury yields. The number of market-moving Trump tweets has ballooned in the past month, with those including words such as “China,” “billion,” “products,” “Democrats” and “great” most likely to affect prices, the analysts found….
JPMorgan’s analysis looked at Treasury yields in the five minutes after a Trump tweet, and the index shows the rolling one-month probability that each missive is market-moving.
They found that the Volfefe Index can account for a “measurable fraction” of moves in implied volatility, seen in interest rate derivatives known as swaptions. That’s particularly apparent at the shorter end of the curve, with two- and five-year rates more impacted than 10-year securities.
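JPMorgan has not published the full methodology, but the construction described above is easy to approximate. Below is a minimal sketch, assuming minute-level yield data, a hypothetical 0.5-basis-point threshold for “market-moving,” and pandas as tooling; the function and variable names are ours, not JPMorgan’s.

```python
# Rough approximation of a Volfefe-style index. The 0.5bp threshold, the
# data format, and all names are assumptions; JPMorgan's actual
# methodology has not been published in detail.
import pandas as pd

def volfefe_like_index(tweet_times: pd.DatetimeIndex,
                       yields: pd.Series,
                       window: str = "30D",
                       threshold_bp: float = 0.5) -> pd.Series:
    """Rolling one-month share of tweets followed by a >threshold yield move.

    yields: minute-frequency series of (e.g., two-year) Treasury yields,
    in percent, indexed by timestamp.
    """
    flags = {}
    for ts in tweet_times:
        # Yield path in the five minutes after the tweet.
        after = yields.loc[ts: ts + pd.Timedelta(minutes=5)]
        if len(after) < 2:
            continue
        move_bp = abs(after.iloc[-1] - after.iloc[0]) * 100  # percent -> bp
        flags[ts] = float(move_bp > threshold_bp)
    # Rolling probability that a tweet is "market-moving" over the window.
    return pd.Series(flags).sort_index().rolling(window).mean()
```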
Meanwhile, Citi’s work shows that the president’s tweets are generally followed by a stretch of higher volatility across global currency markets. And there’s little sign traders are growing numb to these messages….(More)”
APEC: “The objectives of this study are to better understand: 1) how firms from different sectors use data in their business models; and, given the significant increase in data-related policies and regulations enacted by governments across the world, 2) how such policies and regulations are affecting their use of data and hence their business models. The study also tries 3) to identify some of the middle-ground approaches that would enable governments to achieve public policy objectives, such as data security and privacy, while also promoting the growth of data-utilizing businesses. 39 firms from 12 economies participated in this project, drawn from a diverse group of industries including aviation, logistics, shipping, payment services, encryption services, and manufacturing. The synthesis report can be found in Chapter 1, and the case study chapters in Chapters 2 to 10….(More)”.
Stefaan G. Verhulst at Project Syndicate: “After Hurricane Katrina struck New Orleans in 2005, the direct-mail marketing company Valassis shared its database with emergency agencies and volunteers to help improve aid delivery. In Santiago, Chile, analysts from Universidad del Desarrollo, ISI Foundation, UNICEF, and the GovLab collaborated with Telefónica, the city’s largest mobile operator, to study gender-based mobility patterns in order to design a more equitable transportation policy. And as part of the Yale University Open Data Access project, health-care companies Johnson & Johnson, Medtronic, and SI-BONE give researchers access to previously walled-off data from 333 clinical trials, opening the door to possible new innovations in medicine.
These are just three examples of “data collaboratives,” an emerging form of partnership in which participants exchange data for the public good. Such tie-ups typically involve public bodies using data from corporations and other private-sector entities to benefit society. But data collaboratives can help companies, too – pharmaceutical firms share data on biomarkers to accelerate their own drug-research efforts, for example. Data-sharing initiatives also have huge potential to improve artificial intelligence (AI). But they must be designed responsibly and take data-privacy concerns into account.
Understanding the societal and business case for data collaboratives, as well as the forms they can take, is critical to gaining a deeper appreciation of the potential and limitations of such ventures. The GovLab has identified over 150 data collaboratives spanning continents and sectors; they include companies such as Air France, Zillow, and Facebook. Our research suggests that such partnerships can create value in three main ways….(More)”.
Article by Priceonomics Data Studio: “For all the talk of how data is the new oil and the most valuable resource of any enterprise, there is a deep dark secret companies are reluctant to share — most of the data collected by businesses simply goes unused.
This unknown and unused data, known as dark data, comprises more than half the data collected by companies. Given that some estimates indicate that 7.5 septillion (7,500,000,000,000,000,000,000,000) gigabytes of data are generated every single day, not using most of it is a considerable issue.
In this article, we’ll look at this dark data: just how much of it companies create, why it isn’t being analyzed, and what the costs and implications are of companies not using the majority of the data they collect.
Before diving into the analysis, it’s worth spending a moment clarifying what we mean by the term “dark data.” Gartner defines dark data as:
“The information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing).”
To learn more about this phenomenon, Splunk commissioned a global survey of more than 1,300 business leaders to better understand how much data they collect, and how much of it is dark. Respondents were from IT and business roles across various industries, and were located in Australia, China, France, Germany, Japan, the United States, and the United Kingdom. For the report, Splunk defines dark data as “all the unknown and untapped data across an organization, generated by systems, devices and interactions.”
While the cost of storing data has decreased over time, the cost of saving septillions of gigabytes of wasted data is still significant. What’s more, during this time the strategic importance of data has increased as companies have found more and more uses for it. Given the cost of storage and the value of data, why does so much of it go unused?
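To see why storage of unused data still matters at these volumes, a back-of-envelope sketch; the 10-petabyte figure is purely illustrative, and the unit price assumes roughly AWS S3’s standard list rate:

```python
# Illustrative only: 10 PB of retained-but-unused data at ~$0.023/GB-month
# (approximately AWS S3 standard-tier list price; actual costs vary).
GB_PER_PB = 1_000_000
COST_PER_GB_MONTH = 0.023  # USD

dark_data_gb = 10 * GB_PER_PB          # hypothetical unused data held by a firm
monthly_cost = dark_data_gb * COST_PER_GB_MONTH
print(f"${monthly_cost:,.0f} per month")      # $230,000 per month
print(f"${monthly_cost * 12:,.0f} per year")  # $2,760,000 per year
```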
The following chart shows the reasons why dark data isn’t currently being harnessed:
By a large margin, the number one reason given for not using dark data is that companies lack a tool to capture or analyze the data. Companies accumulate data from server logs, GPS networks, security tools, call records, web traffic and more. Companies track everything from digital transactions to the temperature of their server rooms to the contents of retail shelves. Most of this data lies in separate systems, is unstructured, and cannot be connected or analyzed.
Second, the data captured often just isn’t good enough. You might have important customer information about a transaction, but it’s missing location or other important metadata because that information sits somewhere else or was never captured in a usable format.
Additionally, dark data exists because there is simply too much data out there, and a lot of it is unstructured. The larger the dataset (or the less structured it is), the more sophisticated the tool required for analysis. These kinds of datasets also often require analysis by individuals with significant data science expertise, who are often in short supply.
The implications of the prevalence of dark data are vast. As a result of the data deluge, companies often don’t know where all their sensitive data is stored and can’t be confident they are complying with consumer data protection measures like GDPR. …(More)”.
Proceedings edited by Alessandra Lazazzara, Francesca Ricciardi and Stefano Za: “The recent surge of interest in digital ecosystems is not only transforming the business landscape, but also poses several human and organizational challenges. Due to the pervasive effects of the transformation on firms and societies alike, both scholars and practitioners are interested in understanding the key mechanisms behind digital ecosystems, their emergence and evolution. In order to disentangle such factors, this book presents a collection of research papers focusing on the relationship between technologies (e.g. digital platforms, AI, infrastructure) and behaviours (e.g. digital learning, knowledge sharing, decision-making). Moreover, it provides critical insights into how digital ecosystems can shape value creation and benefit various stakeholders. The plurality of perspectives offered makes the book particularly relevant for users, companies, scientists and governments. The content is based on a selection of the best papers – original double-blind peer-reviewed contributions – presented at the annual conference of the Italian chapter of the AIS, which took place in Pavia, Italy in October 2018….(More)”.
Paper by Jaehyuk Park et al: “…One of the most popular concepts for policy makers and business economists seeking to understand the structure of the global economy is the “cluster”, the geographical agglomeration of interconnected firms such as Silicon Valley, Wall Street, and Hollywood. By studying those well-known clusters, we come to understand the advantages firms gain from participating in a geo-industrial cluster and how those advantages relate to the economic growth of a region.
However, the existing definition of the geo-industrial cluster is not systematic enough to reveal the whole picture of the global economy. Often, once defined as a group of firms in a certain area, geo-industrial clusters are treated as independent of each other. Yet just as we must consider the interaction between the accounting team and the marketing team to understand the organizational structure of a firm, the relationships among geo-industrial clusters are an essential part of the whole picture….
In this new study, my colleagues and I at Indiana University — with support from LinkedIn — have finally overcome these limitations by defining geo-industrial clusters through labor flow and constructing a global labor flow network from LinkedIn’s individual-level job history dataset. Our access to this data was made possible by our selection as one of 11 teams to participate in the LinkedIn Economic Graph Challenge.
The transitioning of workers between jobs and firms — also known as labor flow — is considered central in driving firms towards geo-industrial clusters due to knowledge spillover and labor market pooling. In response, we mapped the cluster structure of the world economy based on labor mobility between firms during the last 25 years, constructing a “labor flow network.”
To do this, we leverage LinkedIn’s data on professional demographics and employment histories from more than 500 million people between 1990 and 2015. The network, which captures approximately 130 million job transitions between more than 4 million firms, is the first-ever flow network of global labor.
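The paper’s exact pipeline isn’t reproduced here, but the core construction (aggregating job transitions into a weighted firm-to-firm graph, then detecting densely connected groups) can be sketched with off-the-shelf tools. The toy example below uses Louvain community detection from networkx as a stand-in for the authors’ own method; the firm names and transition records are invented:

```python
# Hedged sketch: the authors' exact detection algorithm may differ. Here,
# Louvain community detection (via networkx) stands in, on toy records.
import networkx as nx

# Toy job-transition records: (from_firm, to_firm), one per worker move.
transitions = [
    ("AcmeChip", "ByteWorks"), ("ByteWorks", "AcmeChip"),
    ("AcmeChip", "ByteWorks"), ("OilCo", "PetroMax"),
    ("PetroMax", "OilCo"), ("ByteWorks", "PetroMax"),
]

# Aggregate flows into a weighted graph: edge weight = number of moves
# between two firms (direction ignored for community detection).
G = nx.Graph()
for src, dst in transitions:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# Firms bound by heavy mutual labor flow fall into the same cluster.
clusters = nx.community.louvain_communities(G, weight="weight", seed=42)
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")
```

On this toy input, the heavily connected pair AcmeChip–ByteWorks separates from OilCo–PetroMax, which is the intuition behind labor-flow-defined clusters: firms that routinely exchange workers belong together.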
The resulting “map” allows us to:
identify geo-industrial clusters systematically and organically using network community detection
verify the importance of region and industry in labor mobility
compare the relative importance of the two constraints at different hierarchical levels
reveal the practical advantage of the geo-industrial cluster as a unit for future economic analyses
show more clearly which industry in which region leads the economic growth of that industry or region, and
find out which skills are emerging and which are declining, based on how well represented they are in growing and declining geo-industrial clusters…(More)”.