Valuing Financial Data
Paper by Maryam Farboodi, Dhruv Singal, Laura Veldkamp & Venky Venkateswaran: “How should an investor value financial data? The answer is complicated because it depends on the characteristics of all investors. We develop a sufficient statistics approach that uses equilibrium asset return moments to summarize all relevant information about others’ characteristics. It can value data that is public or private, about one or many assets, relevant for dividends or for sentiment. While different data types have different valuations, heterogeneous investors value the same data very differently, which suggests a low price elasticity for data demand. Heterogeneous investors’ data valuations are also affected very differentially by market illiquidity…(More)”.
Using Competitors’ Data – A Role for Competition Law? Some Thoughts on the Amazon Marketplace Case
Paper by Iga Malobecka: “Based on the Commission’s investigation into Amazon’s practices, the article analyses whether Amazon’s use of sensitive data from independent retailers who sell via its marketplace may raise anticompetitive concerns and, if so, how they should be tackled, in particular, whether competition law is the right tool to address these concerns. Amazon’s conduct, which is being investigated by the Commission, does not easily fit in with well-established theories of harm. Therefore, it is proposed to develop new theories of harm that would be specifically tailored to the challenges of digital markets and online platforms’ business models. Amazon’s conduct could be regarded as forced free-riding, predatory copying, abusive leveraging or self-preferencing. It is also argued that some of the competition concerns that may arise from the use of competitors’ data by online intermediation platforms such as Amazon could be more efficiently tackled by introducing a regulation, such as the Digital Markets Act…(More)”.
The GDPR effect: How data privacy regulation shaped firm performance globally
Paper by Carl Benedikt Frey and Giorgio Presidente: “…To measure companies’ exposure to GDPR, we exploit international input-output tables and compute the shares of output sold to EU markets for each country and 2-digit industry. We then construct a shift-share instrument interacting this share with a dummy variable taking the value one from 2018 onwards.
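For readers who want the construction made concrete, a minimal sketch of the exposure measure in Python/pandas might look like the following; the panel layout and all column names are hypothetical illustrations, not the authors’ actual code:

```python
import pandas as pd

# Hypothetical country-industry panel; the EU output shares come from
# international input-output tables (one row per country, 2-digit industry, year).
panel = pd.DataFrame({
    "country":         ["DEU", "DEU", "USA", "USA"],
    "industry_2digit": ["26",  "26",  "26",  "26"],
    "year":            [2017,  2018,  2017,  2018],
    # Share of this country-industry's output sold to EU markets.
    "eu_output_share": [0.42,  0.42,  0.08,  0.08],
})

# Dummy variable taking the value one from 2018 onwards (GDPR enforcement).
panel["post_2018"] = (panel["year"] >= 2018).astype(int)

# Shift-share instrument: pre-determined EU exposure times post-GDPR timing.
panel["gdpr_exposure"] = panel["eu_output_share"] * panel["post_2018"]
```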
Based on this approach, we find both channels discussed above to be quantitatively important, though the cost channel consistently dominates. On average, across our full sample, companies targeting EU markets saw an 8% reduction in profits and a relatively modest 2% decrease in sales (Figure 1). This suggests that earlier studies, which have focused on online outcomes or proxies of sales, provide an incomplete picture since companies have primarily been adversely affected through surging compliance costs.
While systematic data on firms’ IT purchases are hard to come by, we can explore how companies developing digital technologies have responded to GDPR. Indeed, taking a closer look at some recent patent documents, we note that these include applications for technologies like a “system and method for providing general data protection regulation (GDPR) compliant hashing in blockchain ledgers”, which guarantees a user’s right to be forgotten. Another example is a ‘Data Consent Manager’, a computer-implemented method for managing consent for sharing data….
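The excerpt does not describe how such patented systems work internally, but a common pattern behind “GDPR-compliant hashing” is to keep personal data off-chain and commit only a salted hash to the immutable ledger; the right to be forgotten is then honored by destroying the off-chain record and its salt. A minimal sketch under that assumption (all names hypothetical):

```python
import hashlib
import os

# Off-chain store: personal data lives here and can be deleted on request.
off_chain: dict = {}

def record_user_data(ledger: list, user_id: str, personal_data: str) -> None:
    """Commit only a salted hash to the append-only ledger; keep data off-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    ledger.append({"user_id": user_id, "digest": digest})
    off_chain[user_id] = (salt, personal_data)

def forget_user(user_id: str) -> None:
    """Right to be forgotten: destroy the off-chain record and its salt.

    The on-chain digest remains, but without the salt it can no longer
    be linked back to the personal data."""
    off_chain.pop(user_id, None)

ledger: list = []
record_user_data(ledger, "alice", "alice@example.com")
forget_user("alice")  # the ledger is untouched, yet the data is unrecoverable
```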
While the results reported above show that GDPR has reduced firm performance on average, they do not reveal how different types of firms have been affected. As is well known, large companies have more technical and financial resources to comply with regulations (Brill 2011), invest more in lobbying (Bombardini 2008), and might be better placed to obtain consent for personal data processing from individual consumers (Goldfarb and Tucker 2011). For example, Facebook has reportedly hired some 1,000 engineers, managers, and lawyers globally in response to the new regulation. It also doubled its EU lobbying budget in 2017 relative to the previous year, when GDPR was announced. Indeed, according to LobbyFacts.eu, Google, Facebook and Apple now rank among the five biggest corporate spenders on lobbying in the EU, with annual budgets in excess of €3.5 million.
While these are significant costs that might reduce profits, the impact of the GDPR on the fortunes of big tech is ambiguous. As The New York Times writes, “Whether Europe’s tough approach is actually crimping the global tech giants is unclear… Amazon, Apple, Google and Facebook have continued to grow and add customers”. Indeed, by being better able to cope with the burdens of the regulation, these companies may have increased their market share at the expense of smaller companies (Johnson et al. 2020, Peukert et al. 2020). …(More)”.
Society won’t trust A.I. until business earns that trust
Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”
The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates.
Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, akin to the capability to dig a mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”
Japan to pitch data-sharing framework to bolster Asia supply chains
Nikkei coverage: “The Japanese government is set to propose a scheme to promote data-sharing among companies in Asia to strengthen supply chains in the region, Nikkei has learned.
The Ministry of Economy, Trade and Industry (METI) hopes that a secure data-sharing framework like the one developed in Europe will enable companies in Asia to smoothly exchange data, such as inventory information on products and parts, as well as information on potential disruptions in procurement.
The ministry will propose the idea as a key part of Japan’s digital trade policy at an expert panel meeting on Friday. The meeting will propose a major review of industrial policy to emphasize digitization and a decarbonized economy.
It sees Europe’s efforts as a role model in terms of information-sharing. The European Union is building a data distribution infrastructure, Gaia-X, to let companies in the region share information on supply chains.
The goal is to counter the monopoly on data held by large technology companies in the U.S. and China. The EU is promoting the sharing of data by connecting different cloud services among companies. Under Gaia-X, companies can limit the scope of data disclosure and the use of data provided to others, based on the concept of data sovereignty.
The scheme envisioned by METI will also allow companies to decide what type of data they share and how much. The infrastructure will be developed on a regional basis, with the participation of various countries.
Google and China’s Alibaba Group Holding offer data-sharing services for supply chains, but the Japanese government is concerned that it will be difficult to protect Japanese companies’ industrial secrets unless it develops its own data infrastructure….(More)”
Rehashing the Past: Social Equity, Decentralized Apps & Web 3.0
Opening blog by Jeffrey R. Yost of new series on Blockchain and Society: “Blockchain is a powerful technology with roots three decades old in a 1991 paper on (immutable) timestamping of digital content. This paper, by Bellcore’s Stuart Haber and W. Scott Stornetta, along with key (in both senses) crypto research of a half dozen future Turing Awardees (the Nobel of computer science: W. Diffie, M. Hellman, R. Rivest, A. Shamir, L. Adleman, S. Micali), and others, provided critical foundations for Bitcoin, blockchain, Non-Fungible Tokens (NFTs), and Decentralized Autonomous Organizations (DAOs). This initial and foundational blog post, of Blockchain and Society, seeks to address and analyze the history, sociology, and political economy of blockchain and cryptocurrency. Subsequent blogs will dive deeper into individual themes and topics on crypto’s sociocultural and political economy contexts….(More)”.
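For context, the core idea of the 1991 Haber–Stornetta construction is a hash chain: each timestamp certificate incorporates a hash of the previous certificate, so no record can be altered or backdated without invalidating every later link. A toy sketch of that linking idea (not the paper’s full protocol, which relies on a timestamping service and, later, Merkle trees):

```python
import hashlib
import time

def add_timestamp(chain: list, document: bytes) -> dict:
    """Append a certificate that links this document to the previous certificate."""
    prev_hash = chain[-1]["cert_hash"] if chain else "0" * 64
    cert = {
        "time": time.time(),
        "doc_hash": hashlib.sha256(document).hexdigest(),
        "prev_hash": prev_hash,
    }
    cert["cert_hash"] = hashlib.sha256(
        (cert["prev_hash"] + cert["doc_hash"] + str(cert["time"])).encode()
    ).hexdigest()
    chain.append(cert)
    return cert

chain: list = []
add_timestamp(chain, b"first document")
add_timestamp(chain, b"second document")
# Tampering with the first record would change its cert_hash and break the
# prev_hash link stored in every later certificate.
```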
Consumer Reviews and Regulation: Evidence from NYC Restaurants
Paper by Chiara Farronato & Georgios Zervas: “We investigate the informativeness of hygiene signals in online reviews, and their effect on consumer choice and restaurant hygiene. We first extract signals of hygiene from Yelp. Among all dimensions that regulators monitor through mandated restaurant inspections, we find that reviews are more informative about hygiene dimensions that consumers directly experience – food temperature and pests – than other dimensions. Next, we find causal evidence that consumer demand is sensitive to these hygiene signals. We also find suggestive evidence that restaurants that are more exposed to Yelp are cleaner along dimensions for which online reviews are more informative…(More)”.
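The abstract does not spell out how the hygiene signals are extracted from review text, but a crude stand-in for that step, flagging reviews that mention pests or food temperature, might look like the sketch below (the keyword lists are illustrative only, not the paper’s method):

```python
import re

# Hypothetical keyword lists for the two hygiene dimensions the paper finds
# most informative in reviews: pests and food temperature.
PEST_TERMS = re.compile(r"\b(roach|cockroach|mouse|mice|rat|fly|flies|bug)\b", re.I)
TEMP_TERMS = re.compile(r"\b(cold|lukewarm|undercooked|raw|reheated)\b", re.I)

def hygiene_signals(review_text: str) -> dict:
    """Per-review hygiene flags based on simple keyword matching."""
    return {
        "mentions_pests": bool(PEST_TERMS.search(review_text)),
        "mentions_food_temp": bool(TEMP_TERMS.search(review_text)),
    }

print(hygiene_signals("Food came out cold and I saw a roach near the counter."))
# -> {'mentions_pests': True, 'mentions_food_temp': True}
```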
Guide for Policymakers on Making Transparency Meaningful
Report by CDT: “In 2020, the Minneapolis police used a unique kind of warrant to investigate vandalism of an AutoZone store during the protests over the murder of George Floyd by a police officer. This “geofence” warrant required Google to turn over data on all users within a certain geographic area around the store at a particular time — which would have included not only the vandal, but also protesters, bystanders, and journalists.
It was only several months later that the public learned of the warrant, because Google notified a user that his account information was subject to the warrant, and the user told reporters. And it was not until a year later — when Google first published a transparency report with data about geofence warrants — that the public learned the total number of geofence warrants Google receives from U.S. authorities and of a recent “explosion” in their use. New York lawmakers introduced a bill to forbid geofence warrants because of concerns they could be used to target protesters, and, in light of Google’s transparency report, some civil society organizations are calling for them to be banned, too.
Technology company transparency matters, as this example shows. Transparency about governmental and company practices that affect users’ speech, access to information, and privacy from government surveillance online helps us understand and check the ways in which tech companies and governments wield power and impact people’s human rights.
Policymakers are increasingly proposing transparency measures as part of their efforts to regulate tech companies, both in the United States and around the world. But what exactly do we mean when we talk about transparency when it comes to technology companies like social networks, messaging services, and telecommunications firms? A new report from CDT, Making Transparency Meaningful: A Framework for Policymakers, maps and describes four distinct categories of technology company transparency:
- Transparency reports that provide aggregated data and qualitative information about moderation actions, disclosures, and other practices concerning user-generated content and government surveillance;
- User notifications about government demands for their data and moderation of their content;
- Access to data held by intermediaries for independent researchers, public policy advocates, and journalists; and
- Public-facing analysis, assessments, and audits of technology company practices with respect to user speech and privacy from government surveillance.
Different forms of transparency are useful for different purposes or audiences, and they also give rise to varying technical, legal, and practical challenges. Making Transparency Meaningful is designed to help policymakers and advocates understand the potential benefits and tradeoffs that come with each form of transparency. This report addresses key questions raised by proposed legislation in the United States and Europe that seeks to mandate one or more of these types of transparency and thereby hold tech companies and governments more accountable….(More)”.
The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence
Paper by Erik Brynjolfsson: “In 1950, Alan Turing proposed an “imitation game” as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions are indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.
But not all types of AI are human-like – in fact, many of the most powerful systems are very different from humans – and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers…(More)”
Artificial intelligence searches for the human touch
Madhumita Murgia at the Financial Times: “For many outside the tech world, “data” means soulless numbers. Perhaps it causes their eyes to glaze over with boredom. Whereas for computer scientists, data means rows upon rows of rich raw material, there to be manipulated.
Yet the siren call of “big data” has been more muted recently. There is a dawning recognition that, in tech such as artificial intelligence, “data” equals human beings.
AI-driven algorithms are increasingly impinging upon our everyday lives. They assist in making decisions across a spectrum that ranges from advertising products to diagnosing medical conditions. It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.
Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage. By inviting those with lived experiences to participate, both capture the mood among those researching the impact of artificial intelligence.
The Ipsos Mori survey found that 60 per cent of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. Latin Americans in particular think AI will trigger changes in social needs such as education and employment, while Chinese respondents were most likely to believe it would change transportation and their homes.
The geographic and demographic differences in both surveys are revealing. Globally, about half said AI technology has more benefits than drawbacks, while two-thirds felt gloomy about its impact on their individual freedom and legal rights. But figures for different countries show a significant split within this. Citizens from the “global south”, a catch-all term for non-western countries, were much more likely to “have a positive outlook on the impact of AI-powered products and services in their lives”. Large majorities in China (76 per cent) and India (68 per cent) said they trusted AI companies. In contrast, only 35 per cent in the UK, France and US expressed similar trust.
In the University of Tokyo study, researchers discovered that women, older people and those with more subject knowledge were most wary of the risks of AI, perhaps an indicator of their own experiences with these systems. The Japanese mathematician Noriko Arai has, for instance, written about sexist and gender stereotypes encoded into “female” carer and receptionist robots in Japan.
The surveys underline the importance of AI designers recognising that we don’t all belong to one homogenous population, with the same understanding of the world. But they’re less insightful about why differences exist….(More)”.