Data Collaboration, Pooling and Hoarding under Competition Law


Paper by Bjorn Lundqvist: “In the Internet of Things era, devices will monitor and collect data, whilst device-producing firms will store, distribute, analyse and re-use data on a grand scale. A great deal of data analytics will be used to enable firms to understand and make use of the collected data. The infrastructure around the collected data is controlled, and access to the data flow is thus restricted on technical, but also on legal, grounds. Legally, the data are being obscured behind a thicket of property rights, including intellectual property rights. Therefore, there is no general “data commons” for everyone to enjoy.

If firms would like to combine data, they need to give each other access by sharing, trading, or pooling the data. On the one hand, industry-wide pooling of data could increase the efficiency of certain services and contribute to the innovation of other services, e.g., think about self-driving cars or personalized medicine. On the other hand, firms combining business data may use the data, not to advance their services or products, but to collude, to exclude competitors or to abuse their market position. Indeed, by combining their data in a pool, they can gain market power, and, hence, the ability to violate competition law. Moreover, we also see firms hoarding data from various sources, creating de facto data pools. This article will discuss what implications firms’ combining of data in data pools might have for competition, and when competition law should be applicable. It develops the idea that data pools harbour great opportunities, whilst acknowledging that there are still risks to take into consideration, and to regulate….(More)”.

Recalculating GDP for the Facebook age


Gillian Tett at the Financial Times: “How big is the impact of Facebook on our lives? That question has caused plenty of hand-wringing this year, as revelations have tumbled out about the political influence of Big Tech companies.

Economists are attempting to look at this question too — but in a different way. They have been quietly trying to calculate the impact of Facebook on gross domestic product data, ie to measure what our social-media addiction is doing to economic output….

Kevin Fox, an Australian economist, thinks there is. Working with four other economists, including Erik Brynjolfsson, a professor at MIT, he recently surveyed consumers to see what they would “pay” for Facebook in monetary terms, concluding conservatively that this was about $42 a month. Extrapolating this to the wider economy, he then calculated that the “value” of the social-media platform is equivalent to 0.11 per cent of US GDP. That might not sound transformational. But this week Fox presented the group’s findings at an IMF conference on the digital economy in Washington DC and argued that if Facebook activity had been counted as output in the GDP data, it would have raised the annual average US growth rate from 1.83 per cent to 1.91 per cent between 2003 and 2017. The number would rise further if you included other platforms – researchers believe that “maps” and WhatsApp are particularly important – or other services.  Take photographs.

Back in 2000, as the group points out, about 80 billion photos were taken each year at a cost of 50 cents a picture in camera and processing fees. This was recorded in GDP. Today, 1.6 trillion photos are taken each year, mostly on smartphones, for “free”, and excluded from that GDP data. What would happen if that was measured too, along with other types of digital services?
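The measurement gap the article describes can be made concrete with a quick back-of-the-envelope calculation using its own figures. (A rough sketch: the US GDP total below is an assumed round figure of roughly $19tn for 2017, used only for illustration and not taken from the piece.)

```python
# Back-of-the-envelope arithmetic using the article's figures.
# The GDP total is an assumption (~$19tn for the US in 2017), used
# only to turn the quoted percentage into a dollar amount.

US_GDP_2017 = 19e12            # assumed, USD

# Photography, circa 2000: paid for, so counted in GDP.
photos_2000 = 80e9             # photos per year
cost_per_photo = 0.50          # USD, camera and processing fees
photo_spend_2000 = photos_2000 * cost_per_photo   # spending once recorded in GDP

# Facebook today: "free" to users, so excluded from GDP.
facebook_share = 0.0011        # 0.11% of US GDP, per the survey estimate
facebook_value = facebook_share * US_GDP_2017     # implied dollar "value"

print(f"Photo spending once in GDP: ${photo_spend_2000 / 1e9:.0f}bn/year")
print(f"Implied value of Facebook: ${facebook_value / 1e9:.1f}bn/year")
```

On these figures, roughly $40bn a year of photo spending simply vanished from measured output, while an activity the survey values at around $21bn a year was never counted in the first place.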

The bad news is that there is no consensus among economists on this point, and the debate is still at a very early stage. … A separate paper from Charles Hulten and Leonard Nakamura, economists at the University of Maryland and Philadelphia Fed respectively, explained another idea: a measurement known as “EGDP” or “Expanded GDP”, which incorporates “welfare” contributions from digital services. “The changes wrought by the digital revolution require changes to official statistics,” they said.

Yet another paper from Nakamura, co-written with Diane Coyle of Cambridge University, argued that we should also reconfigure the data to measure how we “spend” our time, rather than “just” how we spend our money. “To recapture welfare in the age of digitalisation, we need shadow prices, particularly of time,” they said. Meanwhile, US government number-crunchers have been trying to measure the value of “free” open-source software, such as R, Python, Julia and JavaScript, concluding that if captured in statistics these would be worth about $3bn a year. Another team of government statisticians has been trying to value the data held by companies, estimating, using one method, that Amazon’s data is currently worth $125bn, with a 35 per cent annual growth rate, while Google’s is worth $48bn, growing at 22 per cent each year. It is unlikely that these numbers – and methodologies – will become mainstream any time soon….(More)”.
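To put the cited growth rates in perspective, here is a minimal sketch of how those data valuations compound. The inputs come from the article; the five-year horizon is an arbitrary illustrative assumption, not a forecast from the piece.

```python
# Compound the data valuations cited in the article forward in time.
# Inputs ($125bn at 35%/yr for Amazon, $48bn at 22%/yr for Google)
# come from the FT piece; the horizon is an illustrative assumption.

def project(value_bn: float, annual_growth: float, years: int) -> float:
    """Compound a valuation at a constant annual growth rate."""
    return value_bn * (1 + annual_growth) ** years

amazon_in_5y = project(125, 0.35, 5)
google_in_5y = project(48, 0.22, 5)
print(f"Amazon's data after five years: ${amazon_in_5y:.0f}bn")
print(f"Google's data after five years: ${google_in_5y:.0f}bn")
```

At those rates, Amazon’s data stock would more than quadruple in five years, which is precisely why the question of whether and how to count it in official statistics matters.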

Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focusing on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

This data is complex and diverse, yet public — I am not suggesting that people hand over private, privileged data, but actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people die preventable deaths every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”

The Blockchain and the New Architecture of Trust


Book by Kevin Werbach: “The blockchain entered the world on January 3, 2009, introducing an innovative new trust architecture: an environment in which users trust a system—for example, a shared ledger of information—without necessarily trusting any of its components. The cryptocurrency Bitcoin is the most famous implementation of the blockchain, but hundreds of other companies have been founded and billions of dollars invested in similar applications since Bitcoin’s launch. Some see the blockchain as offering more opportunities for criminal behavior than benefits to society. In this book, Kevin Werbach shows how a technology resting on foundations of mutual mistrust can become trustworthy.

The blockchain, built on open software and decentralized foundations that allow anyone to participate, seems like a threat to any form of regulation. In fact, Werbach argues, law and the blockchain need each other. Blockchain systems that ignore law and governance are likely to fail, or to become outlaw technologies irrelevant to the mainstream economy. That, Werbach cautions, would be a tragic waste of potential. If, however, we recognize the blockchain as a kind of legal technology that shapes behavior in new ways, it can be harnessed to create tremendous business and social value….(More)”

Tricky Design: The Ethics of Things


Book edited by Tom Fisher and Lorraine Gamman: “Tricky Design responds to the burgeoning of scholarly interest in the cultural meanings of objects, by addressing the moral complexity of certain designed objects and systems.

The volume brings together leading international designers, scholars and critics to explore some of the ways in which the practice of design and its outcomes can have a dark side, even when the intention is to design for the public good. Considering a range of designed objects and relationships, including guns, eyewear, assisted suicide kits, anti-rape devices, passports and prisons, the contributors offer a view of design as both progressive and problematic, able to propose new material and human relationships, yet also constrained by social norms and ideology. 

This contradictory, tricky quality of design is explored in the editors’ introduction, which positions the objects, systems, services and ‘things’ discussed in the book in relation to the idea of the trickster that occurs in anthropological literature, as well as in classical thought, discussing design interventions that have positive and negative ethical consequences. These include objects, both material and ‘immaterial’, systems with both local and global scope, and also different processes of designing.

This important new volume brings a fresh perspective to the complex nature of ‘things‘, and makes a truly original contribution to debates in design ethics, design philosophy and material culture….(More)”

Declaration of Cities Coalition for Digital Rights


New York City, Barcelona and Amsterdam: “We, the undersigned cities, formally come together to form the Cities Coalition for Digital Rights, to protect and uphold human rights on the internet at the local and global level.

The internet has become inseparable from our daily lives. Yet, every day, there are new cases of digital rights abuse, misuse, misinformation and concentration of power around the world: freedom of expression being censored; personal information, including our movements and communications, being monitored, shared and sold without consent; ‘black box’ algorithms being used to make unaccountable decisions; social media being used as a tool of harassment and hate speech; and democratic processes and public opinion being undermined.

As cities, the closest democratic institutions to the people, we are committed to eliminating impediments to harnessing technological opportunities that improve the lives of our constituents, and to providing trustworthy and secure digital services and infrastructures that support our communities. We strongly believe that human rights principles such as privacy, freedom of expression, and democracy must be incorporated by design into digital platforms starting with locally-controlled digital infrastructures and services.

As a coalition, and with the support of the United Nations Human Settlements Program (UN-Habitat), we will share best practices, learn from each other’s challenges and successes, and coordinate common initiatives and actions. Inspired by the Internet Rights and Principles Coalition (IRPC), the work of 300 international stakeholders over the past ten years, we are committed to the following five evolving principles:

01. Universal and equal access to the internet, and digital literacy

02. Privacy, data protection and security

03. Transparency, accountability, and non-discrimination of data, content and algorithms

04. Participatory Democracy, diversity and inclusion

05. Open and ethical digital service standards”

Governments fail to capitalise on swaths of open data


Valentina Romei in the Financial Times: “…Behind the push for open data is a desire to make governments more transparent, accountable and efficient — but also to allow businesses to create products and services that spark economic development. The global annual opportunity cost of failing to do this effectively is about $5tn, according to one estimate from McKinsey, the consultancy.

The UK is not the only country falling short, says the Open Data Barometer, which monitors the status of government data across the world. Among the 30 leading governments — those that have championed the open data movement and have made progress over five years — “less than a quarter of the data with the biggest potential for social and economic impact” is truly open. This goal of transparency, it seems, has not proved sufficient for “creating value” — the movement’s latest focus. In 2015, nearly a decade after advocates first discussed the principles of open government data, 62 countries adopted the six Open Data Charter principles — which called for data to be open by default, usable and comparable….

The use of open data has already borne fruit for some countries. In 2015, Japan’s ministry of land, infrastructure and transport set up an open data site aimed at disabled and elderly people. The 7,000 data points published are downloadable and the service can be used to generate a map that shows which passenger terminals on train, bus and ferry networks provide barrier-free access.

In the US, The Climate Corporation, a digital agriculture company, combined 30 years of weather data and 60 years of crop yield data to help farmers increase their productivity. And in the UK, subscription service Land Insight merges different sources of land data to help individuals and developers compare property information, forecast selling prices, contact land owners and track planning applications…

Open Data 500, an international network of organisations that studies the use and impact of open data, reveals that private companies in South Korea are using government agency data, with technology, advertising and business services among the biggest users. It shows, for example, that Archidraw, a four-year-old Seoul-based company that provides 3D visualisation tools for interior design and property remodelling, has used mapping data from the Ministry of Land, Infrastructure and Transport…(More)”.

A Behavioral Economics Approach to Digitalisation


Paper by Dirk Beerbaum and Julia M. Puaschunder: “A growing body of academic research in the fields of behavioural economics, political science and psychology demonstrates how an invisible hand can nudge people’s decisions towards a preferred option. Contrary to the assumptions of neoclassical economics, supporters of nudging argue that people have problems coping with a complex world, because of their limited knowledge and their restricted rationality. Technological improvement in the age of information has increased the possibilities to control the innocent social media user or penalise private investors and reap the benefits of their existence in hidden persuasion and discrimination. Nudging enables nudgers to plunder the simple uneducated and uninformed citizen and investor, who is neither aware of the nudging strategies nor able to oversee the tactics used by the nudgers (Puaschunder 2017a, b; 2018a, b).

The nudgers are thereby legally protected by the democratically assigned positions they hold. The law of motion of the nudging societies holds an unequal concentration of power of those who have access to compiled data and coding rules, relevant for political power and influencing the investor’s decision usefulness (Puaschunder 2017a, b; 2018a, b). This paper takes as a case the “transparency technology XBRL (eXtensible Business Reporting Language)” (Sunstein 2013, 20), which should make data more accessible as well as usable for private investors. It is part of the choice architecture on regulation by governments (Sunstein 2013). However, XBRL is bound to a taxonomy (Piechocki and Felden 2007).

Considering theoretical literature and field research, a representation issue (Beerbaum, Piechocki and Weber 2017) for principles-based accounting taxonomies exists, which intelligent machines applying Artificial Intelligence (AI) (Mwilu, Prat and Comyn-Wattiau 2015) nudge to facilitate decision usefulness. This paper conceptualizes ethical questions arising from the taxonomy engineering based on machine learning systems: Should the objective of the coding rule be to support or to influence human decision making or rational artificiality? This paper therefore advocates for a democratisation of information, education and transparency about nudges and coding rules (Puaschunder 2017a, b; 2018a, b)…(More)”.


Better Ways to Communicate Hospital Data to Physicians


Scott Falk, John Cherf and Julie Schulz at the Harvard Business Review: “We recently conducted an in-depth study at Lumere to gain insight into physicians’ perceptions of clinical variation and the factors influencing their choices of drugs and devices. Based on a survey of 276 physicians, our study results show that it’s necessary to consistently and frequently share cost data and clinical evidence with physicians, regardless of whether they’re affiliated with or directly employed by a hospital….

There are multiple explanations as to why health system administrators have been slow to share data with physicians. The two most common challenges are difficulty obtaining accurate, clinically meaningful data and lack of knowledge among administrators about communicating data.

When it comes to obtaining accurate, meaningful data, the reality is that many health systems do not know where to start. Between disparate data-collection systems, varied physician needs, and an overwhelming array of available clinical evidence, it can be daunting to try to develop a robust, yet streamlined, approach.

As for the second problem, many administrators have simply not been trained to effectively communicate data. Health system leaders tend to be more comfortable talking about costs, but physicians generally focus on clinical outcomes. As a result, physicians frequently have follow-up questions that administrators interpret as pushback. It is important to understand what physicians need.

Determine the appropriate amount and type of data to share. Using evidence and data can foster respectful debate, provide honest education, and ultimately align teams.

Physicians are driven by their desire to improve patient outcomes and therefore want the total picture. This includes access to published evidence to help choose cost-effective drug and device alternatives without hurting outcomes. Health system administrators need to provide clinicians with access to a wide range of data (not only data about costs). Ensuring that physicians have a strong voice in determining which data to share will help create alignment and trust. A more nuanced value-based approach that accounts for important clinical and patient-centered outcomes (e.g., length of stay, post-operative recovery profile) combined with cost data may be the most effective solution.

While physicians generally report wanting more cost data, not all physicians have the experience and training to appropriately incorporate it into their decision making. Surveyed physicians who have had exposure to a range of cost data, data highlighting clinical variation, and practice guidelines generally found cost data more influential in their selection of drugs and devices, regardless of whether they shared in savings under value-based care models. This was particularly true for more veteran physicians and those with private-practice experience who have had greater exposure to managing cost information.

Health systems can play a key role in helping physicians use cost and quality data to make cost-effective decisions. We recommend that health systems identify a centralized data/analytics department that includes representatives of both quality-improvement teams and technology/informatics to own the process of streamlining, analyzing, and disseminating data.

Compare data based on contemporary evidence-based guidelines. Physicians would like to incorporate reliable data into their decision-making when selecting drugs and devices. In our survey, 54% of respondents reported that it was either “extremely important” or “very important” that hospitals use peer-reviewed literature and clinical evidence to support the selection of medical devices. Further, 56% of respondents said it was “extremely important” or “very important” that physicians be involved in using data to develop clinical protocols, guidelines, and best practices….(More)”.