Open-Data: A Solution When Data Constitutes an Essential Facility?


Chapter by Claire Borsenberger, Mathilde Hoang and Denis Joram: “Thanks to appropriate data algorithms, firms, especially those on-line, are able to extract detailed knowledge about consumers and markets. This raises the question of the essential facility character of data. Moreover, the features of digital markets lead to a concentration of this core input in the hands of a few big “superstars” and arouse legitimate economic and societal concerns. In a more and more data-driven society, one could ask if data openness is a solution to deal with power derived from data concentration. We conclude that only a case-by-case approach should be followed. Mandatory open data policy should be conditioned on an ex-ante cost-benefit analysis proving that the benefits of disclosure exceed its costs….(More)”.

Privacy concerns collide with the public interest in data


Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued this was designed to gather better figures for the public interest. However, it tipped the banks into a legal quandary. Under Canadian law (as in most western countries) companies are required to help StatsCan by supplying operating information. But data privacy laws in Canada also say that individual bank records are confidential. When the StatsCan request leaked out, it sparked an outcry — forcing the agency to freeze its plans. “It’s a mess,” a senior Canadian banker says, adding that the laws “seem contradictory”.

Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.

Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing statistics, because official statistics struggle to capture how the economy is changing. Take Canada. StatsCan has hitherto tracked household consumption by following retail sales statistics, supplemented by telephone surveys. But consumers are becoming less willing to answer their phones, which undermines the accuracy of surveys, and consumption of digital services cannot be easily tracked. ...

But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide anonymised data on education and employment to municipal and city bodies in America and beyond, to help them track local trends; the group says this is in the public interest for policy purposes, as “it offers a different perspective” than official data sources. But it is one thing for LinkedIn to offer anonymised data when customers have signed consent forms permitting the transfer of data; it is quite another for banks (or other companies) that have operated with strict privacy rules. If nothing else, the StatsCan saga shows that there urgently needs to be more public debate, and more clarity, around these rules. Consumer privacy issues matter (a lot). But as corporate data mountains grow, we will need to ask whether we want to live in a world where Amazon and Google — and Mastercard and JPMorgan — know more about economic trends than central banks or finance ministries. Personally, I would say “no”. But sooner or later politicians will need to decide on their priorities in this brave new Big Data world; the issue cannot be simply left to the half-hidden statisticians….(More)”.

This website can tell you what kind of person you are based on where you live. See for yourself what your ZIP code says about you


Meira Geibel at Business Insider:

  • “Esri’s Tapestry technology includes a ZIP code look-up feature where you can see the top demographics, culture, and lifestyle choices in your area.
  • Each ZIP code shows a percentage breakdown of Esri’s 67 unique market-segment classifications with kitschy labels like “Trendsetters” and “Savvy Suburbanites.”
  • The data can be altered to show median age, population density, people with graduate and professional degrees, and the percentage of those who charge more than $1,000 to their credit cards monthly.

Where you live says a lot about you. While you’re not totally defined by where you go to sleep at night, you may have more in common with your neighbors than you think.

That’s according to Esri, a geographic-information firm based in California, which offers a “ZIP Lookup” feature. The tool breaks down the characteristics of the individuals in a given neighborhood by culture, lifestyle, and demographics based on data collected from the area.

The data is then sorted into 67 unique market-segment classifications that have rather kitschy titles like “Trendsetters” and “Savvy Suburbanites.”

You can try it for yourself: Just head to the website, type in your ZIP code, and you’ll be greeted with a breakdown of your ZIP code’s demographic characteristics….(More)”.
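The kind of result the lookup returns — a percentage breakdown of market segments for one ZIP code — can be sketched as a simple data structure. Only the segment labels (“Trendsetters”, “Savvy Suburbanites”) come from Esri’s published Tapestry classification; the ZIP code and percentages below are invented for illustration and are not Esri data or Esri’s API:

```python
# Illustrative sketch only: fictional ZIP code and made-up percentages,
# using real Tapestry segment labels mentioned in the article.
from typing import Dict, List, Tuple

# segment name -> share of households (%), for one hypothetical ZIP code
TAPESTRY_BREAKDOWN: Dict[str, Dict[str, float]] = {
    "00000": {"Trendsetters": 41.5, "Savvy Suburbanites": 33.0, "Top Tier": 25.5},
}

def top_segments(zip_code: str, n: int = 2) -> List[Tuple[str, float]]:
    """Return the n largest market segments for a ZIP code, largest first."""
    breakdown = TAPESTRY_BREAKDOWN.get(zip_code, {})
    return sorted(breakdown.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_segments("00000"))  # -> [('Trendsetters', 41.5), ('Savvy Suburbanites', 33.0)]
```

An unknown ZIP code simply yields an empty list, mirroring how a lookup tool would report no match.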

The Concept of the Corporation


John Kay: “For the past fifty years or so, the economic theory of the firm has been based on the paradigmatic model of corporate activity which perceives the firm as a nexus of contracts, its boundaries defined by the relative transaction costs of market-based and hierarchical organisation.  Issues of both corporate governance and corporate management are seen as principal-agent problems, to be resolved by the establishment of appropriate incentives.  This approach has had considerable influence on corporate behaviour and on public policy.  Business has placed ever-greater emphasis on ‘shareholder value’ and incentive-based schemes of executive remuneration have become widespread.

            In this paper, I describe the origins, development and effect of the ‘markets and hierarchies’ approach.  I argue that this reductionist account fails at a political level, giving no coherent account of the legitimacy of such corporate activity – that is, no answer to the question ‘what gives them the right to do that?’ – and additionally that the model bears little relation to the reality of successful corporations.  I describe an alternative tradition in the understanding of business, owing more to organisation theory, corporate strategy and business history, which treats the concept of corporate personality as more than a legal doctrine.  In this view, corporations are social organisations: their competitive advantage is based on distinctive capabilities which are the product of their history, their internal architecture and organisational design, and the relationships with employees, customers, suppliers and commentators at large which arise from them.  This is not just a more plausible account of what firms actually do: by recognising the social foundations of corporations, we are better placed to understand how and why corporations and their varied stakeholders succeed…(More)”

EU negotiators agree on new rules for sharing of public sector data


European Commission Press Release: “Negotiators from the European Parliament, the Council of the EU and the Commission have reached an agreement on a revised directive that will facilitate the availability and re-use of public sector data.

Data is the fuel that drives the growth of many digital products and services. Making sure that high-quality, high-value data from publicly funded services is widely and freely available is a key factor in accelerating European innovation in highly competitive fields, such as artificial intelligence, that require access to vast amounts of high-quality data.

In full compliance with the EU General Data Protection Regulation, the new Directive on Open Data and Public Sector Information (PSI) updates the framework setting out the conditions under which public sector data – which can be anything from anonymised personal data on household energy use to general information about national education or literacy levels – should be made available for re-use, with a particular focus on the increasing amount of high-value data now available.

Vice-President for the Digital Single Market Andrus Ansip said: “Data is increasingly the lifeblood of today’s economy and unlocking the potential of public open data can bring significant economic benefits. The total direct economic value of public sector information and data from public undertakings is expected to increase from €52 billion in 2018 to €194 billion by 2030. With these new rules in place, we will ensure that we can make the most of this growth.”

Commissioner for Digital Economy and Society Mariya Gabriel said: “Public sector information has already been paid for by the taxpayer. Making it more open for re-use benefits the European data economy by enabling new innovative products and services, for example based on artificial intelligence technologies. But beyond the economy, open data from the public sector is also important for our democracy and society because it increases transparency and supports a facts-based public debate.”

As part of the EU Open Data policy, rules are in place to encourage Member States to facilitate the re-use of data from the public sector with minimal or no legal, technical and financial constraints. But the digital world has changed dramatically since they were first introduced in 2003.

What do the new rules cover?

  • All public sector content that can be accessed under national access to documents rules is in principle freely available for re-use. Public sector bodies will not be able to charge more than the marginal cost for the re-use of their data, except in very limited cases. This will allow more SMEs and start-ups to enter new markets in providing data-based products and services.
  • A particular focus will be placed on high-value datasets such as statistics or geospatial data. These datasets have a high commercial potential, and can speed up the emergence of a wide variety of value-added information products and services.
  • Public service companies in the transport and utilities sector generate valuable data. The decision on whether or not their data has to be made available is covered by different national or European rules, but when their data is available for re-use, they will now be covered by the Open Data and Public Sector Information Directive. This means they will have to comply with the principles of the Directive and ensure the use of appropriate data formats and dissemination methods, while still being able to set reasonable charges to recover related costs.
  • Some public bodies strike complex data deals with private companies, which can potentially lead to public sector information being ‘locked in’. Safeguards will therefore be put in place to reinforce transparency and to limit the conclusion of agreements which could lead to exclusive re-use of public sector data by private partners.
  • More real-time data, available via Application Programming Interfaces (APIs), will allow companies, especially start-ups, to develop innovative products and services, e.g. mobility apps. Publicly-funded research data is also being brought into the scope of the directive: Member States will be required to develop policies for open access to publicly funded research data while harmonised rules on re-use will be applied to all publicly-funded research data which is made accessible via repositories….(More)”.
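The Directive’s emphasis on APIs for real-time data can be illustrated with a minimal sketch of consuming one page of a paginated open-data feed. Everything here — the payload shape, field names, and the transit-delay example — is invented for illustration; it is not taken from the Directive or from any real portal’s API:

```python
# Hypothetical sketch: parsing one page of a paginated open-data API response.
# The JSON structure ("records", "next_page") is an assumption for illustration.
import json
from typing import List, Optional, Tuple

def parse_page(payload: str) -> Tuple[List[dict], Optional[str]]:
    """Extract the records and the next-page token from one JSON response."""
    doc = json.loads(payload)
    return doc["records"], doc.get("next_page")

# A single fabricated response from an imaginary transit-delay feed:
sample = '{"records": [{"stop": "Centraal", "delay_min": 3}], "next_page": null}'
records, next_page = parse_page(sample)
print(records, next_page)  # -> [{'stop': 'Centraal', 'delay_min': 3}] None
```

A client would keep requesting pages until `next_page` comes back empty — the kind of simple, machine-readable access the Directive wants public bodies to offer start-ups.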

Looking after and using data for public benefit


Heather Savory at the Office for National Statistics (UK): “Official Statistics are for the benefit of society and the economy and help Britain to make better decisions. They allow the formulation of better public policy and the effective measurement of those policies. They inform the direction of economic and commercial activities. They provide valuable information for analysts, researchers, public and voluntary bodies. They enable the public to hold organisations that spend public money to account, thus informing democratic debate.

The ability to harness the power of data is critical in enabling official statistics to support the most important decisions facing the country.

Under the new powers in the Digital Economy Act, ONS can now gain access to new and different sources of data including ‘administrative’ data from government departments and commercial data. Alongside the availability of these new data sources, ONS is experiencing a strong demand for ad hoc insights alongside our traditional statistics.

We need to deliver more, faster, finer-grained insights into the economy and society. We need to deliver high quality, trustworthy information, on a faster timescale, to help decision-making. We will increasingly develop innovative data analysis methods, for example using images to gain insight from the work we’ve recently announced on Urban Forests….

I should explain here that our data is not held in one big linked database; we’re architecting our Data Access Platform so that data can be linked in different ways for different purposes. This is designed to preserve data confidentiality, so only the necessary subset of data is accessible by authorised people, for a certain purpose. To avoid compromising their effectiveness, we do not make public the specific details of the security measures we have in place, but our recently tightened security regime, which is independently assured by trusted external bodies, includes:

  • physical measures to restrict who can access places where data is stored;
  • protective measures for all data-related IT services;
  • measures to restrict who can access systems and data held by ONS;
  • controls to guard against staff or contractors misusing their legitimate access to data, including vetting to an appropriate level for the sensitivity of data to which they might have access.
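The purpose-limited access model described above — no single big linked database, with only the necessary subset of data visible to authorised people for a given purpose — can be sketched roughly as a view-based filter. The view names and field lists below are invented for illustration and are not ONS’s actual configuration:

```python
# Hedged sketch of purpose-based access: each approved purpose exposes only
# a named subset of fields. All names here are illustrative assumptions.
from typing import Dict, Set

APPROVED_VIEWS: Dict[str, Set[str]] = {
    "energy-trends": {"region", "month", "kwh_used"},      # no personal identifiers
    "education-stats": {"region", "qualification_level"},
}

def fields_visible(purpose: str, requested: Set[str]) -> Set[str]:
    """Grant only the intersection of the requested fields and the purpose's view."""
    return requested & APPROVED_VIEWS.get(purpose, set())

print(sorted(fields_visible("energy-trends", {"name", "kwh_used", "region"})))
# -> ['kwh_used', 'region']
```

Note that a request for an identifying field like `name` is silently dropped, and an unrecognised purpose gets nothing at all — access defaults to closed.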

One of the things I love about working in the public sector is that our work can be shared openly.

We live in a rapidly changing and developing digital world and we will continue to monitor and assess the data standards and security measures in place to ensure they remain strong and effective. So, as well as sharing this work openly to reassure all our data suppliers that we’re taking good care of their data, we’re also seeking feedback on our revised data policies.

The same data can provide different insights when viewed through different lenses or in different combinations. The more data is shared – with the appropriate safeguards of course – the more it has to give.

If you work with data, you’ll know that collaborating with others in this space is key and that we need to be able to share data more easily when it makes sense to do so. So, the second reason for sharing this work openly is that, if you’re in the technical space, we’d value your feedback on our approach; and if you’re in the data space and would like to adopt the same approach, we’d love to support you with that – so that we can all share data more easily in the future….(More)

ONS’s revised policies on the use, management and security of data can be found here.

Gradually, Then Suddenly


Blogpost by Tim O’Reilly: “There’s a passage in Ernest Hemingway’s novel The Sun Also Rises in which a character named Mike is asked how he went bankrupt. “Two ways,” he answers. “Gradually, then suddenly.”

Technological change happens in much the same way. Small changes accumulate, and suddenly the world is a different place. Throughout my career at O’Reilly Media, we’ve tracked and fostered a lot of “gradually, then suddenly” movements: the World Wide Web, open source software, big data, cloud computing, sensors and ubiquitous computing, and now the pervasive effects of AI and algorithmic systems on society and the economy.

What are some of the things that are in the middle of their “gradually, then suddenly” transition right now? The list is long; here are a few of the areas that are on my mind.

1) AI and algorithms are everywhere

The most important trend for readers of this newsletter to focus on is the development of new kinds of partnership between human and machine. We take for granted that algorithmic systems do much of the work at online sites like Google, Facebook, Amazon, and Twitter, but we haven’t fully grasped the implications. These systems are hybrids of human and machine. Uber, Lyft, and Amazon Robotics brought this pattern to the physical world, reframing the corporation as a vast, buzzing network of humans both guiding and guided by machines. In these systems, the algorithms decide who gets what and why; they’re changing the fundamentals of market coordination in ways that gradually, then suddenly, will become apparent.

2) The rest of the world is leapfrogging the US

The volume of mobile payments in China is $13 trillion versus the US’s $50 billion, while credit cards never took hold there. Already Zipline’s on-demand drones are delivering 20% of all blood supplies in Rwanda and will be coming soon to other countries (including the US). In each case, the lack of existing infrastructure turned out to be an advantage in adopting a radically new model. Expect to see this pattern recur, as incumbents and old thinking hold back the adoption of new models.

9) The crisis of faith in government

Ever since Jennifer Pahlka and I began working on the Gov 2.0 Summit back in 2008, we’ve been concerned that if we can’t get government up to speed on 21st century technology, a critical pillar of the good society will crumble. When we started that effort, we were focused primarily on government innovation; over time, through Jen’s work at Code for America and the United States Digital Service, that shifted to a focus on making sure that government services actually work for those who need them most. Michael Lewis’s latest book, The Fifth Risk, highlights just how bad things might get if we continue to neglect and undermine the machinery of government. It’s not just the political fracturing of our country that should concern us; it’s the fact that government plays a critical role in infrastructure, in innovation, and in the safety net. That role has gradually been eroded, and the cracks that are appearing in the foundation of our society are coming at the worst possible time….(More)”.

Paying Users for Their Data Would Exacerbate Digital Inequality


Blog post by Eline Chivot: “Writing ever more complicated and intrusive rules about data processing and data use has become the new fad in policymaking. Many are lending an ear to tempting yet ill-advised proposals to treat personal data as a traditional finite resource. The latest example can be found in an article, A Blueprint for a Better Digital Society, by Glen Weyl, an economist at Microsoft Research, and Jaron Lanier, a computer scientist and writer. Not content with Internet users being able to access many online services like Bing and Twitter for free, they want online users to be paid in cash for the data they provide. To say that this proposal is flawed is an understatement. It’s flawed for three main reasons: 1) consumers would lose significant shared value in exchange for minimal cash compensation; 2) higher-income individuals would benefit at the expense of the poor; and 3) transaction costs would increase substantially, further reducing value for consumers and limiting opportunities for businesses to innovate with the data.

Weyl and Lanier’s argument is motivated by the belief that because Internet users are getting so many valuable services—like search, email, maps, and social networking—for free, they must be paying with their data. Therefore, they argue, if users are paying with their data, they should get something in return. Never mind that they do get something in return: valuable digital services that they do not pay for monetarily. But Weyl and Lanier say this is not enough, and consumers should get more.

While this idea may sound good on paper, in practice, it would be a disaster.

…Weyl and Lanier’s self-declared objective is to ensure digital dignity, but in practice this proposal would disrupt the equal treatment users receive from digital services today by valuing users based on their net worth. In this techno-socialist nirvana, to paraphrase Orwell, some pigs would be more equal than others. The French Data Protection Authority, CNIL, itself raised concerns about treating data as a commodity, warning that doing so would jeopardize society’s humanist values and fundamental rights which are, in essence, priceless.

To ensure “a better digital society,” companies should continue to be allowed to decide the best Internet business models based on what consumers demand. Data is neither cash nor a commodity, and pursuing policies based on this misconception will damage the digital economy and make the lives of digital consumers considerably worse….(More)”.

Beyond the IRB: Towards a typology of research ethics in applied economics


Paper by Michler, Jeffrey D., Masters, William A. and Josephson, Anna: “Conversations about ethics often appeal to those responsible for the ethical behavior, encouraging adoption of “better,” more ethical conduct. In this paper, we consider an alternative frame: a typology of ethical misconduct, focusing on who are the victims of various types of unethical behavior. The typology is constructed around 1) who may be harmed and 2) by what mechanism an individual or party is harmed. Building a typology helps to identify times in the life cycle of a research idea where differences exist between who is potentially harmed and who the existing ethical norms protect.

We discuss ethical practices including IRB approvals, which focus almost entirely on risks to subjects; pre-analysis plans and conflict of interest disclosures, which encourage transparency so as not to mislead editors, reviewers, and readers; and self-plagiarism, which has become increasingly common as authors slice their research ever more thinly, causing congestion in journals at the expense of others….(More)”.

Seven design principles for using blockchain for social impact


Stefaan Verhulst at Apolitical: “2018 will probably be remembered as the bust of the blockchain hype. Yet even as crypto currencies continue to sink in value and popular interest, the potential of using blockchain technologies to achieve social ends remains important to consider but poorly understood.

In 2019, business will continue to explore blockchain for sectors as disparate as finance, agriculture, logistics and healthcare. Policymakers and social innovators should also leverage 2019 to become more sophisticated about blockchain’s real promise, limitations and current practice.

In a recent report I prepared with Andrew Young, with the support of the Rockefeller Foundation, we looked at the potential risks and challenges of using blockchain for social change — or “Blockchan.ge.” A number of implementations and platforms are already demonstrating potential social impact.

The technology is now being used to address issues as varied as homelessness in New York City, the Rohingya crisis in Myanmar and government corruption around the world.

In an illustration of the breadth of current experimentation, Stanford’s Center for Social Innovation recently analysed and mapped nearly 200 organisations and projects trying to create positive social change using blockchain. Likewise, the GovLab is developing a mapping of blockchange implementations across regions and topic areas; it currently contains 60 entries.

All these examples provide impressive — and hopeful — proof of concept. Yet despite the very clear potential of blockchain, there has been little systematic analysis. For what types of social impact is it best suited? Under what conditions is it most likely to lead to real social change? What challenges does blockchain face, what risks does it pose and how should these be confronted and mitigated?

These are just some of the questions our report, which builds its analysis on 10 case studies assembled through original research, seeks to address.

While the report is focused on identity management, it contains a number of lessons and insights that are applicable more generally to the subject of blockchange.

In particular, it contains seven design principles that can guide individuals or organisations considering the use of blockchain for social impact. We call these the Genesis principles, and they are outlined at the end of this article…(More)”.