The Flip Side of Free: Understanding the Economics of the Internet


Book by Michael Kende: “The upside of the Internet is free Wi-Fi at Starbucks, FaceTime over long distances, and nearly unlimited data for downloading or streaming. The downside is that our data goes to companies that use it to make money, our financial information is exposed to hackers, and the market power of technology companies continues to increase. In The Flip Side of Free, Michael Kende shows that free Internet comes at a price. We’re beginning to realize this. Our all-purpose techno-caveat is “I love my smart speaker,” but is it really tracking everything I do? Listening to everything I say?

Kende explains the unique economics of the Internet and the paradoxes that result. The most valuable companies in the world are now Internet companies, built on data often exchanged for free content and services. Many users know the impact of this trade-off on privacy but continue to use the services anyway. Moreover, although the Internet lowers barriers for companies to enter markets, it is hard to compete with the largest providers. We complain about companies having too much data, but developing countries without widespread Internet usage may suffer from the reverse: not enough data collection for the development of advanced services—which leads to a worsening data divide between developed and developing countries.

What’s the future of free? Data is the price of free service, and the new currency of the Internet age. There’s nothing necessarily wrong with free, Kende says, as long as we anticipate and try to mitigate what’s on the flip side…(More)”.

How Digital Trust Varies Around the World


Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi at Harvard Business Review: “As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics…(More)”.
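
The study’s own scoring methodology is not reproduced in the excerpt, but the basic mechanics of a composite index like this are easy to illustrate. Below is a minimal sketch: indicator values, the equal weighting, and the two-indicators-per-component structure are all invented for illustration; the actual study draws on almost 200 indicators with its own weighting.

```python
# Sketch of a composite "digital trust" index: average the indicator
# scores within each of the four components, then rank economies on
# the overall average. All values and weights here are illustrative.

COMPONENTS = ["security", "experience", "attitudes", "behavior"]

# Hypothetical 0-100 indicator scores per economy and component.
economies = {
    "Economy A": {"security": [82, 75], "experience": [68, 71],
                  "attitudes": [60, 64], "behavior": [77, 80]},
    "Economy B": {"security": [55, 49], "experience": [72, 70],
                  "attitudes": [81, 78], "behavior": [66, 59]},
}

def component_score(indicators):
    """Equal-weight average of the indicators in one component."""
    return sum(indicators) / len(indicators)

def overall_score(profile):
    """Equal-weight average across the four component scores."""
    return sum(component_score(profile[c]) for c in COMPONENTS) / len(COMPONENTS)

ranking = sorted(economies, key=lambda e: overall_score(economies[e]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(overall_score(economies[name]), 1))
```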

Building trust in AI systems is essential


Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.

A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology. 

Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts. 

The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.

The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”

Practical Fairness


Book by Aileen Nielsen: “Fairness is becoming a paramount consideration for data scientists. Mounting evidence indicates that the widespread deployment of machine learning and AI in business and government is reproducing the same biases we’re trying to fight in the real world. But what does fairness mean when it comes to code? This practical book covers basic concerns related to data security and privacy to help data and AI professionals use code that’s fair and free of bias.

Many realistic best practices are emerging at all steps along the data pipeline today, from data selection and preprocessing to closed model audits. Author Aileen Nielsen guides you through technical, legal, and ethical aspects of making code fair and secure, while highlighting up-to-date academic research and ongoing legal developments related to fairness and algorithms.

  • Identify potential bias and discrimination in data science models
  • Use preventive measures to minimize bias when developing data modeling pipelines
  • Understand what data pipeline components implicate security and privacy concerns
  • Write data processing and modeling code that implements best practices for fairness
  • Recognize the complex interrelationships between fairness, privacy, and data security created by the use of machine learning models
  • Apply normative and legal concepts relevant to evaluating the fairness of machine learning models…(More)”.
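
The book’s own examples are not reproduced in this excerpt, but the first bullet, identifying potential bias, can be illustrated with one standard check from the fairness literature: the disparate impact ratio. Whether the book uses this exact metric is not clear from the excerpt; the sketch below assumes binary predictions and a binary protected attribute, with toy data throughout.

```python
# Disparate impact ratio: the rate of favorable model outcomes for the
# unprivileged group divided by the rate for the privileged group.
# A common rough rule of thumb flags ratios below 0.8 for review.

def favorable_rate(predictions, groups, group):
    """Share of favorable (1) predictions within one group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(predictions, groups, privileged, unprivileged):
    return (favorable_rate(predictions, groups, unprivileged)
            / favorable_rate(predictions, groups, privileged))

# Toy data: 1 = favorable outcome (say, a loan approval).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, privileged="a", unprivileged="b")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67 here, below 0.8
```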

Governance of Data Sharing: a Law & Economics Proposal


Paper by Jens Prüfer and Inge Graef: “To prevent market tipping, which inhibits innovation, there is an urgent need to mandate sharing of user information in data-driven markets. Existing legal mechanisms to impose data sharing under EU competition law and data portability under the GDPR are not sufficient to tackle this problem. Mandated data sharing requires the design of a governance structure that combines elements of economically efficient centralization with legally necessary decentralization. We identify three feasible options. One is to centralize investigations and enforcement in a European Data Sharing Agency (EDSA), while decision-making power lies with National Competition Authorities in a Board of Supervisors. The second option is to set up a Data Sharing Cooperation Network coordinated through a European Data Sharing Board, with the National Competition Authority best placed to run the investigation also adjudicating and enforcing the mandatory data-sharing decision across the EU. A third option is to mix both governance structures, tasking national authorities with investigation and adjudication and the EU-level EDSA with enforcement of data sharing….(More)”

Consumer Bureau To Decide Who Owns Your Financial Data


Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.

The Consumer Financial Protection Bureau (CFPB) is looking to implement the portion of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to consumers’ rights to their own financial data, detailed in Section 1033.

The agency has been laying the groundwork for this move for years, from requesting information from financial institutions in 2016 to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.
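
For readers unfamiliar with the practice, screen scraping means a data aggregator logs in with the consumer’s own credentials and parses pages built for humans, rather than calling an API built for machines. The sketch below is purely schematic: every URL, field name, and piece of markup is hypothetical and reflects no real bank’s interface.

```python
# Schematic contrast between screen scraping and tokenized API access.
# All endpoints, field names, and markup here are hypothetical.
import re

import requests

def parse_balances_from_html(html):
    """Fragile by design: pull balances out of markup meant for humans."""
    return re.findall(r'class="balance">([\d.,]+)<', html)

def via_screen_scraping(username, password):
    # The aggregator holds the consumer's actual credentials, which
    # grant unlimited account access, and the parsing breaks whenever
    # the bank redesigns its pages.
    session = requests.Session()
    session.post("https://bank.example/login",
                 data={"user": username, "pass": password})
    html = session.get("https://bank.example/accounts").text
    return parse_balances_from_html(html)

def via_api(access_token):
    # A scoped, revocable token: the consumer never shares a password,
    # and the bank returns structured data built for machines.
    resp = requests.get("https://api.bank.example/v1/accounts",
                        headers={"Authorization": f"Bearer {access_token}"})
    return resp.json()["accounts"]
```

The first pattern is what worries regulators: the consumer’s password travels to a third party and confers full account access, and the scraping fails silently when the site changes.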

Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.

With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices to ensure consumers have access to their financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.

Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.

Commission proposes measures to boost data sharing and support European data spaces


Press Release: “To better exploit the potential of ever-growing data in a trustworthy European framework, the Commission today proposes new rules on data governance. The Regulation will facilitate data sharing across the EU and between sectors to create wealth for society, increase the control and trust of both citizens and companies regarding their data, and offer an alternative European model to the data-handling practices of the major tech platforms.

The amount of data generated by public bodies, businesses and citizens is constantly growing. It is expected to increase fivefold between 2018 and 2025. These new rules will allow this data to be harnessed and will pave the way for sectoral European data spaces to benefit society, citizens and companies. In the Commission’s data strategy of February this year, nine such data spaces were proposed, ranging from industry to energy, and from health to the European Green Deal. They will, for example, contribute to the green transition by improving the management of energy consumption, make delivery of personalised medicine a reality, and facilitate access to public services.

The Regulation includes:

  • A number of measures to increase trust in data sharing, as the lack of trust is currently a major obstacle and results in high costs.
  • New EU rules on neutrality to allow novel data intermediaries to function as trustworthy organisers of data sharing.
  • Measures to facilitate the reuse of certain data held by the public sector. For example, the reuse of health data could advance research to find cures for rare or chronic diseases.
  • Means to give Europeans control over the use of the data they generate, by making it easier and safer for companies and individuals to voluntarily make their data available for the wider common good under clear conditions….(More)”.

AI’s Wide Open: A.I. Technology and Public Policy


Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.

AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.
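
The redundancy point is concrete in the authors’ first example: the 737 Max’s MCAS system initially acted on a single angle-of-attack sensor. Below is a minimal sketch of what “many sensors and signals” can mean in practice; the disagreement threshold and readings are illustrative inventions, not the authors’ code.

```python
# Cross-check redundant sensor readings before letting an automated
# system act on them. The threshold and readings are illustrative.
from statistics import median

DISAGREEMENT_LIMIT = 5.0  # widest tolerated spread, in sensor units

def fused_reading(readings):
    """Return the median when redundant sensors agree, else None.

    A wide spread means at least one sensor is suspect, so the
    automated system should stand down and defer to a human.
    """
    if max(readings) - min(readings) > DISAGREEMENT_LIMIT:
        return None
    return median(readings)

print(fused_reading([4.8, 5.1, 5.0]))   # sensors agree -> 5.0
print(fused_reading([4.8, 5.1, 22.4]))  # one outlier -> None
```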

Taming Complexity


Martin Reeves, Simon Levin, Thomas Fink, and Ania Levina at Harvard Business Review: “….“Complexity” is one of the most frequently used terms in business but also one of the most ambiguous. Even in the sciences it has numerous definitions. For our purposes, we’ll define it as a large number of different elements (such as specific technologies, raw materials, products, people, and organizational units) that have many different connections to one another. Both qualities can be a source of advantage or disadvantage, depending on how they’re managed.

Let’s look at their strengths. To begin with, having many different elements increases the resilience of a system. A company that relies on just a few technologies, products, and processes—or that is staffed with people who have very similar backgrounds and perspectives—doesn’t have many ways to react to unforeseen opportunities and threats. What’s more, the redundancy and duplication that also characterize complex systems typically give them more buffering capacity and fallback options.

Ecosystems with a diversity of elements benefit from adaptability. In biology, genetic diversity is the grist for natural selection, nature’s learning mechanism. In business, as environments shift, sustained performance requires new offerings and capabilities—which can be created by recombining existing elements in fresh ways. For example, the fashion retailer Zara introduces styles (combinations of components) in excess of immediate needs, allowing it to identify the most popular products, create a tailored selection from them, and adapt to fast-changing fashion as a result.
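
Mechanically, the Zara example is an overproduce-and-select loop: launch more variants than you need, observe early demand, and reorder only the winners. A minimal sketch of that selection logic follows; the styles, sales figures, and cutoff fraction are all invented for illustration.

```python
# Overproduce-and-select: launch many trial styles, observe early
# demand, and restock only the best performers. Numbers are invented.

def select_winners(early_sales, keep_fraction=1/3):
    """Keep the best-selling fraction of trial styles for reorder."""
    ranked = sorted(early_sales, key=early_sales.get, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

early_sales = {  # units sold in the first week, per trial style
    "style-01": 420, "style-02": 95, "style-03": 310,
    "style-04": 57, "style-05": 780, "style-06": 140,
}

print(select_winners(early_sales))  # -> ['style-05', 'style-01']
```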

Another advantage that complexity can confer on natural ecosystems is better coordination. That’s because the elements are often highly interconnected. Flocks of birds or herds of animals, for instance, share behavioral protocols that connect the members to one another and enable them to move and act as a group rather than as an uncoordinated collection of individuals. Thus they realize benefits such as collective security and more-effective foraging.

Finally, complexity can confer inimitability. Whereas individual elements may be easily copied, the interrelationships among multiple elements are hard to replicate. A case in point is Apple’s attempt in 2012 to compete with Google Maps. Apple underestimated the complexity of Google’s offering, leading to embarrassing glitches in the initial versions of its map app, which consequently struggled to gain acceptance with consumers. The same is true of a company’s strategy: If its complexity makes it hard to understand, rivals will struggle to imitate it, and the company will benefit….(More)”.

Using Data and Respecting Users


“Three technical and legal approaches that create value from data and foster user trust” by Marshall Van Alstyne and Alisa Dagan Lenart: “Transaction data is like a friendship tie: both parties must respect the relationship, and if one party exploits it, the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users or they will sour their ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is a serious cause for concern. Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices. This column provides a simple framework and three ways to respectfully improve data use….(More)”