The 2020 Edelman Trust Barometer


Edelman: “The 2020 Edelman Trust Barometer reveals that despite a strong global economy and near full employment, none of the four societal institutions that the study measures—government, business, NGOs and media—is trusted. The cause of this paradox can be found in people’s fears about the future and their role in it, which are a wake-up call for our institutions to embrace a new way of effectively building trust: balancing competence with ethical behavior…

Since Edelman began measuring trust 20 years ago, trust has been spurred by economic growth. This continues in Asia and the Middle East, but not in developed markets, where income inequality is now the more important factor. A majority of respondents in every developed market do not believe they will be better off in five years’ time, and more than half of respondents globally believe that capitalism in its current form is now doing more harm than good in the world. The result is a world of two different trust realities. The informed public—wealthier, more educated, and frequent consumers of news—remain far more trusting of every institution than the mass population. In a majority of markets, less than half of the mass population trust their institutions to do what is right. There are now a record eight markets showing all-time-high gaps between the two audiences—an alarming trust inequality…

Distrust is being driven by a growing sense of inequity and unfairness in the system. The perception is that institutions increasingly serve the interests of the few over those of everyone else. Government, more than any other institution, is seen as least fair; 57 percent of the general population say government serves the interests of only the few, while 30 percent say government serves the interests of everyone….

Against the backdrop of growing cynicism around capitalism and the fairness of our current economic systems are deep-seated fears about the future. Specifically, 83 percent of employees say they fear losing their job, attributing it to the gig economy, a looming recession, a lack of skills, cheaper foreign competitors, immigrants who will work for less, automation, or jobs being moved to other countries….(More)”.

Tech groups cannot be allowed to hide from scrutiny


Marietje Schaake at the Financial Times: “Technology companies have governments over a barrel. Whether they are maximising traffic flow efficiency, matching pupils with their school preferences, or trying to anticipate drought based on satellite and soil data, most governments heavily rely on critical infrastructure and artificial intelligence developed by the private sector. This growing dependence has profound implications for democracy.

An unprecedented information asymmetry is growing between companies and governments. We can see this in the long-running investigation into interference in the 2016 US presidential elections. Companies build voter registries, voting machines and tallying tools, while social media companies sell precisely targeted advertisements using information gleaned by linking data on friends, interests, location, shopping and search.

This has big privacy and competition implications, yet oversight is minimal. Governments, researchers and citizens risk being blindsided by the machine room that powers our lives and vital aspects of our democracies. Governments and companies have fundamentally different incentives on transparency and accountability.

While openness is the default and secrecy the exception for democratic governments, companies resist providing transparency about their algorithms and business models. Many of them actively prevent accountability, citing rules that protect trade secrets.

We must revisit these protections when they shield companies from oversight. There is a place for protecting proprietary information from commercial competitors, but the scope and context need to be clarified and balanced when they have an impact on democracy and the rule of law.

Regulators must act to ensure that those designing and running algorithmic processes do not abuse trade secret protections. Tech groups also use the EU’s General Data Protection Regulation to deny access to company information. Although the regulation was enacted to protect citizens against the mishandling of personal data, it is now being wielded cynically to deny scientists access to data sets for research. The European Data Protection Supervisor has intervened, but problems could recur. To mitigate concerns about the power of AI, provider companies routinely promise that the applications will be understandable, explainable, accountable, reliable, contestable, fair and — don’t forget — ethical.

Yet there is no way to test these subjective notions without access to the underlying data and information. Without clear benchmarks and information to match, proper scrutiny of the way vital data is processed and used will be impossible….(More)”.

One Nation Tracked: An investigation into the smartphone tracking industry


Stuart A. Thompson and Charlie Warzel at the New York Times: “…For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions of people and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself….(More)”.

Between Truth and Power: The Legal Constructions of Informational Capitalism


Book by Julie Cohen: “Our current legal system is to a great extent the product of an earlier period of social and economic transformation. From the late nineteenth century through the mid-twentieth century, as accountability for industrial-age harms became a pervasive source of conflict, the U.S. legal system underwent profound, tectonic shifts. Today, ownership of information-age resources and accountability for information-age harms have become pervasive sources of conflict, and different kinds of change are emerging.

In Between Truth and Power, Julie E. Cohen explores the relationships between legal institutions and political and economic transformation. Systematically examining struggles over the conditions of information flow and the design of information architectures and business models, she argues that as law is enlisted to help produce the profound economic and socio-technical shifts that have accompanied the emergence of the informational economy, it too is transforming in fundamental ways. Drawing on elements from legal theory, science and technology studies, information studies, communication studies, and organization studies, Cohen develops a complex theory of institutional change: an account of the gradual emergence of legal institutions adapted to the information age and of the power relationships that such institutions reflect and reproduce….(More)”.

Technology & the Law of Corporate Responsibility – The Impact of Blockchain


Blogpost by Elizabeth Boomer: “Blockchain, a technology regularly associated with digital currency, is increasingly being utilized as a corporate social responsibility tool in major international corporations. This intersection of law, technology, and corporate responsibility was addressed earlier this month at the World Bank Law, Justice, and Development Week 2019, where the theme was Rights, Technology and Development. The law related to corporate responsibility for sustainable development is increasingly visible due in part to several lawsuits against large international corporations, alleging the use of child and forced labor. In addition, the United Nations has been working for some time on a treaty on business and human rights to encourage corporations to avoid “causing or contributing to adverse human rights impacts through their own activities and [to] address such impacts when they occur.”

De Beers, Volvo, and Coca-Cola, among other industry leaders, are using blockchain, a technology that allows digital information to be distributed and analyzed, but not copied or manipulated, to trace the source of materials and better manage their supply chains. These initiatives have come as welcome news in industries where child or forced labor in the supply chain can be hard to detect, e.g. conflict minerals, sugar, tobacco, and cacao. The issue is especially difficult when trying to trace the mining of cobalt for lithium ion batteries, increasingly used in electric cars, because the final product is not directly traceable to a single source.

While non-governmental organizations (NGOs) have been advocating for improved corporate performance in supply chains regarding labor and environmental standards for years, blockchain may be a technological tool that could reliably trace information regarding various products – from food to minerals – that go through several layers of suppliers before being certified as slave- or child-labor-free.
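The traceability property described above rests on hash chaining: each custody record is stored with the hash of the record before it, so altering any earlier entry invalidates every later one. The following sketch illustrates the idea in Python; the custody records and site names are hypothetical, and a real supply-chain blockchain would add distributed consensus and digital signatures on top of this basic structure.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a supply-chain record with the hash of the previous block."""
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"body": body, "hash": digest}

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64  # conventional genesis value
    for block in chain:
        if block["body"]["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(block["body"], sort_keys=True).encode()
        ).hexdigest()
        if recomputed != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

# Hypothetical chain-of-custody records for a batch of cobalt
chain = []
prev = "0" * 64
for record in [
    {"stage": "mine", "site": "certified-site-A"},
    {"stage": "smelter", "site": "refinery-B"},
    {"stage": "battery-maker", "site": "plant-C"},
]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify_chain(chain))  # intact chain verifies: True
chain[1]["body"]["record"]["site"] = "uncertified-site-X"
print(verify_chain(chain))  # tampering is detected: False
```

Because each block's hash covers the previous block's hash, a supplier cannot quietly rewrite an upstream record: every downstream participant's copy of the chain would fail verification.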

Child labor and forced labor are still common in some countries. The majority of countries worldwide have ratified International Labour Organization (ILO) Convention No. 182, prohibiting the worst forms of child labor (186 ratifications), as well as the ILO Convention prohibiting forced labor (No. 29, with 178 ratifications), and the abolition of forced labor (Convention No. 105, with 175 ratifications). However, the ILO estimates that approximately 40 million men and women are engaged in modern-day slavery and 152 million children are subject to child labor, 38% of whom are working in hazardous conditions. The enduring existence of forced labor and child labor raises difficult ethical questions, because in many contexts, the victim does not have a viable alternative livelihood….(More)”.

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, CrowdStrike, a US-based AI unicorn, focuses on cybersecurity, while BenevolentAI uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

Beyond the Valley


Book by Ramesh Srinivasan: “How to repair the disconnect between designers and users, producers and consumers, and tech elites and the rest of us: toward a more democratic internet.

In this provocative book, Ramesh Srinivasan describes the internet as both an enabler of frictionless efficiency and a dirty tangle of politics, economics, and other inefficient, inharmonious human activities. We may love the immediacy of Google search results, the convenience of buying from Amazon, and the elegance and power of our Apple devices, but it’s a one-way, top-down process. We’re not asked for our input, or our opinions—only for our data. The internet is brought to us by wealthy technologists in Silicon Valley and China. It’s time, Srinivasan argues, that we think in terms beyond the Valley.

Srinivasan focuses on the disconnection he sees between designers and users, producers and consumers, and tech elites and the rest of us. The recent Cambridge Analytica and Russian misinformation scandals exemplify the imbalance of a digital world that puts profits before inclusivity and democracy. In search of a more democratic internet, Srinivasan takes us to the mountains of Oaxaca, East and West Africa, China, Scandinavia, North America, and elsewhere, visiting the “design labs” of rural, low-income, and indigenous people around the world. He talks to a range of high-profile public figures—including Elizabeth Warren, David Axelrod, Eric Holder, Noam Chomsky, Lawrence Lessig, and the founders of Reddit, as well as community organizers, labor leaders, and human rights activists. To make a better internet, Srinivasan says, we need a new ethic of diversity, openness, and inclusivity, empowering those now excluded from decisions about how technologies are designed, who profits from them, and who are surveilled and exploited by them….(More)”

Data Ownership: Exploring Implications for Data Privacy Rights and Data Valuation


Hearing by the Senate Committee on Banking, Housing and Urban Affairs: “…As a result of an increasingly digital economy, more personal information is available to companies than ever before. Private companies are collecting, processing, analyzing and sharing considerable data on individuals for all kinds of purposes.

There have been many questions about what personal data is being collected, how it is being collected, with whom it is being shared and how it is being used, including in ways that affect individuals’ financial lives.

Given the vast amount of personal information flowing through the economy, individuals need real control over their personal data. This Committee has held a series of data privacy hearings exploring possible frameworks for facilitating privacy rights for consumers. Nearly all have included references to data as a new currency or commodity.

The next question, then, is who owns it? There has been much debate about the concept of data ownership, the monetary value of personal information and its potential role in data privacy….The witnesses will be:

  1. Mr. Jeffrey Ritter, Founding Chair, American Bar Association Committee on Cyberspace Law; External Lecturer
  2. Mr. Chad Marlow, Senior Advocacy and Policy Counsel, American Civil Liberties Union
  3. Mr. Will Rinehart, Director of Technology and Innovation Policy, American Action Forum
  4. Ms. Michelle Dennedy, Chief Executive Officer, DrumWave Inc.

Should Consumers Be Able to Sell Their Own Personal Data?


The Wall Street Journal: “People around the world are confused and concerned about what companies do with the data they collect from their interactions with consumers.

A global survey conducted last fall by the research firm Ipsos gives a sense of the scale of people’s worries and uncertainty. Roughly two-thirds of those surveyed said they knew little or nothing about how much data companies held about them or what companies did with that data. And only about a third of respondents on average said they had at least a fair amount of trust that a variety of corporate and government organizations would use the information they had about them in the right way….

Christopher Tonetti, an associate professor of economics at Stanford Graduate School of Business, says consumers should own and be able to sell their personal data. Cameron F. Kerry, a visiting fellow at the Brookings Institution and former general counsel and acting secretary of the U.S. Department of Commerce, opposes the idea….

YES: It Would Encourage Sharing of Data—a Plus for Consumers and Society…Data isn’t like other commodities in one fundamental way—it doesn’t diminish with use. And that difference is the key to why consumers should own the data that’s created when they interact with companies, and have the right to sell it.

NO: It Would Do Little to Help Consumers, and Could Leave Them Worse Off Than Now…

But owning data will do little to help consumers’ privacy—and may well leave them worse off. Meanwhile, consumer property rights would create enormous friction for valid business uses of personal information and for the free flow of information we value as a society.

In our current system, consumers reflexively click away rights to data in exchange for convenience, free services, connection, endorphins or other motivations. In a market where consumers could sell or license personal information they generate from web browsing, ride-sharing apps and other digital activities, is there any reason to expect that they would be less motivated to share their information? …(More)”.

The Ethics of Big Data Applications in the Consumer Sector


Paper by Markus Christen et al.: “Business applications relying on processing of large amounts of heterogeneous data (Big Data) are considered to be key drivers of innovation in the digital economy. However, these applications also pose ethical issues that may undermine the credibility of data-driven businesses. In our contribution, we discuss ethical problems that are associated with Big Data, such as: How are core values like autonomy, privacy, and solidarity affected in a Big Data world? Are some data a public good? Or: Are we obliged to divulge personal data to a certain degree in order to make the society more secure or more efficient?

We answer those questions by first outlining the ethical topics that are discussed in the scientific literature and the lay media using a bibliometric approach. Second, referring to the results of expert interviews and workshops with practitioners, we identify core norms and values affected by Big Data applications—autonomy, equality, fairness, freedom, privacy, property rights, solidarity, and transparency—and outline how they are exemplified in examples of Big Data consumer applications, for example, in terms of informational self-determination, non-discrimination, or free opinion formation. Based on use cases such as personalized advertising, individual pricing, or credit risk management we discuss the process of balancing such values in order to identify legitimate, questionable, and unacceptable Big Data applications from an ethics point of view. We close with recommendations on how practitioners working in applied data science can deal with ethical issues of Big Data….(More)”.