Consumer Reports Study Finds Marketplace Demand for Privacy and Security


Press Release: “American consumers are increasingly concerned about privacy and data security when purchasing new products and services, which may give a competitive advantage to companies that act on these consumer values, a new Consumer Reports study finds.

The new study, “Privacy Front and Center,” from CR’s Digital Lab with support from Omidyar Network, looks at the commercial benefits for companies that differentiate their products based on privacy and data security. The study draws from a nationally representative CR survey of 5,085 adult U.S. residents conducted in February 2020, a meta-analysis of 25 years of public opinion studies, and a conjoint analysis that seeks to quantify how consumers weigh privacy and security in their hardware and software purchasing decisions.
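The conjoint analysis the press release mentions is a standard technique: respondents rate or choose among product profiles that vary along attributes (price, privacy features, and so on), and a regression recovers a part-worth utility for each attribute. CR’s actual model is not described here, so the sketch below is purely illustrative, with made-up profiles and ratings:

```python
# Illustrative ratings-based conjoint analysis (NOT CR's actual model or data):
# estimate part-worth utilities by least squares, then express the privacy
# part-worth as an implied willingness-to-pay in dollars.
import numpy as np

# Hypothetical product profiles: [intercept, strong_privacy (0/1), price ($)]
X = np.array([
    [1, 1, 100],
    [1, 1, 150],
    [1, 1, 200],
    [1, 0, 100],
    [1, 0, 150],
    [1, 0, 200],
], dtype=float)

# Hypothetical mean ratings (0-10 scale) for the six profiles above
y = np.array([8.0, 7.0, 6.0, 6.5, 5.5, 4.5])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, u_privacy, u_price = beta  # part-worth utilities

# Utility gained from strong privacy, divided by utility lost per dollar,
# gives the price premium that leaves a respondent indifferent.
wtp = -u_privacy / u_price
print(f"privacy part-worth: {u_privacy:.2f}, implied WTP: ${wtp:.2f}")
```

In this toy data set the privacy-protecting version rates 1.5 points higher and each dollar of price costs 0.02 points of utility, so the two effects cancel at a $75 surcharge; it is this kind of implied dollar figure that lets a study connect privacy attitudes to purchasing decisions.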

“This study shows that raising the standard for privacy and security is a win-win for consumers and the companies,” said Ben Moskowitz, the director of the Digital Lab at Consumer Reports. “Given the rapid proliferation of internet connected devices, the rise in data breaches and cyber attacks, and the demand from consumers for heightened privacy and security measures, there’s an undeniable business case for companies to invest in creating more private and secure products.” 

Here are some of the key findings from the study:

  • According to CR’s February 2020 nationally representative survey, 74% of consumers are at least moderately concerned about the privacy of their personal data.
  • Nearly all Americans (96%) agree that more should be done to ensure that companies protect the privacy of consumers.
  • A majority of smart product owners (62%) worry about potential loss of privacy when buying them for their home or family.
  • The privacy/security-conscious consumer class seems to include more men and people of color.
  • Experiencing a data breach correlates with a higher willingness to pay for privacy, and 30% of Americans have experienced one.
  • Of the Android users who switched to iPhones, 32% said they did so because of Apple’s perceived privacy or security benefits relative to Android….(More)”.

Data Sharing 2.0: New Data Sharing, New Value Creation


MIT CISR research: “…has found that interorganizational data sharing is a top concern of companies; leaders often find data sharing costly, slow, and risky. Interorganizational data sharing, however, is requisite for new value creation in the digital economy. Digital opportunities require data sharing 2.0: cross-company sharing of complementary data assets and capabilities, which fills data gaps and allows companies, often collaboratively, to develop innovative solutions. This briefing introduces three sets of practices—curated content, designated channels, and repeatable controls—that help companies accelerate data sharing 2.0….(More)”.

When Do We Trust AI’s Recommendations More Than People’s?


Chiara Longoni and Luca Cian at Harvard Business School: “More and more companies are leveraging technological advances in machine learning, natural language processing, and other forms of artificial intelligence to provide relevant and instant recommendations to consumers. From Amazon to Netflix to REX Real Estate, firms are using AI recommenders to enhance the customer experience. AI recommenders are also increasingly used in the public sector to guide people to essential services. For example, the New York City Department of Social Services uses AI to give citizens recommendations on disability benefits, food assistance, and health insurance.

However, simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental. When do consumers trust the word of a machine, and when do they resist it? Our research suggests that the key factor is whether consumers are focused on the functional and practical aspects of a product (its utilitarian value) or focused on the experiential and sensory aspects of a product (its hedonic value).

In an article in the Journal of Marketing — based on data from over 3,000 people who took part in 10 experiments — we provide evidence for what we call a word-of-machine effect: the circumstances in which people prefer AI recommenders to human ones.

The word-of-machine effect.

The word-of-machine effect stems from a widespread belief that AI systems are more competent than humans at dispensing advice when utilitarian qualities are desired, and less competent when hedonic qualities are desired. Importantly, the word-of-machine effect is based on a lay belief that does not necessarily correspond to reality. The fact of the matter is that humans are not necessarily less competent than AI at assessing and evaluating utilitarian attributes. Conversely, AI is not necessarily less competent than humans at assessing and evaluating hedonic attributes….(More)”.

Digital Disruption


Book by Bharat Vagadia, subtitled “Implications and Opportunities for Economies, Society, Policy Makers and Business Leaders”: “This book goes beyond the hype, delving into the real-world technologies and applications that are driving our future and examining the possible impact these changes will have on industries, economies and society at large. It details the actions governments and regulators must take in order to ensure these changes bring about positive benefits to the public without stifling innovation that may well be the future source of value creation. It examines how organisations in a world of digital ecosystems, where industry boundaries are blurring, must undertake radical digital transformation to survive and thrive in this new digital world. The reader is taken through a framework that critically examines (i) Digital Connectivity including 5G and IoT; (ii) Data Capture and Distribution which includes smart connected verticals; (iii) Data Integrity, Control and Tokenisation that includes cyber security, digital signatures, blockchain, smart contracts, digital assets and cryptocurrencies; (iv) Data Processing and Artificial Intelligence; and (v) Disruptive Applications which include platforms, virtual and augmented reality, drones, autonomous vehicles, digital twins and digital assistants…(More)”.

How Competition Impacts Data Privacy


Paper by Aline Blankertz: “A small number of large digital platforms increasingly shape the space for most online interactions around the globe and they often act with hardly any constraint from competing services. The lack of competition puts those platforms in a powerful position that may allow them to exploit consumers and offer them limited choice. Privacy is increasingly considered one area in which the lack of competition may create harm. Because of these concerns, governments and other institutions are developing proposals to expand the scope for competition authorities to intervene to limit the power of the large platforms and to revive competition.  


The first case to explicitly address anticompetitive harm to privacy is the German Bundeskartellamt’s case against Facebook, in which the authority argues that imposing bad privacy terms can amount to an abuse of dominance. Since that case started in 2016, more cases have emerged that deal with the link between competition and privacy. For example, the proposed Google/Fitbit merger has raised concerns about sensitive health data being merged with existing Google profiles, and Apple is under scrutiny for not sharing certain personal data while using it for its own services.

However, addressing bad privacy outcomes through competition policy is effective only if those outcomes are caused, at least in part, by a lack of competition. Six mechanisms can be distinguished through which competition may affect privacy, as summarized in Table 1. These mechanisms constitute different hypotheses about how less competition may influence privacy outcomes, leading either to worse privacy in different ways (mechanisms 1-5) or even to better privacy (mechanism 6). The table also summarizes the available evidence on whether and to what extent the hypothesized effects are present in actual markets….(More)”.

Business-to-Business Data Sharing: An Economic and Legal Analysis


Paper by Bertin Martens et al: “The European Commission announced in its Data Strategy (2020) its intention to propose an enabling legislative framework for the governance of common European data spaces, to review and operationalize data portability, to prioritize standardization activities and foster data interoperability, and to clarify usage rights for co-generated IoT data. This Strategy starts from the premise that there is not enough data sharing and that much data remains locked up, unavailable for innovative re-use. The Commission will also consider the adoption of a New Competition Tool as well as ex ante regulation for large online gate-keeping platforms as part of the announced Digital Services Act Package. In this context, the goal of this report is to examine the obstacles to Business-to-Business (B2B) data sharing: what keeps businesses from sharing or trading more of their data with other businesses, and what can be done about it? For this purpose, the report uses the well-known tools of legal and economic thinking about market failures. It starts from the economic characteristics of data and explores to what extent private B2B data markets result in a socially optimal degree of data sharing, or whether there are market failures in data markets that might justify public policy intervention.

It examines the conditions under which monopolistic data market failures may occur. It contrasts these welfare losses with the welfare gains from economies of scope in data aggregation in large pools. It also discusses other potential sources of B2B data market failures due to negative externalities, risks, transaction costs, and asymmetric information. As a next step, the paper explores solutions to overcome these market failures. Private third-party data intermediaries may be in a position to overcome failures caused by high transaction costs and risks. They can aggregate data in large pools to harvest the benefits of economies of scale and scope in data. Where third-party intervention fails, regulators can step in, with ex-post competition instruments and with ex-ante regulation. The latter includes data portability rights for personal data and mandatory data access rights….(More)”.

How to Destroy Surveillance Capitalism


Book by Cory Doctorow: “…Today, there is a widespread belief that machine learning and commercial surveillance can turn even the most fumble-tongued conspiracy theorist into a svengali who can warp your perceptions and win your belief by locating vulnerable people and then pitching them with A.I.-refined arguments that bypass their rational faculties and turn everyday people into flat Earthers, anti-vaxxers, or even Nazis. When the RAND Corporation blames Facebook for “radicalization” and when Facebook’s role in spreading coronavirus misinformation is blamed on its algorithm, the implicit message is that machine learning and surveillance are causing the changes in our consensus about what’s true.

After all, in a world where sprawling and incoherent conspiracy theories like Pizzagate and its successor, QAnon, have widespread followings, something must be afoot.

But what if there’s another explanation? What if it’s the material circumstances, and not the arguments, that are making the difference for these conspiracy pitchmen? What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?

If it’s trauma and not contagion — material conditions and not ideology — that is making the difference today and enabling a rise of repulsive misinformation in the face of easily observed facts, that doesn’t mean our computer networks are blameless. They’re still doing the heavy work of locating vulnerable people and guiding them through a series of ever-more-extreme ideas and communities.

Belief in conspiracy is a raging fire that has done real damage and poses real danger to our planet and species, from epidemics kicked off by vaccine denial to genocides kicked off by racist conspiracies to planetary meltdown caused by denial-inspired climate inaction. Our world is on fire, and so we have to put the fires out — to figure out how to help people see the truth of the world through the conspiracies they’ve been confused by.

But firefighting is reactive. We need fire prevention. We need to strike at the traumatic material conditions that make people vulnerable to the contagion of conspiracy. Here, too, tech has a role to play.

There’s no shortage of proposals to address this. From the EU’s Terrorist Content Regulation, which requires platforms to police and remove “extremist” content, to the U.S. proposals to force tech companies to spy on their users and hold them liable for their users’ bad speech, there’s a lot of energy to force tech companies to solve the problems they created.

There’s a critical piece missing from the debate, though. All these solutions assume that tech companies are a fixture, that their dominance over the internet is a permanent fact. Proposals to replace Big Tech with a more diffused, pluralistic internet are nowhere to be found. Worse: The “solutions” on the table today require Big Tech to stay big because only the very largest companies can afford to implement the systems these laws demand….(More)”.

Too many AI researchers think real-world problems are not relevant


Essay by Hannah Kerner: “Any researcher who’s focused on applying machine learning to real-world problems has likely received a response like this one: “The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I’ve seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I’ve heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or—in the case of deep learning—a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word “application” seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences, and its authors’ only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then….(More)”.

Personal data, public data, privacy & power: GDPR & company data


OpenCorporates: “…there are three other aspects which are relevant when talking about access to EU company data.

Cargo-culting GDPR

The first is a tendency to take this complex and subtle legislation that is GDPR and use a poorly understood version of it in other legislation and regulation, even if that regulation is already covered by GDPR. This undermines the GDPR regime, prevents it from working effectively, and should be strongly resisted. In the tech world, such approaches are called ‘cargo-culting’.

Similarly GDPR is often used as an excuse for not releasing company information as open data, even when the same data is being sold to third parties apparently without concerns — if one is covered by GDPR, the other certainly should be.

Widened power asymmetries

The second issue is the unintended consequences of GDPR, specifically the way it increases asymmetries of power and agency. For example, something like the so-called Right To Be Forgotten takes very significant resources to implement, and so actually strengthens the position of the giant tech companies — for such companies, investing millions in large teams to decide who should and should not be given the Right To Be Forgotten is just a relatively small cost of doing business.

Another issue is the growth of a whole new industry dedicated to removing traces of people’s past from the internet, which is also increasing the asymmetries of power. The vast majority of people are not directors of companies, or beneficial owners, and it is only the relatively rich and powerful (including politicians and criminals) who can afford lawyers to stifle free speech, or remove parts of their past they would rather not be there, from business failures to associations with criminals.

OpenCorporates, for example, was threatened with a lawsuit by a member of one of the wealthiest families in Europe for reproducing a notice from the Luxembourg official gazette (a publication that contains public notices). We refused to back down, believing we had a good case in law and in the public interest, and the other side gave up. But such so-called SLAPP suits are becoming increasingly common, and unlike many US states, the EU currently has no defences in place to resist them, despite pressure from civil society to address this….

At the same time, the automatic assumption that all Personally Identifiable Information (PII), someone’s name for example, is private is highly problematic, confusing both citizens and policy makers, and further undermining democracies and fair societies. As an obvious case, it’s critical that we know the names of our elected representatives, and those in positions of power, otherwise we would have an opaque society where decisions are made by nameless individuals with opaque agendas and personal interests — such as a leader awarding a contract to their brother’s company, for example.

As the diagram in the original post illustrates, there is some personally identifiable information that it’s strongly in the public interest to know. Take the director or beneficial owner of a company, for example: of course their details are PII, but clearly you need to know their name (and other information too), otherwise what do you actually know about them, or about the company (only that some unnamed individual has been given special protection under law to be shielded from the company’s debts and actions, and yet can benefit from its profits)?

On the other hand, much of the data which is truly about our privacy — the profiles, inferences and scores that companies store on us — is explicitly outside GDPR, if it doesn’t contain PII.

Hopefully, as awareness of these issues increases, we will develop a more nuanced and deeper understanding of privacy, such that case law around GDPR, and successors to this legislation, begin to rebalance matters and bring clarity to its ambiguities….(More)”.

Terms of Disservice: How Silicon Valley is Destructive by Design


Book by Dipayan Ghosh: “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.

Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.

In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?

In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how to ensure that the Internet works for everyone, not just Silicon Valley….(More)”.