Nearly Half of Canadian Consumers Willing to Share Significant Personal Data with Banks and Insurers in Exchange for Lower Pricing, Accenture Study Finds


Press Release: “Nearly half of Canadian consumers would be willing to share significant personal information, such as location data and lifestyle information, with their bank and insurer in exchange for lower pricing on products and services, according to a new report from Accenture (NYSE: ACN).

Consumers willing to share personal data in select scenarios. (CNW Group/Accenture)

Accenture’s global Financial Services Consumer Study, based on a survey of 47,000 consumers in 28 countries, including 2,000 Canadians, found that more than half of consumers would share that data for benefits including more-rapid loan approvals, discounts on gym memberships and personalized offers based on their current location.

At the same time, however, Canadian consumers believe that privacy is paramount, with nearly three quarters (72 per cent) saying they are very cautious about the privacy of their personal data. In fact, when asked what would make them leave their bank or insurer, consumers ranked data security breaches as their second-biggest concern, behind only increasing costs.

“Canadian consumers are willing to share their personal data in instances where it makes their lives easier but remain cautious of exactly how their information is being used,” said Robert Vokes, managing director of financial services at Accenture in Canada. “With this in mind, banks and insurers need to deliver hyper-relevant and highly convenient experiences in order to remain relevant, retain trust and win customer loyalty in a digital economy.”

Consumers globally showed strong support for personalized insurance premiums, with 64 per cent interested in adjusted car insurance premiums based on safe driving and 52 per cent interested in life insurance premiums tied to a healthy lifestyle. Four in five consumers (79 per cent) would provide personal data, including income, location and lifestyle habits, to their insurer if they believed it would help reduce the possibility of injury or loss.

In banking, 81 per cent of consumers would be willing to share income, location and lifestyle habit data for rapid loan approval, and 76 per cent would do so to receive personalized offers based on their location, such as discounts from a retailer. Among Canadian consumers specifically, approximately two-fifths (42 per cent) want their bank to provide updates on how much money they have left based on that month’s spending, and 46 per cent want savings tips based on their spending habits.

Appetite for data sharing differs around the world

Appetite for sharing significant personal data with financial firms was highest in China, with 67 per cent of consumers there willing to share more data for personalized services. Half (50 per cent) of consumers in the U.S. said they were willing to share more data for personalized services, and in Europe — where the General Data Protection Regulation took effect in May — consumers were more skeptical. For instance, only 40 per cent of consumers in both the U.K. and Germany said they would be willing to share more data with banks and insurers in return for personalized services…(More)”.

Privacy’s not dead. It’s just not evenly distributed


Alex Pasternack in Fast Company: “In the face of all the data abuse, many of us have, quite reasonably, thrown up our hands. But privacy didn’t die. It’s just been beaten up, sold, obscured, diffused unevenly across society. What privacy is and why it matters increasingly depends upon who you are, your age, your income, gender, ethnicity, where you’re from, and where you live. To borrow William Gibson’s famous quote about the future and its unevenness and inequalities, privacy is alive—it’s just not evenly distributed. And while we don’t all care about it the same way—we’re even divided on what exactly privacy is—its harms are still real. Even when our own privacy isn’t violated, privacy violations can still hurt us.

Privacy is personal, from the creepy feeling that our phones are literally listening in, to the endless parade of data breaches that test our ability to care anymore. It’s the unsettling feeling of giving “consent” without knowing what that means, “agreeing” to contracts we didn’t read with companies we don’t really trust. (Forget about understanding all the details; researchers have shown that most privacy policies surpass the reading level of the average person.)

It’s the data about us that’s harvested, bought, sold, and traded by an obscure army of data brokers without our knowledge, feeding marketers, landlords, employers, immigration officials, insurance companies, debt collectors, as well as stalkers and who knows who else. It’s the body camera or the sports arena or the social network capturing your face for who knows what kind of analysis. Don’t think of personal data as just “data.” As it gets more detailed and more correlated, increasingly, our data is us.

And “privacy” isn’t just privacy. It’s also tied up with security, freedom, social justice, free speech, and free thought. Privacy harms aren’t only personal, but societal. It’s not just the multibillion-dollar industry that aims to nab you and nudge you, but the multibillion-dollar spyware industry that helps governments nab dissidents and send them to prison or worse. It’s the supposedly fair and transparent algorithms that aren’t, turning our personal data into risk scores that can help perpetuate race, class, and gender divides, often without our knowing it.

Privacy is about dark ads bought with dark money and the micro-targeting of voters by overseas propagandists or by political campaigns at home. That kind of influence isn’t just the promise of a shadowy Cambridge Analytica or state-run misinformation campaigns, but also the premise of modern-day digital ad campaigns. (Note that Facebook’s research division later hired one of the researchers behind the Cambridge app.) And as the micro-targeting gets more micro, the tech giants that deal in ads are only getting more macro….(More)”

(This story is part of The Privacy Divide, a series that explores the fault lines and disparities — economic, cultural, philosophical — that have developed around digital privacy and its impact on society.)

A Skeptical View of Information Fiduciaries


Paper by Lina Khan and David Pozen: “The concept of “information fiduciaries” has surged to the forefront of debates on online platform regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on corporations such as Facebook, Google, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

This Essay seeks to disrupt the emerging consensus by identifying a number of lurking tensions and ambiguities in the theory of information fiduciaries, as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that the harms stemming from dominant online platforms call for legal intervention, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information insecurity that he stresses, much less to more fundamental problems associated with outsized market share and business models built on pervasive surveillance. We also call attention to the potential costs of adopting an information-fiduciary framework—a framework that, we fear, invites an enervating complacency toward online platforms’ structural power and a premature abandonment of more robust visions of public regulation….(More)”.

The Technology Trap: Capital, Labor, and Power in the Age of Automation


Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Europe and China. Today’s despairing middle class has not resorted to physical force, but their frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes to technology will persist.

The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that in the midst of another technological revolution, the lessons of the past can help us to more effectively face the present….(More)”.

How AI Can Cure the Big Idea Famine


Saahil Jayraj Dama at JoDS: “Today too many people are still deprived of basic amenities such as medicine, while current patent laws continue to convolute and impede innovation. But if allowed, AI can provide an opportunity to redefine this paradigm and be the catalyst for change—if….

Which brings us to the most befitting answer: No one owns the intellectual property rights to AI-generated creations, and these creations fall into the public domain. This may seem unpalatable at first, especially since intellectual property laws have played such a fundamental role in our society so far. We have been conditioned to a point where it seems almost unimaginable that some creations should directly enter the public domain upon their birth.

But, doctrinally, this is the only proposition that stays consistent with extant intellectual property laws. Works created by AI have no rightful owner because the application of mind to generate the creation, along with the actual generation of the creation, would be done entirely by the AI system. Human involvement is ancillary and is limited to creating an environment within which such a creation can take form.

This can be better understood through a hypothetical example: If an AI system were to invent a groundbreaking pharmaceutical ingredient which completely treats balding, then the system would likely begin by understanding the problem and state of prior art. It would undertake research on causes of balding, existing cures, problems with existing cures, and whether its proposed cure would have any harmful side effects. It would also possibly combine research and knowledge across various domains, which could range from Ayurveda to modern-day biochemistry, before developing its invention.

The developer can lay no more claim to this invention than the team behind AlphaGo can to beating Lee Sedol at Go. The user is even further detached from the exercise of ingenuity: She would be the person who first thought, “We should build a Go-playing AI system,” and directed the AI system to learn Go by watching certain videos and playing against itself. Despite the intervention of all these entities, the fact remains that the victory belongs only to AlphaGo itself.

Doctrinal issues aside, this solution ties in with what people need from intellectual property laws: more openness and accessibility. The demands for improved access to medicines and knowledge, fights against cultural monopolies, and brazen violations of unjust intellectual property laws are all symptomatic of the growing public discontent with strong intellectual property laws. Through AI, we can design legal systems which address these concerns and reform the heavy-handed approach that has been adopted toward intellectual property rights so far.

Tying the Threads Together

For the above to materialize, governments and legislators need to accept that our present intellectual property system is broken and inconsistent with what people want. Too many people are being deprived of basic amenities such as medicines, patent trolls and patent thickets are slowing innovation, educational material is still outside the reach of most people, and culture is not spreading as widely as it should. AI can provide an opportunity for us to redefine this paradigm—it can lead to a society that draws and benefits from an enriched public domain.

However, this approach does invite cynicism because it contemplates an almost complete overhaul of the system. One could argue that if open access for AI-generated creations became the norm, innovation and creativity would suffer because people would no longer have the incentive to create. People may even refuse to use their AI systems, and instead stick to producing inventions and creative works by themselves. This would be detrimental to scientific and cultural progress and would also slow the adoption of AI systems in society.

Yet, judging by the pace at which these systems have progressed so far and what they can currently do, it is easy to imagine a reality where humans developing inventions and producing creative works almost becomes an afterthought. If a machine can access all the world’s publicly available knowledge and information to develop an invention, or study a user’s likes and dislikes while producing a new musical composition, it is easy to see how humans would, eventually, be pushed out of the loop. AI-generated creations are, thus, inevitable.

The incentive theory will have to be reimagined, too. Constant innovation coupled with market forces will change the system from “incentive-to-create” to “incentive-to-create-well.” While every book, movie, song, and invention is treated on par under the law, only the best inventions and creative works will thrive under the new model. If a particular developer’s AI system can write incredible dialogue for a comedy film or invent the most efficient car engines, the market will want more of these AI systems. Thus, incentive will not be eliminated; it will simply take a different form.

It is true that writing about such grand schemes is significantly tougher than practically implementing them. But, for any idea to succeed, it must start with a discussion such as this one. Admittedly, we are still a moonshot away from any country granting formal recognition to open access as the basis of its intellectual property laws. And even if a country were to do this, it faces a plethora of hoops to jump through, such as conducting feasibility-testing and dealing with international and internal pressure. Despite these issues, facilitating better access through AI systems remains an objective worth achieving for any society that takes pride in being democratic and equal….(More)”.

Civic Tech for Civic Engagement


Blog Post by Jason Farra: “When it came to gathering input for their new Environmental Master Plan, the Town of Okotoks, AB decided to try something different. Rather than using more traditional methods of consulting residents, they turned to a Canadian civic tech company called Ethelo.

Ethelo’s online software “enables groups to evaluate scenarios, apply constraints, prioritize options and come up with decisions that will get broad support from the group,” says John Richardson, the company’s CEO and founder.
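That description suggests a simple underlying pattern: collect participant ratings, enforce constraints, and rank options by breadth of support. The sketch below illustrates that pattern only; the options, ratings, budget, and scoring rule are all invented for this example, and Ethelo’s actual algorithm is not described in the post.

```python
# Hypothetical illustration of constrained, consensus-seeking aggregation.
# None of these names or numbers come from Ethelo; they are assumptions.
from statistics import mean, pstdev

# Each option: participant ratings on a 0-5 scale, plus an estimated cost.
options = {
    "expand transit service": {"ratings": [5, 4, 4, 3, 5], "cost": 300_000},
    "solar on civic buildings": {"ratings": [4, 4, 5, 4, 4], "cost": 450_000},
    "new bike lanes": {"ratings": [5, 1, 5, 2, 5], "cost": 150_000},
}
BUDGET = 500_000  # an example constraint

def support_score(ratings):
    # Reward high average support and penalize polarization, so a broadly
    # supported option beats a divisive one with the same mean rating.
    return mean(ratings) - pstdev(ratings)

# Apply the budget constraint, then rank the remaining options.
feasible = {name: d for name, d in options.items() if d["cost"] <= BUDGET}
for name in sorted(feasible, key=lambda n: support_score(feasible[n]["ratings"]),
                   reverse=True):
    print(f"{name}: {support_score(feasible[name]['ratings']):.2f}")
```

Note how the polarized “bike lanes” option ranks last despite having enthusiastic supporters, which is one plausible way software can surface “decisions that will get broad support from the group.”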

Okotoks gathered over 350 responses, with residents able to compare and evaluate different solutions for a variety of environmental issues, including what kind of transportation and renewable energy options they wanted to see in their town.

One of the options presented to Okotoks residents in the online engagement site for the town’s Environmental Master Plan.

“Ethelo offered a different opportunity in terms of allowing a conversation to happen online,” Marni Hutchison, Communications Specialist with the Town of Okotoks, said in a case study of the project. “We can see the general consensus as it’s forming and participants have more opportunities to see different perspectives.”

John sees this as part of a broader shift in how governments and other organizations are approaching stakeholder engagement, particularly with groups like IAP2 working to improve engagement practices by training practitioners.

Rather than simply consulting, then informing residents about decisions, civic tech startups like Ethelo allow governments to involve residents more actively in the actual decision-making process….(More)”.

What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

Conceived in 2017, shortly after Finland’s A.I. strategy was announced, the government wants to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and has launched a national program to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence can have many positive applications: it can be trained to identify cancerous cells in biopsy screenings, predict weather patterns that help farmers increase their crop yields, and improve traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all of society will be impacted by A.I., and its creators therefore have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

How Effective Is Nudging? A Quantitative Review on the Effect Sizes and Limits of Empirical Nudging Studies


Paper by Dennis Hummel and Alexander Maedche: “Changes in the choice architecture, so-called nudges, have been employed in a variety of contexts to alter people’s behavior. Although nudging has gained widespread popularity, the effect sizes of its influences vary considerably across studies. In addition, nudges have proven to be ineffective or even backfire in selected studies, which raises the question whether, and under which conditions, nudges are effective. Therefore, we conduct a quantitative review on nudging with 100 primary publications including 317 effect sizes from different research areas. We derive four key results: (1) a morphological box on nudging based on eight dimensions, (2) an assessment of the effectiveness of different nudging interventions, (3) a categorization of the relative importance of the application context and the nudge category, and (4) a comparison of nudging and digital nudging. Thereby, we shed light on the (in)effectiveness of nudging and we show how the findings of the past can be used for future research. Practitioners, especially government officials, can use the results to review and adjust their policy making….(More)”.
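For readers unfamiliar with how such reviews pool results, here is a minimal sketch of fixed-effect, inverse-variance weighting, a standard meta-analytic technique. The effect sizes and variances below are invented, and this is an illustration of the general method, not necessarily the authors’ exact procedure.

```python
# Illustrative fixed-effect meta-analysis: pool per-study effect sizes
# using inverse-variance weights. All numbers are made up for the example.

# (effect size d, variance of d) for a handful of hypothetical nudging studies
studies = [(0.43, 0.02), (0.10, 0.05), (-0.05, 0.04), (0.61, 0.03)]

weights = [1.0 / var for _, var in studies]        # precision weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5                   # standard error of pooled d

print(f"pooled d = {pooled:.3f}, "
      f"95% CI [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```

Precise studies (small variance) pull the pooled estimate more strongly, which is why a review of 317 effect sizes can yield a sharper picture than any single nudging experiment.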

Our data, our society, our health: a vision for inclusive and transparent health data science in the UK and Beyond


Paper by Elizabeth Ford et al in Learning Health Systems: “The last six years have seen sustained investment in health data science in the UK and beyond, which should result in a data science community that is inclusive of all stakeholders, working together to use data to benefit society through the improvement of public health and wellbeing.

However, opportunities made possible through the innovative use of data are still not being fully realised, resulting in research inefficiencies and avoidable health harms. In this paper we identify the most important barriers to achieving higher productivity in health data science. We then draw on previous research, domain expertise, and theory, to outline how to go about overcoming these barriers, applying our core values of inclusivity and transparency.

We believe a step-change can be achieved through meaningful stakeholder involvement at every stage of research planning, design and execution; team-based data science; as well as harnessing novel and secure data technologies. Applying these values to health data science will safeguard a social license for health data research, and ensure transparent and secure data usage for public benefit….(More)”.

Transparency, Fairness, Data Protection, Neutrality: Data Management Challenges in the Face of New Regulation


Paper by Serge Abiteboul and Julia Stoyanovich: “The data revolution continues to transform every sector of science, industry and government. Due to the incredible impact of data-driven technology on society, we are becoming increasingly aware of the imperative to use data and algorithms responsibly — in accordance with laws and ethical norms. In this article we discuss three recent regulatory frameworks: the European Union’s General Data Protection Regulation (GDPR), the New York City Automated Decisions Systems (ADS) Law, and the Net Neutrality principle, which aim to protect the rights of individuals who are impacted by data collection and analysis. These frameworks are prominent examples of a global trend: Governments are starting to recognize the need to regulate data-driven algorithmic technology.

Our goal in this paper is to bring these regulatory frameworks to the attention of the data management community, and to underscore the technical challenges they raise, which we, as a community, are well-equipped to address. The main take-away of this article is that legal and ethical norms cannot be incorporated into data-driven systems as an afterthought. Rather, we must think in terms of responsibility by design, viewing it as a systems requirement….(More)”
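As a toy illustration of what “responsibility by design” can mean at the systems level, the sketch below bakes a GDPR-style purpose-limitation check into the data-access layer itself rather than bolting it on afterwards. The record, fields, and purposes are invented for the example; the paper itself does not prescribe this design.

```python
# Toy "responsibility by design" sketch: the access layer enforces purpose
# limitation at query time. All data and purpose names are hypothetical.
RECORDS = {"user42": {"email": "a@example.com", "location": "Paris"}}

# Purposes each data subject has consented to, per field.
CONSENT = {"user42": {"email": {"service"}, "location": set()}}

class PurposeError(PermissionError):
    """Raised when data is requested for a purpose the subject did not consent to."""

def read_field(user_id: str, field: str, purpose: str) -> str:
    # Callers cannot reach the raw record without stating a purpose and
    # passing the consent check, so the norm is a systems requirement,
    # not an afterthought.
    if purpose not in CONSENT.get(user_id, {}).get(field, set()):
        raise PurposeError(f"'{field}' may not be used for purpose '{purpose}'")
    return RECORDS[user_id][field]

print(read_field("user42", "email", "service"))   # allowed
# read_field("user42", "location", "marketing")   # would raise PurposeError
```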