Uninformed Consent


Leslie K. John at Harvard Business Review: “…People are bad at making decisions about their private data. They misunderstand both costs and benefits. Moreover, natural human biases interfere with their judgment. And whether by design or accident, major platform companies and data aggregators have structured their products and services to exploit those biases, often in subtle ways.

Impatience. People tend to overvalue immediate costs and benefits and underweight those that will occur in the future. They want $9 today rather than $10 tomorrow. On the internet, this tendency manifests itself in a willingness to reveal personal information for trivial rewards. Free quizzes and surveys are prime examples. …

The endowment effect. In theory people should be willing to pay the same amount to buy a good as they’d demand when selling it. In reality, people typically value a good less when they have to buy it. A similar dynamic can be seen when people make decisions about privacy….

Illusion of control. People share a misapprehension that they can control chance processes. This explains why, for example, study subjects valued lottery tickets that they had personally selected more than tickets that had been randomly handed to them. People also confuse the superficial trappings of control with real control….

Desire for disclosure. This is not a decision-making bias. Rather, humans have what appears to be an innate desire, or even need, to share with others. After all, that’s how we forge relationships — and we’re inherently social creatures…

False sense of boundaries. In off-line contexts, people naturally understand and comply with social norms about discretion and interpersonal communication. Though we may be tempted to gossip about someone, the norm “don’t talk behind people’s backs” usually checks that urge. Most of us would never tell a trusted confidant our secrets when others are within earshot. And people’s reactions in the moment can make us quickly scale back if we disclose something inappropriate….(More)”.

How AI Addresses Unconscious Bias in the Talent Economy


Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.

Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited and has the potential to create unconscious bias in hiring.

That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest ever AI toolset release, tailor made for nine industries and professions where AI will play a transformational role….(More)”.
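IBM’s announcement doesn’t describe how the Adverse Impact Analysis works internally. A common statistical screen for this kind of bias detection in historical hiring data, however, is the “four-fifths rule”: if any group’s selection rate falls below 80 percent of the highest group’s rate, that is treated as evidence of potential adverse impact. A minimal sketch of that screen, assuming nothing about IBM’s actual method (the function name and record layout here are illustrative):

```python
# Sketch of the four-fifths (80%) rule, a common adverse-impact screen.
# This is NOT IBM's method, which is not described in the announcement.
from collections import defaultdict

def adverse_impact(records, protected_attr):
    """records: list of dicts, each with protected_attr and a boolean 'hired'.
    Returns {group: (selection_rate, flagged)} where flagged is True when the
    group's rate is below 80% of the highest group's rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for r in records:
        g = r[protected_attr]
        counts[g][0] += r["hired"]
        counts[g][1] += 1
    rates = {g: hired / total for g, (hired, total) in counts.items()}
    best = max(rates.values())
    return {g: (rate, rate < 0.8 * best) for g, rate in rates.items()}
```

Run against a historical applicant file, such a screen surfaces which groups were selected at disproportionately low rates; deciding whether that reflects unconscious bias still requires human judgment.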

The role of corporations in addressing AI’s ethical dilemmas


Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.

Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:

  1. Hire ethicists who work with corporate decisionmakers and software developers
  2. Develop a code of AI ethics that lays out how various issues will be handled
  3. Have an AI review board that regularly addresses corporate ethical questions
  4. Develop AI audit trails that show how various coding decisions have been made
  5. Implement AI training programs so staff operationalizes ethical considerations in their daily work, and
  6. Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.

How Insurance Companies Used Bad Science to Discriminate


Jessie Wright-Mendoza at JSTOR: “After the Civil War, the United States searched for ways to redefine itself. But by the 1880s, the hopes of Reconstruction had dimmed. Across the United States there was instead a push to formalize and legalize discrimination against African-Americans. The effort to marginalize the first generation of free black Americans infiltrated nearly every aspect of daily life, including the cost of insurance.

Initially, African-Americans could purchase life insurance policies on equal footing with whites. That all changed in 1881. In March of that year Prudential, one of the country’s largest insurers, announced that policies held by black adults would be worth one-third less than the same plans held by whites. Their weekly premiums would remain the same. Benefits for black children didn’t change, but weekly premiums for their policies would rise by five cents.

Prudential defended the decision by pointing out that the black mortality rate was higher than the white mortality rate. Therefore, they explained, claims paid out for black policyholders were a disproportionate amount of all payouts. Most of the major life insurance companies followed suit, making it nearly impossible for African-Americans to gain coverage. Across the industry, companies blocked agents from soliciting African-American customers and denied commission for any policies issued to blacks.

The public largely accepted the statistical explanation for unequal coverage. The insurer’s job was to calculate risk. Race was merely another variable like occupation or geographic location. As one trade publication put it in 1891: “Life insurance companies are not negro-maniacs, they are business institutions…there is no sentiment and there are no politics in it.”

Companies considered race-based risk the same for all African-Americans, whether they were strong or sickly, educated or uneducated, from the country or the city. The “science” behind the risk formula is credited to Prudential statistician Frederick L. Hoffman, whose efforts to prove the genetic inferiority of the black race were used to justify the company’s discriminatory policies….(More)”.

Data-Driven Government: The Role of Chief Data Officers


Jane Wiseman for IBM Center for The Business of Government: “Governments at all levels have seen dramatic increases in availability and use of data over the past decade.

The push for data-driven government is currently of intense interest at the federal level, where the government is developing an integrated federal data strategy as part of its goal to “leverage data as a strategic asset.” There is also pending legislation to require agencies to designate chief data officers (CDOs).

This report focuses on the expanding use of data at the federal level and how to best manage it. Ms. Wiseman says: “The purpose of this report is to advance the use of data in government by describing the work of pioneering federal CDOs and providing a framework for thinking about how a new analytics leader might establish his or her office and use data to advance the mission of the agency.”

Ms. Wiseman’s report provides rich profiles of five pioneering CDOs in the federal government and how they have defined their new roles. Based on her research and interviews, she offers insights into how the role of agency CDOs is evolving in different agencies and the reasons agency leaders are establishing these roles.  She also offers advice on how new CDOs can be successful at the federal level, based on the experiences of the pioneers as well as the experiences of state and local CDOs….(More)”.

To Secure Knowledge: Social Science Partnerships for the Common Good


Social Science Research Council: “For decades, the social sciences have generated knowledge vital to guiding public policy, informing business, and understanding and improving the human condition. But today, the social sciences face serious threats. From dwindling federal funding to public mistrust in institutions to widespread skepticism about data, the infrastructure supporting the social sciences is shifting in ways that threaten to undercut research and knowledge production.

How can we secure social knowledge for future generations?

This question has guided the Social Science Research Council’s Task Force. Following eighteen months of consultation with key players as well as internal deliberation, we have identified both long-term developments and present threats that have created challenges for the social sciences, but have also opened unique opportunities. And we have generated recommendations to address these issues.

Our core finding focuses on the urgent need for new partnerships and collaborations among several key players: the federal government, academic institutions, donor organizations, and the private sector. Several decades ago, these institutions had clear zones of responsibility in producing social knowledge, with the federal government constituting the largest portion of funding for basic research. Today, private companies represent an increasingly large share not just of research and funding, but also the production of data that informs the social sciences, from smart phone usage to social media patterns.

In addition, today’s social scientists face unprecedented demands for accountability, speedy publication, and generation of novel results. These pressures have emerged from the fragmented institutional foundation that undergirds research. That foundation needs a redesign in order for the social sciences to continue helping our communities address problems ranging from income inequality to education reform.

To build a better future, we identify five areas of action: Funding, Data, Ethics, Research Quality, and Research Training. In each area, our recommendations range from enlarging corporate-academic pilot programs to improving social science training in digital literacy.

A consistent theme is that none of the measures, if taken unilaterally, can generate optimal outcomes. Instead, we have issued a call to forge a new research compact to harness the potential of the social sciences for improving human lives. That compact depends on partnerships, and we urge the key players in the construction of social science knowledge—including universities, government, foundations, and corporations—to act swiftly. With the right realignments, the security of social knowledge lies within our reach….(More)”

Government for the Future: Reflection and Vision for Tomorrow’s Leaders


Book by Mark A. Abramson, Daniel J. Chenok and John M. Kamensky: “In recognition of its 20th anniversary, The IBM Center for the Business of Government offers a retrospective of the most significant changes in government management during that period and looks forward over the next 20 years to offer alternative scenarios as to what government management might look like by the year 2040.

Part I will discuss significant management improvements in the federal government over the past 20 years, based in part on a crowdsourced survey of knowledgeable government officials and public administration experts in the field. It will draw on themes and topics examined in the 350 IBM Center reports published over the past two decades. Part II will outline alternative scenarios of how government might change over the coming 20 years. The scenarios will be developed based on a series of envisioning sessions which are bringing together practitioners and academics to examine the future. The scenarios will be supplemented with short essays on various topics. Part II will also include essays by winners of the Center’s Challenge Grant competition. Challenge Grant winners will be awarded grants to identify futuristic visions of government in 2040….(More)”.

The Use of Regulatory Sandboxes in Europe and Asia


Claus Christensen at Regulation Asia: “Global attention to money-laundering, terrorism financing and financial criminal practices has grown exponentially in recent years. As criminals constantly come up with new tactics, global regulations in the financial world are evolving all the time to try and keep up. At the same time, end users’ expectations are putting companies at commercial risk if they are not prepared to deliver outstanding and digital-first customer experiences through innovative solutions.

Among the many initiatives introduced by global regulators to address these two seemingly contradictory needs, regulatory sandboxes – closed environments that allow live testing of innovations by tech companies under the regulator’s supervision – are by far one of the most popular. As the CEO of a fast-growing regtech company working across both Asia and Europe, I have identified a few differences in how the regulators across different jurisdictions are engaging with the industry in general, and regulatory sandboxes in particular.

Since the launch of ‘Project Innovate’ in 2014, the UK’s FCA (Financial Conduct Authority) has won recognition for the success of its sandbox, where fintech companies can test innovative products, services and business models in a live market environment, while ensuring that appropriate safeguards are in place through temporary authorisation. The FCA advises companies, whether fintech startups or established banks, on which existing regulations might apply to their cutting-edge products.

So far, the sandbox has helped more than 500 companies, with 40+ firms receiving regulatory authorisation. Project Innovate has bolstered the FCA’s reputation for supporting initiatives which boost competition within financial services, which was part of the regulator’s post-financial crisis agenda. The success of the initiative in fostering a fertile fintech environment is reflected by the growing number of UK-based challenger banks that are expanding their client bases across Europe. Following its success, the sandbox approach has gone global, with regulators around the world adopting a similar strategy for fintech innovation.

Across Europe, regulators are directly working with financial services providers and taking proactive measures to not only encourage the use of innovative technology in improving their systems, but also to boost adoption by others within the ecosystem…(More)”.

Technology Run Amok: Crisis Management in the Digital Age


Book by Ian I. Mitroff: “The recent data controversy with Facebook highlights that the tech industry as a whole was utterly unprepared for the backlash it faced as a result of its business model of selling user data to third parties. Despite the predominant role that technology plays in all of our lives, the controversy also revealed that many tech companies are reactive, rather than proactive, in addressing crises.

This book examines society’s failure to manage technology and its resulting negative consequences. Mitroff argues that the “technological mindset” is responsible for society’s unbridled obsession with technology and unless confronted, will cause one tech crisis after another. This trans-disciplinary text, edgy in its approach, will appeal to academics, students, and practitioners through its discussion of the modern technological crisis…(More)”.

How Smart Should a City Be? Toronto Is Finding Out


Laura Bliss at CityLab: “A data-driven “neighborhood of the future” masterminded by a Google corporate sibling, the Quayside project could be a milestone in digital-age city-building. But after a year of scandal in Silicon Valley, questions about privacy and security remain…

Quayside was billed as “the world’s first neighborhood built from the internet up,” according to Sidewalk Labs’ vision plan, which won the RFP to develop this waterfront parcel. The startup’s pitch married “digital infrastructure” with a utopian promise: to make life easier, cheaper, and happier for Torontonians.

Everything from pedestrian traffic and energy use to the fill-height of a public trash bin and the occupancy of an apartment building could be counted, geo-tagged, and put to use by a wifi-connected “digital layer” undergirding the neighborhood’s physical elements. It would sense movement, gather data, and send information back to a centralized map of the neighborhood. “With heightened ability to measure the neighborhood comes better ways to manage it,” stated the winning document. “Sidewalk expects Quayside to become the most measurable community in the world.”

That somewhat Orwellian vision of city management had privacy advocates and academics concerned from the start. Bianca Wylie, the co-founder of the technology advocacy group Tech Reset Canada, has been perhaps the most outspoken of the project’s local critics. For the last year, she’s spoken up at public fora, written pointed op-eds and Medium posts, and warned city officials of what she sees as the “Trojan horse” of smart city marketing: private companies that stride into town promising better urban governance, but are really there to sell software and monetize citizen data.

“Smart cities are largely an invention of the private sector—an effort to create a market within government,” Wylie wrote in Canada’s Globe and Mail newspaper in December 2017. “The business opportunities are clear. The risks inherent to residents, less so.” A month later, at a Toronto City Council meeting, Wylie gave a deputation asking officials to “ensure that the data and data infrastructure of this project are the property of the city of Toronto and its residents.”

In this case, the unwary Trojans would be Waterfront Toronto, the nonprofit corporation appointed by three levels of Canadian government to own, manage, and build on the Port Lands, 800 largely undeveloped acres between downtown and Lake Ontario. When Waterfront Toronto gave Sidewalk Labs a green light for Quayside in October, the startup committed $50 million to a one-year consultation, which was recently extended by several months. The plan is to submit a final “Master Innovation and Development Plan” by the end of this year.

But there has been no guarantee about who would own the data at the core of its proposal—much of which would ostensibly be gathered in public space. Also unresolved is the question of whether this data could be sold. With little transparency about what that means from the company or its partner, some Torontonians are wondering what Waterfront Toronto—and by extension, the public—is giving away….(More)”.