The law and ethics of big data analytics: A new role for international human rights in the search for global standards


David Nersessian at Business Horizons: “The Economist recently declared that digital information has overtaken oil as the world’s most valuable commodity. Big data technology is inherently global and borderless, yet little international consensus exists over what standards should govern its use. One source of global standards benefitting from considerable international consensus might be used to fill the gap: international human rights law.

This article considers the extent to which international human rights law operates as a legal or ethical constraint on global commercial use of big data technologies. By providing clear baseline standards that apply worldwide, human rights can help shape cultural norms—implemented as ethical practices and global policies and procedures—about what businesses should do with their information technologies. In this way, human rights could play a broad and important role in shaping business thinking about the proper handling of this increasingly valuable commodity in the modern global society…(More)”.

The latest tools for sexual assault victims: Smartphone apps and software


Peter Holley at the Washington Post: “…For much of the past decade, dozens of apps and websites have been created to help survivors of sexual assault electronically record and report such crimes. They are designed to assist an enormous pool of potential victims. The Rape, Abuse & Incest National Network reports that more than 11 percent of all college students — both graduate and undergraduate — experience rape or sexual assault through physical force, violence or incapacitation. Despite the prevalence of such incidents, less than 10 percent of victims on college campuses report their assaults, according to the National Sexual Violence Resource Center.

The apps range from electronic reporting tools such as JDoe to legal guides that provide victims with access to law enforcement and crisis counseling. Others help victims save and share relevant medical information in case of an assault. The app Uask includes a “panic button” that connects users with 911 or allows them to send emergency messages to people with their location.

Since its debut in 2015, Callisto’s software has been adopted by 12 college campuses — including Stanford, the University of Oregon and St. John’s University — and made available to more than 160,000 students, according to the company. Sexual assault survivors who use Callisto are six times as likely to report as those who do not, and 15 percent of those survivors have matched with another victim of the same assailant, the company claims.
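
The matching feature described above can be pictured as an information escrow: each report is held privately, and reporters are connected only when two of them independently name the same assailant. Below is a minimal sketch of that idea in Python. It is illustrative only, not Callisto’s actual design, which uses stronger cryptographic protections; the class, the identifier format, and the notification logic are all assumptions.

```python
# Minimal sketch of an information-escrow matching scheme (illustrative only;
# not Callisto's actual implementation). Reports are held privately and
# surfaced only when two independent reports name the same assailant.
import hashlib
from collections import defaultdict

class MatchingEscrow:
    def __init__(self):
        # hashed assailant identifier -> reporters who named that identifier
        self._reports = defaultdict(list)

    @staticmethod
    def _hash_identifier(identifier: str) -> str:
        # A real system would use salted, keyed hashing and encrypted storage;
        # bare SHA-256 just keeps this sketch self-contained.
        return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

    def submit(self, reporter: str, assailant_identifier: str) -> list[str]:
        """File a report; return any earlier reporters naming the same person."""
        key = self._hash_identifier(assailant_identifier)
        prior = list(self._reports[key])
        self._reports[key].append(reporter)
        return prior

escrow = MatchingEscrow()
escrow.submit("survivor_a", "facebook.com/example-profile")
print(escrow.submit("survivor_b", "facebook.com/example-profile"))  # ['survivor_a']
```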

Peter Cappelli, a professor of management at the Wharton School and director of Wharton’s Center for Human Resources, told NPR that he sees potential problems with survivors “crowdsourcing” their decision to report assaults.

“I don’t think we want to have a standard where the decisions are crowdsourced,” he said. “I think what you want is to tell people [that] the criteria [for whether or not to report] are policy related, not personally related, and you should bring forward anything that fits the criteria, not [based on] whether you feel enough other people have made the complaint or not. We want to sometimes encourage people to do things they might feel uncomfortable about.”…(More)”.

Creative Placemaking and Community Safety: Synthesizing Cross-Cutting Themes


Mark Treskon, Sino Esthappan, Cameron Okeke, and Carla Vasquez-Noriega at the Urban Institute: “This report synthesizes findings from four cases where stakeholders are using creative placemaking to improve community safety. It presents cross-cutting themes from these case studies to show how creative placemaking techniques can be used from the conception and design stage through construction and programming, and how they can build community safety by promoting empathy and understanding, influencing law and policy, providing career opportunities, supporting well-being, and advancing the quality of place. It also discusses implementation challenges and presents evaluative techniques of particular relevance for stakeholders working to understand the effects of these programs….(More)”.

Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet


We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for users about the full extent of data usage, coupled with meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

No single-solution approach to the problem of digital disinformation is likely to change outcomes on its own. … Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy….(More)”.

The role of corporations in addressing AI’s ethical dilemmas


Darrell M. West at Brookings: “In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement, government surveillance, issues of racial bias, and social credit systems. I discuss how technology companies are handling these issues and the importance of having principles and processes for addressing these concerns. I close by noting ways to strengthen ethics in AI-related corporate decisions.

Briefly, I argue it is important for firms to undertake several steps in order to ensure that AI ethics are taken seriously:

  1. Hire ethicists who work with corporate decisionmakers and software developers
  2. Develop a code of AI ethics that lays out how various issues will be handled
  3. Have an AI review board that regularly addresses corporate ethical questions
  4. Develop AI audit trails that show how various coding decisions have been made
  5. Implement AI training programs so that staff operationalize ethical considerations in their daily work, and
  6. Provide a means for remediation when AI solutions inflict harm or damages on people or organizations….(More)”.
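
Of West’s six steps, the audit trail (step 4) is the most directly implementable in code. The sketch below shows one way such a trail might record automated decisions; the schema, field names, and JSON Lines storage are assumptions for illustration, not a published standard.

```python
# Hypothetical sketch of an AI audit-trail record (step 4 above): each
# automated decision is logged with enough context to reconstruct later
# how and why it was made.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str     # which model produced the decision
    model_version: str  # exact version, so the result can be reproduced
    inputs: dict        # the features the model actually saw
    output: str         # the decision rendered
    rationale: str      # human-readable reason codes, where available
    timestamp: str      # when the decision was made (UTC)

def log_decision(record: DecisionRecord, path: str = "audit_trail.jsonl") -> None:
    # Append-only JSON Lines file; a production system would add
    # tamper-evident storage and access controls.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_name="loan_screening",
    model_version="2.3.1",
    inputs={"income": 52000, "debt_ratio": 0.31},
    output="refer_to_human_review",
    rationale="debt_ratio above auto-approve threshold",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```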

The Cost-Benefit Revolution


Book by Cass Sunstein: “Why policies should be based on careful consideration of their costs and benefits rather than on intuition, popular opinion, interest groups, and anecdotes.

Opinions on government policies vary widely. Some people feel passionately about the child obesity epidemic and support government regulation of sugary drinks. Others argue that people should be able to eat and drink whatever they like. Some people are alarmed about climate change and favor aggressive government intervention. Others don’t feel the need for any sort of climate regulation. In The Cost-Benefit Revolution, Cass Sunstein argues that our major disagreements really involve facts, not values. It follows that government policy should not be based on public opinion, intuitions, or pressure from interest groups, but on numbers—meaning careful consideration of costs and benefits. Will a policy save one life, or one thousand lives? Will it impose costs on consumers, and if so, will the costs be high or negligible? Will it hurt workers and small businesses, and, if so, precisely how much?

As the Obama administration’s “regulatory czar,” Sunstein knows his subject in both theory and practice. Drawing on behavioral economics and his well-known emphasis on “nudging,” he celebrates the cost-benefit revolution in policy making, tracing its defining moments in the Reagan, Clinton, and Obama administrations (and pondering its uncertain future in the Trump administration). He acknowledges that public officials often lack information about costs and benefits, and outlines state-of-the-art techniques for acquiring that information. Policies should make people’s lives better. Quantitative cost-benefit analysis, Sunstein argues, is the best available method for making this happen—even if, in the future, new measures of human well-being, also explored in this book, may be better still…(More)”.

Constitutional Democracy and Technology in the age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen cumulatively and which, taken together, threaten both democracy and functioning markets. It then recalls the experience of the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR. On that basis it turns to the key question for AI in democracy: which of the challenges of AI can safely, and in good conscience, be left to ethics, and which need to be addressed by enforceable rules that carry the legitimacy of the democratic process, that is, by laws.

The paper closes with a call for a new culture of incorporating the principles of Democracy, the Rule of Law, and Human Rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose…(More)”.

Don’t Believe the Algorithm


Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.
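
The arithmetic behind that failure is worth making explicit: one correct match out of 96 flags is a precision of roughly 1 percent, and the base-rate effect means even a far more accurate classifier would still flag mostly innocent people in a crowd this size. The sketch below uses the article’s figures for the flags and the crowd; the 99 percent per-face accuracy is an assumed number, chosen only to show the effect.

```python
# Back-of-the-envelope check of the facial-recognition figures quoted above.
flagged = 96       # people flagged by the algorithm at the carnival
true_matches = 1   # correct matches among those flagged

print(f"precision: {true_matches / flagged:.1%}")  # ~1.0%: 95 of 96 flags were wrong

# Base-rate illustration (accuracy is an assumed number, for intuition only):
# with a 500-person watch list in a crowd of up to two million, even a 99%
# accurate classifier produces overwhelmingly false flags.
crowd = 2_000_000
watch_list = 500
accuracy = 0.99

false_positives = (crowd - watch_list) * (1 - accuracy)   # ~19,995 wrong flags
true_positives = watch_list * accuracy                    # ~495 right ones
print(f"expected share of correct flags: "
      f"{true_positives / (true_positives + false_positives):.2%}")  # ~2.4%
```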

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: Of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police force is expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program…(More)”.

The UK’s Gender Pay Gap Open Data Law Has Flaws, But Is A Positive Step Forward


Article by Michael McLaughlin: “Last year, the United Kingdom enacted a new regulation requiring companies to report information about their gender pay gap—a measure of the difference in average pay between men and women. The new rules are a good example of how open data can drive social change. However, the regulations have produced some misleading statistics, highlighting the importance of carefully crafting reporting requirements to ensure that they produce useful data.

In the UK, nearly 11,000 companies have filed gender pay gap reports, which include the difference between the mean and median hourly pay rates for men and women, as well as the difference in bonuses. And the initial data reveals several interesting findings. Median pay for men is 11.8 percent higher than for women, on average, and nearly 87 percent of companies pay men more than women on average. In addition, over 1,000 firms had a median pay gap greater than 30 percent. The sectors with the highest pay gaps—construction, finance, and insurance—each pay men at least 20 percent more than women. A major reason for the gap is a lack of women in senior positions—UK women actually make more than men between the ages of 22 and 29. The total pay gap is also a result of more women holding part-time jobs.
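
The reported figures follow the UK regulations’ definition: the percentage difference between men’s and women’s mean (and median) hourly pay across an employer’s entire workforce, not within matched roles. A minimal sketch of that calculation, using invented hourly rates, shows why the mean and median gaps can diverge sharply when a few highly paid men sit at the top:

```python
# Minimal sketch of the UK-style gender pay gap calculation. The hourly
# rates are invented for illustration; the gap is computed across the whole
# workforce, not within matched roles.
from statistics import mean, median

men_hourly = [14.0, 18.5, 22.0, 35.0, 60.0]    # hypothetical hourly rates
women_hourly = [13.5, 16.0, 19.0, 24.0, 31.0]

def pay_gap(men, women, stat):
    # Positive result: men are paid more by this measure.
    return (stat(men) - stat(women)) / stat(men) * 100

print(f"mean pay gap:   {pay_gap(men_hourly, women_hourly, mean):.1f}%")    # ~30.8%
print(f"median pay gap: {pay_gap(men_hourly, women_hourly, median):.1f}%")  # ~13.6%
```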

However, as detractors note, the UK’s data can be misleading. For example, the data overstates the pay gap on bonuses because it does not adjust these figures for hours worked. More women work part-time than men, so it makes sense that women would receive less in bonus pay when they work less. The data also understates the pay gap because it excludes the high compensation of partners in organizations such as law firms, a group that includes few women. And it is important to note that—by definition—the pay gap data does not compare the wages of men and women working the same jobs, so the data says nothing about whether women receive equal pay for equal work.
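
The bonus distortion is easy to see with a small worked example: two employees paid an identical bonus rate per hour of work show a large raw gap purely because one works part-time. The hours and rates below are invented for illustration.

```python
# Worked example of the hours-unadjusted bonus gap described above:
# identical bonus pay per hour, yet the raw comparison shows a 40% gap.
full_time_hours, part_time_hours = 37.5, 22.5  # hypothetical weekly hours
bonus_rate_per_hour = 2.0                      # identical for both employees

bonus_full = full_time_hours * bonus_rate_per_hour  # 75.0
bonus_part = part_time_hours * bonus_rate_per_hour  # 45.0

raw_gap = (bonus_full - bonus_part) / bonus_full * 100
adjusted_gap = ((bonus_full / full_time_hours - bonus_part / part_time_hours)
                / (bonus_full / full_time_hours) * 100)

print(f"raw bonus gap:            {raw_gap:.0f}%")       # 40% - looks alarming
print(f"hours-adjusted bonus gap: {adjusted_gap:.0f}%")  # 0% - rates are equal
```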

Still, publication of the data has sparked an important national conversation. Google searches in the UK for the phrase “gender pay gap” hit a 12-month high during the week the regulations began enforcement, and major news sites like the Financial Times have provided significant coverage of the issue by analyzing the reported data. While it is too soon to tell whether the law will change employer behavior, such as businesses hiring more female executives, or employee behavior, such as women leaving companies or fields that pay less, countries with similar reporting requirements, such as Belgium, have seen the pay gap narrow following implementation of their rules.

Requiring companies to report this data to the government may be the only way to obtain gender pay gap data, because evidence suggests that the private sector will not produce this data on its own. Only 300 UK organizations joined a voluntary government program to report their gender pay gap in 2011, and as few as 11 actually published the data. Crowdsourced efforts, where women voluntarily report their pay, have also suffered from incomplete data. And even complete data does not illuminate variables such as why women may work in a field that pays less….(More)”.

AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment


Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights: the Human Rights, Ethical and Social Impact Assessment (HRESIA). This self-assessment model aims to overcome the limitations of existing assessment models, which are either too narrowly focused on data processing or so extensive and granular that they become too complicated for evaluating the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture, and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research….(More)”.