Is Ethical A.I. Even Possible?


Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation….

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Brad Smith, Microsoft’s president, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….

In their open letter, the Clarifai employees said they were unsure whether regulation was the answer to the many ethical questions swirling around A.I. technology, arguing that the immediate responsibility rested with the company itself….(More)”.

EU Data Protection Rules and U.S. Implications


In Focus by the Congressional Research Service: “U.S. and European citizens are increasingly concerned about ensuring the protection of personal data, especially online. A string of high-profile data breaches at companies such as Facebook and Google have contributed to heightened public awareness. The European Union’s (EU) new General Data Protection Regulation (GDPR)—which took effect on May 25, 2018—has drawn the attention of U.S. businesses and other stakeholders, prompting debate on U.S. data privacy and protection policies.

Both the United States and the 28-member EU assert that they are committed to upholding individual privacy rights and ensuring the protection of personal data, including electronic data. However, data privacy and protection issues have long been sticking points in U.S.-EU economic and security relations, in part because of differences in U.S. and EU legal regimes and approaches to data privacy.

The GDPR highlights some of those differences and poses challenges for U.S. companies doing business in the EU. The United States does not broadly restrict cross-border data flows and has traditionally regulated privacy at a sectoral level to cover certain types of data. The EU considers the privacy of communications and the protection of personal data to be fundamental rights, which are codified in EU law. Europe’s history with fascist and totalitarian regimes informs the EU’s views on data protection and contributes to the demand for strict data privacy controls. The EU regards current U.S. data protection safeguards as inadequate; this has complicated the conclusion of U.S.-EU information-sharing agreements and raised concerns about U.S.-EU data flows….(More)”.

The tools of citizen science: An evaluation of map-based crowdsourcing platforms


Paper by Zachary Lamoureux and Victoria Fast: “There seems to be a persistent yet inaccurate sentiment that collecting vast amounts of data via citizen science is virtually free, especially compared to the cost of privatized scientific endeavors (Bonney et al., 2009; Cooper, Hochachka & Dhondt, 2011). However, performing scientific procedures with the assistance of the public is often far more complex than traditional scientific enquiry (Bonter & Cooper, 2012).

Citizen science promotes the participation of the public in scientific endeavors (Hecker et al., 2018). While citizen science is not synonymous with volunteered geographic information (VGI)— broadly defined as the creation of geographic information by citizens (Goodchild, 2007)—it often produces geographic information. Similar to VGI, citizen science projects tend to follow specific protocols to ensure the crowdsourced geographic data serves as an input for (scientific) research (Haklay, 2013). Also similar to VGI, citizen science projects often require software applications and specialized training to facilitate citizen data collection. Notably, citizen science projects are increasingly requiring a web-based participatory mapping platform—i.e., Geoweb (Leszczynski & Wilson, 2013)—to coordinate the proliferation of citizen contributions. ...

In this research, we investigate publicly available commercial and open-source map-based tools that enable citizen science projects. Building on a comprehensive comparative framework, we conduct a systematic evaluation and overview of five map-based crowdsourcing platforms: Ushahidi, Maptionnaire, Survey123 (ArcGIS Online), Open Data Kit, and GIS Cloud. These tools have additional uses that extend beyond the field of citizen science; however, the scope of the investigation was narrowed to focus on aspects most suitable for citizen science endeavors, such as the collection, management, visualization and dissemination of crowdsourced data. It is our intention to provide information on how these publicly available crowdsourcing platforms suit generic geographic citizen science crowdsourcing needs….(More)”.

Invisible Women: Exposing Data Bias in a World Designed for Men


Book by Caroline Criado Perez: “Imagine a world where your phone is too big for your hand, where your doctor prescribes a drug that is wrong for your body, where in a car accident you are 47% more likely to be seriously injured, where every week the countless hours of work you do are not recognised or valued. If any of this sounds familiar, chances are that you’re a woman.

Invisible Women shows us how, in a world largely built for and by men, we are systematically ignoring half the population. It exposes the gender data gap – a gap in our knowledge that is at the root of perpetual, systemic discrimination against women, and that has created a pervasive but invisible bias with a profound effect on women’s lives.

Award-winning campaigner and writer Caroline Criado Perez brings together for the first time an impressive range of case studies, stories and new research from across the world that illustrate the hidden ways in which women are forgotten, and the impact this has on their health and well-being. From government policy and medical research, to technology, workplaces, urban planning and the media, Invisible Women reveals the biased data that excludes women. In making the case for change, this powerful and provocative book will make you see the world anew….(More)”

Data Trusts: Ethics, Architecture and Governance for Trustworthy Data Stewardship


Web Science Institute Paper by Kieron O’Hara: “In their report on the development of the UK AI industry, Wendy Hall and Jérôme Pesenti recommend the establishment of data trusts, “proven and trusted frameworks and agreements” that will “ensure exchanges [of data] are secure and mutually beneficial” by promoting trust in the use of data for AI. Hall and Pesenti leave the structure of data trusts open, and the purpose of this paper is to explore the questions of (a) what existing structures data trusts can exploit, and (b) what relationship data trusts have to trusts as they are understood in law.

The paper defends the following thesis: A data trust works within the law to provide ethical, architectural and governance support for trustworthy data processing.

Data trusts are therefore both constraining and liberating. They constrain: they respect current law, so they cannot render currently illegal actions legal. They are intended to increase trust, and so they will typically act as further constraints on data processors, adding the constraints of trustworthiness to those of law. Yet they also liberate: if data processors are perceived as trustworthy, they will get improved access to data.

Most work on data trusts has up to now focused on gaining and supporting the trust of data subjects in data processing. However, all actors involved in AI – data consumers, data providers and data subjects – have trust issues which data trusts need to address.

Furthermore, it is not only personal data that creates trust issues; the same may be true of any dataset whose release might involve an organisation risking competitive advantage. The paper addresses four areas….(More)”.

Harnessing the Power of Open Data for Children and Families


Article by Kathryn L.S. Pettit and Rob Pitingolo: “Child advocacy organizations, such as members of the KIDS COUNT network, have proven the value of using data to advocate for policies and programs to improve the lives of children and families. These organizations use data to educate policymakers and the public about how children are faring in their communities. They understand the importance of high-quality information for policy and decisionmaking. And in the past decade, many state governments have embraced the open data movement. Their data portals promote government transparency and increase data access for a wide range of users inside and outside government.

At the request of the Annie E. Casey Foundation, which funds the KIDS COUNT network, the authors conducted research to explore how these state data efforts could bring greater benefits to local communities. Interviews with child advocates and open data providers confirmed the opportunity for child advocacy organizations and state governments to leverage open data to improve the lives of children and families. But accomplishing this goal will require new practices on both sides.

This brief first describes the current state of practice for child advocates using data and for state governments publishing open data. It then provides suggestions for what it would take from both sides to increase the use of open data to improve the lives of children and families. Child and family advocates will find five action steps in section 2. These steps encourage them to assess their data needs, build relationships with state data managers, and advocate for new data and preservation of existing data.

State agency staff will find five action steps in section 3. These steps describe how staff can engage diverse stakeholders, including agency staff beyond typical “data people” and data users outside government. Although this brief focuses on state-level institutions, local advocates and governments will find these lessons relevant. In fact, many of the lessons and best practices are based on pioneering efforts at the local level….(More)”.

A Review of Citizen Science and Crowdsourcing in Applications of Pluvial Flooding


Jonathan D. Paul in Frontiers in Earth Science: “Pluvial flooding can have devastating effects, both in terms of loss of life and damage. Predicting pluvial floods is difficult and many cities do not have a hydrodynamic model or an early warning system in place. Citizen science and crowdsourcing have the potential for contributing to early warning systems and can also provide data for validating flood forecasting models. Although there are increasing applications of citizen science and crowdsourcing in fluvial hydrology, less is known about activities related to pluvial flooding. Hence the aim of this paper is to review current activities in citizen science and crowdsourcing with respect to applications of pluvial flooding.

Based on a search in Scopus, the papers were first filtered for relevant content and then classified into four main themes. The first two themes were divided into (i) applications relevant during a flood event, which includes automated street flooding detection using crowdsourced photographs and sensors, analysis of social media, and online and mobile applications for flood reporting; and (ii) applications related to post-flood events. The use of citizen science and crowdsourcing for model development and validation is the third theme while the development of integrated systems is theme four. All four main areas of research have the potential to contribute to early warning systems and build community resilience. Moreover, developments in one will benefit others, e.g., further developments in flood reporting applications and automated flood detection systems will yield data useful for model validation….(More)”.

Big Data and Dahl’s Challenge of Democratic Governance


Alex Ingrams in the Review of Policy Research: “Big data applications have been acclaimed as potentially transformative for the public sector. But, despite this acclaim, most theory of big data is narrowly focused around technocratic goals. The conceptual frameworks that situate big data within democratic governance systems recognizing the role of citizens are still missing. This paper explores the democratic governance impacts of big data in three policy areas using Robert Dahl’s dimensions of control and autonomy. Key impacts and potential tensions are highlighted. There is evidence of impacts on both dimensions, but the dimensions conflict as well as align in notable ways and focused policy efforts will be needed to find a balance….(More)”.

Big data needs big governance: best practices from Brain-CODE, the Ontario Brain Institute’s neuroinformatics platform


Shannon C. Lefaivre et al. in Frontiers in Genetics: “The Ontario Brain Institute (OBI) has begun to catalyze scientific discovery in the field of neuroscience through its large-scale informatics platform, known as Brain-CODE. The platform supports the capture, storage, federation, sharing and analysis of different data types across several brain disorders. Underlying the platform is a robust and scalable data governance structure which allows for the flexibility to advance scientific understanding, while protecting the privacy of research participants.

Recognizing the value of an open science approach to enabling discovery, the governance structure was designed not only to support collaborative research programs, but also to support open science by making all data open and accessible in the future. OBI’s rigorous approach to data sharing maintains the accessibility of research data for big discoveries without compromising privacy and security. Taking a Privacy by Design approach to both data sharing and development of the platform has allowed OBI to establish some best practices related to large scale data sharing within Canada. The aim of this report is to highlight these best practices and develop a key open resource which may be referenced during the development of similar open science initiatives….(More)”.

Information audit as an important tool in organizational management: A review of literature


Paper by Ayinde Lateef and Funmilola Olubunmi Omotayo: “This article considers information a strategic asset in the organization, just like land, labour and capital. It elaborates how information assets help organizations meet their objectives and examines the issues that have led to the proliferation of information assets; because of this proliferation of data and information, it becomes difficult for organizations to make effective use of these information assets to meet their objectives. This leads to the management of information assets and the management of information risk, two areas that are critical to the organization. The article concludes that the information audit is an effective tool for managing information assets and information risk, that an information policy should be drawn up, and that information professionals should be among those handling information-related issues….(More)”.