Digital human rights are next frontier for fund groups


Siobhan Riding at the Financial Times: “Politicians publicly grilling technology chiefs such as Facebook’s Mark Zuckerberg is all too familiar for investors. “There isn’t a day that goes by where you don’t see one of the tech companies talking to Congress or being highlighted for some kind of controversy,” says Lauren Compere, director of shareholder engagement at Boston Common Asset Management, a $2.4bn fund group that invests heavily in tech stocks.

Fallout from the Cambridge Analytica scandal that engulfed Facebook was a wake-up call for investors such as Boston Common, underlining the damaging social effects of digital technology if left unchecked. “These are the red flags coming up for us again and again,” says Ms Compere.

Digital human rights are fast becoming the latest front in the debate around fund managers’ ethical investment efforts. Fund managers have come under pressure in recent years to divest from companies that can harm human rights — from gun manufacturers or retailers to operators of private prisons. The focus is now switching to the less tangible but equally serious human rights risks lurking in fund managers’ technology holdings. Attention on technology groups began with concerns around data privacy, but emerging focal points are targeted advertising and how companies deal with online extremism.

Following a terrorist attack in New Zealand this year where the shooter posted video footage of the incident online, investors managing assets of more than NZ$90bn (US$57bn) urged Facebook, Twitter and Alphabet, Google’s parent company, to take more action in dealing with violent or extremist content published on their platforms. The Investor Alliance for Human Rights is currently co-ordinating a global engagement effort with Alphabet over the governance of its artificial intelligence technology, data privacy and online extremism.

Investor engagement on the topic of digital human rights is in its infancy. One roadblock for investors has been the difficulty they face in detecting and measuring what the actual risks are. “Most investors do not have a very good understanding of the implications of all of the issues in the digital space and don’t have sufficient research and tools to properly assess them — and that goes for companies too,” says Ms Compere.

One rare resource available is the Ranking Digital Rights Corporate Accountability Index, established in 2015, which rates tech companies based on a range of metrics. The development of such tools gives investors more information on the risk associated with technological advancements, enabling them to hold companies to account when they identify risks and questionable ethics….(More)”.

Citizen science and the United Nations Sustainable Development Goals


Steffen Fritz et al. in Nature Sustainability: “Traditional data sources are not sufficient for measuring the United Nations Sustainable Development Goals. New and non-traditional sources of data are required. Citizen science is an emerging example of a non-traditional data source that is already making a contribution. In this Perspective, we present a roadmap that outlines how citizen science can be integrated into the formal Sustainable Development Goals reporting mechanisms. Success will require leadership from the United Nations, innovation from National Statistical Offices and focus from the citizen-science community to identify the indicators for which citizen science can make a real contribution….(More)”.

Are Randomized Poverty-Alleviation Experiments Ethical?


Peter Singer et al. at Project Syndicate: “Last month, the Nobel Memorial Prize in Economic Sciences was awarded to three pioneers in using randomized controlled trials (RCTs) to fight poverty in low-income countries: Abhijit Banerjee, Esther Duflo, and Michael Kremer. In RCTs, researchers randomly choose a group of people to receive an intervention, and a control group of people who do not, and then compare the outcomes. Medical researchers use this method to test new drugs or surgical techniques, and anti-poverty researchers use it alongside other methods to discover which policies or interventions are most effective. Thanks to the work of Banerjee, Duflo, Kremer, and others, RCTs have become a powerful tool in the fight against poverty.
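For readers unfamiliar with the mechanics, the Python sketch below simulates the basic logic of a two-arm trial: random assignment followed by a difference-in-means comparison. All names and numbers are hypothetical and purely illustrative, not drawn from the laureates’ actual studies.

```python
import random
import statistics

def simulate_rct(n_participants=1000, true_effect=50.0, baseline=200.0,
                 noise=30.0, seed=0):
    """Simulate a two-arm randomized trial: each participant is assigned to
    treatment or control by a coin flip, and the estimated effect is the
    difference in mean outcomes between the two groups (hypothetical units)."""
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n_participants):
        outcome = rng.gauss(baseline, noise)   # outcome absent any intervention
        if rng.random() < 0.5:                 # random assignment
            treatment.append(outcome + true_effect)
        else:
            control.append(outcome)
    return statistics.mean(treatment) - statistics.mean(control)

# One simulated trial recovers an estimate close to the true effect of 50.
print(round(simulate_rct(), 1))
```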

But the use of RCTs does raise ethical questions, because they require randomly choosing who receives a new drug or aid program, and those in the control group often receive no intervention or one that may be inferior. One could object to this on principle, following Kant’s claim that it is always wrong to use human beings as a means to an end; critics have argued that RCTs “sacrifice the well-being of study participants in order to ‘learn.’”

Rejecting all RCTs on this basis, however, would also rule out the clinical trials on which modern medicine relies to develop new treatments. In RCTs, participants in both the control and treatment groups are told what the study is about, sign up voluntarily, and can drop out at any time. To prevent people from choosing to participate in such trials would be excessively paternalistic, and a violation of their personal freedom.

A less extreme version of the criticism argues that while medical RCTs are conducted only if there are genuine doubts about a treatment’s merits, many development RCTs test interventions, such as cash transfers, that are clearly better than nothing. In this case, maybe one should just provide the treatment?

This criticism neglects two considerations. First, it is not always obvious what is better, even for seemingly stark examples like this one. Before RCT evidence to the contrary, for instance, it was feared that cash transfers would lead to conflict and alcoholism.

Second, in many development settings, there are not enough resources to help everyone, creating a natural control group….

A third version of the ethical objection is that participants may actually be harmed by RCTs. For example, cash transfers might cause price inflation and make non-recipients poorer, or make non-recipients envious and unhappy. Such effects might even reach people who never consented to be part of a study.

This is perhaps the most serious criticism, but it, too, does not make RCTs unethical in general….(More)”.

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.
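To illustrate the ensembling idea in miniature, combining several imperfect forecasts can be as simple as averaging their predicted probabilities. The sketch below is a generic illustration of that technique, not the Early Warning Project’s actual methodology; the model names and risk scores are hypothetical.

```python
from statistics import mean

# Hypothetical risk scores (probability of conflict onset) from three
# individual forecasting models; names and numbers are illustrative only.
model_forecasts = {
    "model_a": {"country_x": 0.02, "country_y": 0.30},
    "model_b": {"country_x": 0.10, "country_y": 0.25},
    "model_c": {"country_x": 0.05, "country_y": 0.40},
}

def ensemble(forecasts):
    """Combine individual forecasts by simple unweighted averaging, one of
    the most common ways to build an ensemble prediction."""
    countries = next(iter(forecasts.values())).keys()
    return {c: round(mean(f[c] for f in forecasts.values()), 3) for c in countries}

print(ensemble(model_forecasts))   # {'country_x': 0.057, 'country_y': 0.317}
```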

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, CrowdStrike, a US-based AI unicorn, focuses on cybersecurity, while BenevolentAI uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

Governing Missions in the European Union


Report by Mariana Mazzucato: “This report, Governing Missions, looks at the ‘how’: how to implement and govern a mission-oriented process so that it unleashes the full creativity and ambition potential of R&I policy-making; and how it crowds-in investments from across Europe in the process. The focus is on 3 key questions:

  • How to engage citizens in co-designing, co-creating, co-implementing and co-assessing missions?
  • What are the public sector capabilities and instruments needed to foster a dynamic innovation ecosystem, including the ability of civil servants to welcome experimentation and help governments work outside silos?
  • How can mission-oriented finance and funding leverage and crowd-in other forms of finance, galvanising innovation across actors (public, private and third sector), different manufacturing and service sectors, and across national and transnational levels?…(More)”.

Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices


New report by Stefaan Verhulst, Andrew Young, Michelle Winowatan, and Andrew J. Zahuranec: “To address the challenges of our times, we need both new solutions and new ways to develop those solutions. The responsible use of data will be key toward that end. Since pioneering the concept of “data collaboratives” in 2015, The GovLab has studied and experimented with innovative ways to leverage private-sector data to tackle various societal challenges, such as urban mobility, public health, and climate change.

While we have seen an uptake in normative discussions on how data should be shared, little analysis exists of the actual practice. This paper addresses that gap by answering the following question: What are the variables and models that determine functional access to private sector data for public good? In Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices, we describe the emerging universe of data collaboratives and develop a typology of six practice areas. Our goal is to provide insight into current applications to accelerate the creation of new data collaboratives. The report outlines dozens of examples, as well as a set of recommendations to enable more systematic, sustainable, and responsible data collaboration….(More)”

Beyond the Valley


Book by Ramesh Srinivasan: “How to repair the disconnect between designers and users, producers and consumers, and tech elites and the rest of us: toward a more democratic internet.

In this provocative book, Ramesh Srinivasan describes the internet as both an enabler of frictionless efficiency and a dirty tangle of politics, economics, and other inefficient, inharmonious human activities. We may love the immediacy of Google search results, the convenience of buying from Amazon, and the elegance and power of our Apple devices, but it’s a one-way, top-down process. We’re not asked for our input, or our opinions—only for our data. The internet is brought to us by wealthy technologists in Silicon Valley and China. It’s time, Srinivasan argues, that we think in terms beyond the Valley.

Srinivasan focuses on the disconnection he sees between designers and users, producers and consumers, and tech elites and the rest of us. The recent Cambridge Analytica and Russian misinformation scandals exemplify the imbalance of a digital world that puts profits before inclusivity and democracy. In search of a more democratic internet, Srinivasan takes us to the mountains of Oaxaca, East and West Africa, China, Scandinavia, North America, and elsewhere, visiting the “design labs” of rural, low-income, and indigenous people around the world. He talks to a range of high-profile public figures—including Elizabeth Warren, David Axelrod, Eric Holder, Noam Chomsky, Lawrence Lessig, and the founders of Reddit, as well as community organizers, labor leaders, and human rights activists. To make a better internet, Srinivasan says, we need a new ethic of diversity, openness, and inclusivity, empowering those now excluded from decisions about how technologies are designed, who profits from them, and who are surveilled and exploited by them….(More)”

Comparative Constitution Making


Book edited by Hanna Lerner and David Landau: “In a seminal article more than two decades ago, Jon Elster lamented that despite the large volume of scholarship in related fields, such as comparative constitutional law and constitutional design, there was a severe dearth of work on the process and context of constitution making. Happily, his point no longer holds. Recent years have witnessed a near-explosion of high-quality work on constitution-making processes, across a range of fields including law, political science, and history. This volume attempts to synthesize and expand upon this literature. It offers a number of different perspectives and methodologies aimed at understanding the contexts in which constitution making takes place, its motivations, the theories and processes that guide it, and its effects. The goal of the contributors is not simply to explain the existing state of the field, but also to provide new research on these key questions.

Our aims in this introduction are relatively modest. First, we seek to set up some of the major questions treated by recent research in order to explain how the chapters in this volume contribute to them. We do not aim to give a complete state of the field, but we do lay out what we see as several of the biggest challenges and questions posed by recent scholarship. …(More)”.

The Next Step for Human-Centered Design in Global Public Health


Tracy Johnson, Jaspal S. Sandhu & Nikki Tyler at SSIR: “How do we select the right design partner?” “Where can I find evidence that design really works?” “Can design have any impact beyond products?” These are real questions that we’ve been asked by our public health colleagues who have been exposed to human-centered design. This deeper curiosity indicates a shift in the conversation around human-centered design, compared with common perceptions as recently as five years ago.

The past decade has seen a rapid increase in organizations that use human-centered design for innovation and improvement in health care. However, there have been challenges in determining how to best integrate design into current ways of working. Unfortunately, these challenges have been met with an all-or-nothing response.

In reality, anyone thinking of applying design concepts must first decide how deeply they want design to be integrated into a project. The DesignforHealth community—launched by the Bill & Melinda Gates Foundation and the Center for Innovation and Impact at USAID—defines three types of design integration: spark, ingredient, or end-to-end.

As a spark, design can be the catalyst for teams to work creatively and unlock innovation.

Design can be an ingredient that helps improve an existing product. Using design end-to-end in the development process can address a complex concept such as social vulnerability.

As the field of design in health matures, the next phase will require support for “design consumers.” These are non-designers who take part in a design approach, whether as an inspiring spark, a key ingredient in an established process, or an end-to-end approach.

Here are three important considerations that will help design consumers make the critical decisions that are needed before embarking on their next design journey….(More)”.