Empathy and cooperation go hand in hand


Penn Today: “It’s a big part of what makes us human: we cooperate. But humans aren’t saints. Most of us are more likely to help someone we consider good than someone we consider a jerk. 

How we form these moral assessments of others has a lot to do with cultural and social norms, as well as our capacity for empathy, the extent to which we can take on the perspective of another person.

In a new analysis, researchers from the University of Pennsylvania investigate cooperation with an evolutionary approach. Using game-theory-driven models, they show that a capacity for empathy fosters cooperation, according to senior author Joshua Plotkin, an evolutionary biologist. The models also show that the extent to which empathy promotes cooperation depends on a given society’s system for moral evaluation. 

“Having not just the capacity but the willingness to take into account someone else’s perspective when forming moral judgments tends to promote cooperation,” says Plotkin. 

What’s more, the group’s analysis points to a heartening conclusion. All else being equal, empathy tends to spread throughout a population under most scenarios. 

“We asked, ‘can empathy evolve?’” explains Arunas Radzvilavicius, the study’s lead author and a postdoctoral researcher who works with Plotkin. “What if individuals start copying the empathetic way of observing each other’s interactions? And we saw that empathy soared through the population.”

Plotkin and Radzvilavicius coauthored the study, published today in eLife, with Alexander Stewart, an assistant professor at the University of Houston.

Plenty of scientists have probed the question of why individuals cooperate through indirect reciprocity, a scenario in which one person helps another not because of a direct quid pro quo but because they know that person to be “good.” But the Penn group gave the study a nuance that others had not explored. Whereas other studies have assumed that reputations are universally known, Plotkin, Radzvilavicius, and Stewart realized this did not realistically describe human society, where individuals may differ in their opinion of others’ reputations.

“In large, modern societies, people disagree a lot about each other’s moral reputations,” Plotkin says. 

The researchers incorporated this variation in opinions into their models, which imagine someone choosing either to donate or not to donate to a second person based on that individual’s reputation. The researchers found that cooperation was less likely to be sustained when people disagree about each other’s reputations.

That’s when they decided to incorporate empathy, or theory of mind, which, in the context of the study, entails the ability to understand the perspective of another person.
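The dynamics described above can be illustrated with a toy simulation. This is not the authors’ actual model: the social norm (“stern judging”), population size, and update rule here are assumptions chosen for illustration. Each agent holds a private opinion of every other agent; a donor helps only recipients it considers good, and observers judge the donor either from their own view of the recipient or, with a given probability of empathy, from the donor’s perspective.

```python
import random

def simulate(n_agents, n_rounds, empathy, seed=0):
    """Donation game with private reputations under an assumed 'stern judging' norm.

    image[i][j] == 1 means agent i privately regards agent j as good.
    Each round a random donor meets a random recipient; the donor donates
    only if it regards the recipient as good. Every observer then updates
    its opinion of the donor, judging either from its own view of the
    recipient or, with probability `empathy`, from the donor's view.
    Returns the fraction of encounters in which a donation occurred.
    """
    rng = random.Random(seed)
    # Start from disagreement: opinions are assigned at random.
    image = [[rng.randint(0, 1) for _ in range(n_agents)] for _ in range(n_agents)]
    donations = 0
    for _ in range(n_rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        donated = image[donor][recipient] == 1
        donations += donated
        for observer in range(n_agents):
            # Empathetic observers adopt the donor's perspective on the recipient.
            if rng.random() < empathy:
                view = image[donor][recipient]
            else:
                view = image[observer][recipient]
            # Stern judging: donating to the good and refusing the bad both earn
            # a good reputation; anything else earns a bad one.
            image[observer][donor] = 1 if donated == (view == 1) else 0
    return donations / n_rounds

if __name__ == "__main__":
    print(f"no empathy:   {simulate(30, 5000, empathy=0.0):.2f}")
    print(f"full empathy: {simulate(30, 5000, empathy=1.0):.2f}")
```

In this sketch, without empathy, disagreement about reputations persists and cooperation hovers near chance; with full empathy, observers always judge the donor consistently with the donor’s own view, reputations converge, and cooperation approaches 100%, mirroring the qualitative finding reported in the study.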

Doing so allowed cooperation to win out over more selfish strategies….(More)”.

Open Justice: Public Entrepreneurs Learn to Use New Technology to Increase the Efficiency, Legitimacy, and Effectiveness of the Judiciary


The GovLab: “Open justice is a growing movement to leverage new technologies – including big data, digital platforms, blockchain and more – to improve legal systems by making the workings of courts easier to understand, scrutinize and improve. Through the use of new technology, open justice innovators are enabling greater efficiency, fairness, accountability and a reduction in corruption in the third branch of government. For example, the open data portal ‘Atviras Teismas’ Lithuania (translated ‘open court’ Lithuania) is a platform for monitoring courts and judges through performance metrics. This portal serves to make the courts of Lithuania transparent and benefits both courts and citizens by presenting comparative data on the Lithuanian Judiciary.

To promote more Open Justice projects, the GovLab, in partnership with the Electoral Tribunal of the Federal Judiciary (TEPJF) of Mexico, launched a historic, first-of-its-kind online course on Open Justice. Designed primarily for lawyers, judges, and public officials – but also intended to appeal to technologists and members of the public – the Spanish-language course consists of 10 modules.

Each of the ten modules comprises:

  1. A short video-based lecture
  2. An original Open Justice reader
  3. Associated additional readings
  4. A self-assessment quiz
  5. A demonstration of a platform or tool
  6. An interview with a global practitioner

Among those featured in the interviews are Felipe Moreno of Jusbrasil, Justin Erlich of OpenJustice California, Liam Hayes of Aurecon, UK, Steve Ghiassi of Legaler, Australia, and Sara Castillo of Poder Judicial, Chile….(More)”.

Building Trust in Human Centric Artificial Intelligence


Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: “Artificial intelligence (AI) has the potential to transform our world for the better: it can improve healthcare, reduce energy consumption, make cars safer, and enable farmers to use water and natural resources more efficiently. AI can be used to predict environmental and climate change, improve financial risk management and provide the tools to manufacture, with less waste, products tailored to our needs. AI can also help to detect fraud and cybersecurity threats, and enable law enforcement agencies to fight crime more efficiently.

AI can benefit the whole of society and the economy. It is a strategic technology that is now being developed and used at a rapid pace across the world. Nevertheless, AI also brings with it new challenges for the future of work, and raises legal and ethical questions.

To address these challenges and make the most of the opportunities which AI offers, the Commission published a European strategy in April 2018. The strategy places people at the centre of the development of AI — human-centric AI. It is a three-pronged approach to boost the EU’s technological and industrial capacity and AI uptake across the economy, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.

To deliver on the AI strategy, the Commission developed together with Member States a coordinated plan on AI, which it presented in December 2018, to create synergies, pool data — the raw material for many AI applications — and increase joint investments. The aim is to foster cross-border cooperation and mobilise all players to increase public and private investments to at least EUR 20 billion annually over the next decade.

The Commission doubled its investments in AI in Horizon 2020 and plans to invest EUR 1 billion annually from Horizon Europe and the Digital Europe Programme, notably in support of common data spaces in health, transport and manufacturing; large experimentation facilities such as smart hospitals and infrastructures for automated vehicles; and a strategic research agenda.

To implement such a common strategic research, innovation and deployment agenda, the Commission has intensified its dialogue with all relevant stakeholders from industry, research institutes and public authorities. The new Digital Europe programme will also be crucial in helping to make AI available to small and medium-sized enterprises across all Member States through digital innovation hubs, strengthened testing and experimentation facilities, data spaces and training programmes.

Building on its reputation for safe and high-quality products, Europe’s ethical approach to AI strengthens citizens’ trust in the digital development and aims at building a competitive advantage for European AI companies. The purpose of this Communication is to launch a comprehensive piloting phase involving stakeholders on the widest scale in order to test the practical implementation of ethical guidance for AI development and use…(More)”.

The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand


NBER Paper by Daron Acemoglu and Pascual Restrepo: “Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the “right” kind of AI with better economic and social outcomes….(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusive promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

Rethink government with AI


Helen Margetts and Cosmina Dorobantu at Nature: “People produce more than 2.5 quintillion bytes of data each day. Businesses are harnessing these riches using artificial intelligence (AI) to add trillions of dollars in value to goods and services each year. Amazon dispatches items it anticipates customers will buy to regional hubs before they are purchased. Thanks to the vast extractive might of Google and Facebook, every bakery and bicycle shop is the beneficiary of personalized targeted advertising.

But governments have been slow to apply AI to hone their policies and services. The reams of data that governments collect about citizens could, in theory, be used to tailor education to the needs of each child or to fit health care to the genetics and lifestyle of each patient. They could help to predict and prevent traffic deaths, street crime or the necessity of taking children into care. Huge costs of floods, disease outbreaks and financial crises could be alleviated using state-of-the-art modelling. All of these services could become cheaper and more effective.

This dream seems rather distant. Governments have long struggled with much simpler technologies. Flagship policies that rely on information technology (IT) regularly flounder. The Affordable Care Act of former US president Barack Obama nearly crumbled in 2013 when HealthCare.gov, the website enabling Americans to enrol in health insurance plans, kept crashing. Universal Credit, the biggest reform to the UK welfare state since the 1940s, is widely regarded as a disaster because of its failure to pay claimants properly. It has also wasted £837 million (US$1.1 billion) on developing one component of its digital system that was swiftly decommissioned. Canada’s Phoenix pay system, introduced in 2016 to overhaul the federal government’s payroll process, has remunerated 62% of employees incorrectly in each fiscal year since its launch. And My Health Record, Australia’s digital health-records system, saw more than 2.5 million people opt out by the end of January this year over privacy, security and efficacy concerns — roughly 1 in 10 of those who were eligible.

Such failures matter. Technological innovation is essential for the state to maintain its position of authority in a data-intensive world. The digital realm is where citizens live and work, shop and play, meet and fight. Prices for goods are increasingly set by software. Work is mediated through online platforms such as Uber and Deliveroo. Voters receive targeted information — and disinformation — through social media.

Thus the core tasks of governments, such as enforcing regulation, setting employment rights and ensuring fair elections, require an understanding of data and algorithms. Here we highlight the main priorities, drawn from our experience of working with policymakers at The Alan Turing Institute in London….(More)”.

Innovation Meets Citizen Science


Caroline Nickerson at SciStarter: “Citizen science has been around as long as science, but innovative approaches are opening doors to more and deeper forms of public participation.

Below, our editors spotlight a few projects that feature new approaches, novel research, or low-cost instruments. …

Colony B: Unravel the secrets of microscopic life! Colony B is a mobile gaming app developed at McGill University that enables you to contribute to research on microbes. Collect microbes and grow your colony in a fast-paced puzzle game that advances important scientific research.

AirCasting: AirCasting is an open-source, end-to-end solution for collecting, displaying, and sharing health and environmental data using your smartphone. The platform consists of wearable sensors, including a palm-sized air quality monitor called the AirBeam, that detect and report changes in your environment. (Android only.)

LingoBoingo: Getting computers to understand language requires large amounts of linguistic data and “correct” answers to language tasks (what researchers call “gold standard annotations”). Simply by playing language games online, you can help archive languages and create the linguistic data used by researchers to improve language technologies. These games are in English, French, and a new “multi-lingual” category.

TreeSnap: Help our nation’s trees and protect human health in the process. Invasive diseases and pests threaten the health of America’s forests. With the TreeSnap app, you can record the location and health of particular tree species–those unharmed by diseases that have wiped out other species. Scientists then use the collected information to locate candidates for genetic sequencing and breeding programs. Tag trees you find in your community, on your property, or out in the wild to help scientists understand forest health….(More)”.

This tech tells cities when floods are coming–and what they will destroy


Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts–from levee building to dam releases–will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.

[Image: One Concern]

So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


Platform Surveillance


Editorial by David Murakami Wood and Torin Monahan of Special Issue of Surveillance and Society: “This editorial introduces this special responsive issue on “platform surveillance.” We develop the term platform surveillance to account for the manifold and often insidious ways that digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data. In the process, digital platforms become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact.

Emergent forms of platform capitalism portend new governmentalities, as they gradually draw existing institutions into alignment or harmonization with the logics of platform surveillance while also engendering subjectivities (e.g., the gig-economy worker) that support those logics. Because surveillance is essential to the operations of digital platforms, because it structures the forms of governance and capital that emerge, the field of surveillance studies is uniquely positioned to investigate and theorize these phenomena….(More)”.

Responsible Data Governance of Neuroscience Big Data


Paper by B. Tyr Fothergill et al: “Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations.

Neuroscience innovation relies upon neuroinformatics, large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages.

Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance, and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures applying diverse legal systems emphasising different aspects. Neuroscience big data require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of ‘responsible data governance’, applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by governance of neuroscience big data in the Human Brain Project (HBP)….(More)”.