Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.
Sustaining Open Data as a Digital Common — Design principles for Common Pool Resources applied to Open Data Ecosystems
Paper by Johan Linåker and Per Runeson: “Digital commons are an emerging phenomenon of increasing importance as we enter a digital society. Open data is one example, making up a pivotal input and foundation for many of today’s digital services and applications. Ensuring sustainable provisioning and maintenance of the data therefore becomes even more important.
We aim to investigate how such provisioning and maintenance can be collaboratively performed in the community surrounding a common. Specifically, we look at Open Data Ecosystems (ODEs), a type of community of actors, openly sharing and evolving data on a technological platform.
We use Elinor Ostrom’s design principles for Common Pool Resources as a lens to systematically analyze the governance of earlier reported cases of ODEs using a theory-oriented software engineering framework.
We find that, while natural commons must regulate consumption, digital commons such as open data maintained by an ODE must stimulate both use and data provisioning. Governance needs to enable such stimulus while also ensuring that the collective action can still be coordinated and managed within the frame of a community’s available maintenance resources. Subtractability is, in this sense, a concern regarding the resources required to maintain the quality and value of the data, rather than the availability of the data itself. Further, we derive empirically based recommended practices, grounded in Ostrom’s design principles, for designing a governance structure that enables sustainable and collaborative provisioning and maintenance of the data.
ODEs are expected to play a role in data provisioning that democratizes the digital society and enables innovation by smaller commercial actors. Our empirically based guidelines are intended to support this development…(More)”.
The New Moral Mathematics
Book Review by Kieran Setiya: “Space is big,” wrote Douglas Adams in The Hitchhiker’s Guide to the Galaxy (1979). “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”
What we do now affects future people in dramatic ways—above all, whether they will exist at all.
Time is big, too—even if we just think on the timescale of a species. We’ve been around for approximately 300,000 years. There are now about 8 billion of us, roughly 7 percent of all humans who have ever lived. You may think that’s a lot, but it’s just peanuts to the future. If we survive for another million years—the longevity of a typical mammalian species—at even a tenth of our current population, there will be 8 trillion more of us. We’ll be outnumbered by future people on the scale of a thousand to one.
What we do now affects those future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.
This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.
Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the late eighteenth and nineteenth centuries—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.
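The expected-value arithmetic in that last comparison can be made explicit. A minimal sketch, using only the figures quoted above (treating 0.0001 percent as a probability of one in a million):

```python
# Expected-value arithmetic behind the comparison above, using only
# the figures quoted in the text: one million lives saved for certain
# versus a 0.0001 percent reduction in extinction risk protecting
# 8 trillion future lives.
lives_saved_today = 1_000_000

extinction_risk_reduction = 0.0001 / 100    # 0.0001 percent = one in a million
future_lives = 8_000_000_000_000            # 8 trillion

expected_future_lives_saved = extinction_risk_reduction * future_lives

# Under naive expected-value maximisation, the risk reduction wins:
# roughly 8 million expected lives against 1 million certain ones.
print(expected_future_lives_saved > lives_saved_today)  # True
```

This is the sense in which tiny probabilities of extinction "swamp" the calculation: the future population is so large that even a one-in-a-million risk reduction outweighs a million certain lives.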
Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.
But even if you think we are allowed to prioritize ourselves and those we love, and not allowed to violate the rights of some in order to help others, shouldn’t you still care about the fate of strangers, even those who do not yet exist? The moral mathematics of aggregate well-being may not be the whole of ethics, but isn’t it a vital part? It belongs to the domain of morality we call “altruism” or “charity.” When we ask what we should do to benefit others, we can’t ignore the disquieting fact that the others who occupy the future may vastly outnumber those who occupy the present, and that their very existence depends on us.
From this point of view, it’s an urgent question how what we do today will affect the further future—urgent especially when it comes to what Nick Bostrom, the philosopher who directs the Future of Humanity Institute, calls the “existential risk” of human extinction. This is the question MacAskill takes up in his new book, What We Owe the Future, a densely researched but surprisingly light read that ranges from omnicidal pandemics to our new AI overlords without ever becoming bleak…(More)”.
Localising AI for crisis response
Report by Aleks Berditchevskaia, Kathy Peach, and Isabel Stewart: “Putting power back in the hands of frontline humanitarians and local communities.
This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.
The two collective crisis intelligence tool prototypes developed were:
- NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis.
- Report and Respond: a French language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and receive real-time guidance on appropriate responses.
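The core loop of a Report-and-Respond-style tool, matching an incoming message against a vetted rumour database and returning pre-approved guidance, can be sketched in a few lines. Everything below is invented for illustration: the real tool is French-language and SMS-based, and the entries, responses, and fuzzy-matching threshold here are all hypothetical.

```python
# Illustrative sketch of a rumour-checking lookup: match an incoming
# message against a small vetted rumour database and return guidance.
# The rumour entries and responses below are invented.
import difflib

RUMOUR_DB = {
    "drinking hot water cures covid":
        "False. Hot water does not cure COVID-19; advise standard prevention measures.",
    "vaccines change your dna":
        "False. COVID-19 vaccines do not alter DNA.",
    "masks reduce transmission":
        "True. Correct mask use reduces transmission.",
}

def respond(sms_text):
    """Return vetted guidance for the closest known rumour, or a fallback."""
    match = difflib.get_close_matches(sms_text.lower(), RUMOUR_DB, n=1, cutoff=0.6)
    if match:
        return RUMOUR_DB[match[0]]
    return "No match found. Forwarding to a human moderator."
```

A volunteer's message like "Drinking hot water cures COVID" would fuzzily match the first entry and return its vetted response; unrecognised messages fall through to a human, which is roughly the real-time-guidance loop the report describes.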
Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.
The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, IFRC Solferino Academy, and Open Lab Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.
We found that collective crisis intelligence:
- has the potential to make local humanitarian action more timely and appropriate to local needs.
- can transform locally-generated data to drive new forms of (anticipatory) action.
We found that participatory AI:
- can overcome several critiques and limitations of AI, while also helping to improve model performance.
- helps to surface tensions between the assumptions and standards set by AI gatekeepers and the pragmatic reality of implementation.
- creates opportunities for building and sharing new capabilities among frontline staff and data scientists.
We also validated that collective crisis intelligence and participatory AI can help increase trust in AI tools, but more research is needed to untangle the factors that were responsible…(More)”.
Protecting Children in Cyberconflicts
Paper by Eleonore Pauwels: “Just as digital technologies have transformed myriad aspects of daily life, they are now transforming war, politics and the social fabric.
This rapid analysis examines the ways in which cyberconflict adversely affects children and offers actions that could strengthen safeguards to protect them.
Cyberconflict can impact children directly or indirectly. Harms range from direct targeting for influence and recruitment into armed forces and armed groups, to personal data manipulation and theft, to cyber attacks on infrastructure across sectors critical to child well-being such as education and health facilities.
Many experts believe that the combination of existing international humanitarian law, international criminal law, human rights law, and child rights law is adequate to address the emerging issues posed by cyberconflict. Nevertheless, several key challenges persist. Attributing cyber attacks to specific actors and ensuring accountability have proven challenging, particularly in the so-called grey zone between war and peace.
There is an urgent need to clarify how child rights apply in the digital space and for Member States to place these rights at the centre of regulatory frameworks and legislation on new technologies…(More)”.
Unsustainable Alarmism
Essay by Taylor Dotson: “Covid is far from the only global challenge we see depicted as a cataclysm in the making. In 1968, Paul Ehrlich predicted impending famine and social collapse driven by overpopulation. He compared the threat to a ticking bomb — the “population bomb.” And the claim that only a few years remain to prevent climate doom has become a familiar refrain. The recent film Don’t Look Up, about a comet barreling toward Earth, is obviously meant as an allegory for climate catastrophe.
But catastrophism fails to capture the complexities of problems that play out over a long time scale, like Covid and climate change. In a tornado or a flood, which are not only undeniably serious but also require immediate action to prevent destruction, people drop political disputes to do what is necessary to save lives. They bring their loved ones to higher ground. They stack sandbags. They gather in tornado shelters. They evacuate. Covid began as a flood in early 2020, but once a danger becomes long and grinding, catastrophism loses its purchase, and more measured public thinking is required.
Even if the extension of catastrophic rhetoric to longer-term and more complex problems is well-intentioned, it unavoidably implies that something is morally or mentally wrong with the people who fail to take heed. It makes those who are not already horrified, who do not treat the crisis as an undeniable, act-now-or-never calamity, harder to comprehend: What idiot wouldn’t do everything possible to avert catastrophe? This kind of thinking is why global challenges are no longer multifaceted dilemmas to negotiate together; they have become conflicts between those who recognize the self-evident truth and those who have taken flight from reality….(More)”.
Towards Human-Centric Algorithmic Governance
Blog by Zeynep Engin: “It is no longer news to say that the capabilities afforded by Data Science, AI and their associated technologies (such as Digital Twins, Smart Cities, Ledger Systems and other platforms) are poised to revolutionise governance, radically transforming the way democratic processes work, citizen services are provided, and justice is delivered. Emerging applications range from the way election campaigns are run and how crises at population level are managed (e.g. pandemics) to everyday operations like simple parking enforcement and traffic management, and to decisions at critical individual junctures, such as hiring or sentencing decisions. What it means to be a ‘human’ is also a hot topic for both scholarly and everyday discussions, since our societal interactions and values are also shifting fast in an increasingly digital and data-driven world.
As a millennial who grew up in a ‘developing’ economy in the ’90s and later established a cross-sector career in a ‘developed’ economy in the fields of data for policy and algorithmic governance, I believe I can credibly claim pertinent, hands-on experience of the transformation from a fully analogue world into a largely digital one. I started off struggling to find enough printed information to cite in my secondary-school term papers, and have gradually adapted to struggling to extract useful information from the practically unlimited resources available online today. The world has become a lot more connected: communities are formed online, goods and services are customised to individual tastes and preferences, and work and education are increasingly hybrid, reducing dependency on physical environment, geography and time zones. Despite all these developments in nearly every aspect of our lives, one thing that has persisted in the face of this change is the nature of collective decision-making, particularly at the civic/governmental level. It still comprises the same election cycles with more or less similar political incentives and working practices, and the same types of politicians, bureaucracies, hierarchies and networks making and executing important (and often suboptimal) decisions on behalf of the public. Unelected private-sector stakeholders, in the meantime, are quick to fill the growing gap — they increasingly make policies that affect large populations and define the public discourse, primarily to maximise their profit behind their IP-protection walls…(More)”.
Landsat turns 50: How satellites revolutionized the way we see – and protect – the natural world
Article by Stacy Morford: “Fifty years ago, U.S. scientists launched a satellite that dramatically changed how we see the world.
It captured images of Earth’s surface in minute detail, showing how wildfires burned landscapes, how farms erased forests, and many other ways humans were changing the face of the planet.
The first satellite in the Landsat series launched on July 23, 1972. Eight others followed, providing the same views so changes could be tracked over time, but with increasingly powerful instruments. Landsat 8 and Landsat 9 are orbiting the planet today, and NASA and the U.S. Geological Survey are planning a new Landsat mission.
The images and data from these satellites are used to track deforestation and changing landscapes around the world, locate urban heat islands, and understand the impact of new river dams, among many other projects. Often, the results help communities respond to risks that may not be obvious from the ground.
Here are three examples of Landsat in action, from The Conversation’s archive.
Tracking changes in the Amazon
When work began on the Belo Monte Dam project in the Brazilian Amazon in 2015, Indigenous tribes living along the Big Bend of the Xingu River started noticing changes in the river’s flow. The water they relied on for food and transportation was disappearing.
Upstream, a new channel would eventually divert as much as 80% of the water to the hydroelectric dam, bypassing the bend.
The consortium that runs the dam argued that there was no scientific proof that the change in water flow harmed fish.
But there is clear proof of the Belo Monte Dam project’s impact – from above, write Pritam Das, Faisal Hossain, Hörður Helgason and Shahzaib Khan at the University of Washington. Using satellite data from the Landsat program, the team showed how the dam dramatically altered the hydrology of the river…
It’s hot in the city – and even hotter in some neighborhoods
Landsat’s instruments can also measure surface temperatures, allowing scientists to map heat risk street by street within cities as global temperatures rise.
“Cities are generally hotter than surrounding rural areas, but even within cities, some residential neighborhoods get dangerously warmer than others just a few miles away,” writes Daniel P. Johnson, who uses satellites to study the urban heat island effect at Indiana University.
Neighborhoods with more pavement and buildings and fewer trees can be 10 degrees Fahrenheit (5.5 C) or more warmer than leafier neighborhoods, Johnson writes. He found that the hottest neighborhoods tend to be low-income, to have majority Black or Hispanic residents, and to have been subjected to redlining, the discriminatory practice once used to deny loans in racial and ethnic minority communities…(More)”.
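The street-by-street heat maps described here start from Landsat's thermal band. A minimal sketch of the standard conversion from a raw Band 10 pixel value to at-sensor brightness temperature, using the calibration constants published in USGS scene metadata for Landsat 8 (treat the exact values here as illustrative):

```python
# Convert a raw Landsat 8 Band 10 pixel value (digital number) to
# at-sensor brightness temperature, the starting point for urban
# heat-island maps. Constants are the standard Band 10 values
# published in USGS scene metadata (MTL files).
import math

ML, AL = 3.342e-4, 0.1              # radiance rescaling: gain and offset
K1, K2 = 774.8853, 1321.0789        # thermal conversion constants

def brightness_temp_c(dn):
    """Digital number to brightness temperature in degrees Celsius."""
    radiance = ML * dn + AL                       # top-of-atmosphere spectral radiance
    kelvin = K2 / math.log(K1 / radiance + 1.0)   # inverse Planck relation
    return kelvin - 273.15

# A mid-range pixel value yields a plausible warm-day surface temperature.
print(round(brightness_temp_c(30000), 1))
```

Applying this pixel by pixel (plus emissivity and atmospheric corrections in practice) is what lets researchers like Johnson compare temperatures between neighborhoods just a few miles apart.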
Digital Literacy Doesn’t Stop the Spread of Misinformation
Article by David Rand and Nathaniel Sirlin: “There has been tremendous concern recently over misinformation on social media. It was a pervasive topic during the 2020 U.S. presidential election, continues to be an issue during the COVID-19 pandemic and plays an important part in Russian propaganda efforts in the war on Ukraine. This concern is plenty justified, as the consequences of believing false information are arguably shaping the future of nations and greatly affecting our individual and collective health.
One popular theory about why some people fall for misinformation they encounter online is that they lack digital literacy skills, a nebulous term that describes how a person navigates digital spaces. Someone lacking digital literacy skills, the thinking goes, may be more susceptible to believing—and sharing—false information. As a result, less digitally literate people may play a significant role in the spread of misinformation.
This argument makes intuitive sense. Yet very little research has actually investigated the link between digital literacy and susceptibility to believe false information. There’s even less understanding of the potential link between digital literacy and what people share on social media. As researchers who study the psychology of online misinformation, we wanted to explore these potential associations….
When we looked at the connection between digital literacy and the willingness to share false information with others through social media, however, the results were different. People who were more digitally literate were just as likely to say they’d share false articles as people who lacked digital literacy. Like the first finding, the (lack of) connection between digital literacy and sharing false news was not affected by political party affiliation or whether the topic was politics or the pandemic…(More)”.
Measuring sustainable tourism with online platform data
Paper by Felix J. Hoffmann, Fabian Braesemann & Timm Teubner: “Sustainability in tourism is a topic of global relevance, finding multiple mentions in the United Nations Sustainable Development Goals. The complex task of balancing tourism’s economic, environmental, and social effects requires detailed and up-to-date data. This paper investigates whether online platform data can be employed as an alternative data source in sustainable tourism statistics. Using a web-scraped dataset from a large online tourism platform, a sustainability label for accommodations can be predicted reasonably well with machine learning techniques. The algorithmic prediction of accommodations’ sustainability using online data can provide a cost-effective and accurate measure that makes it possible to track developments in tourism sustainability across the globe with high spatial and temporal granularity…(More)”.
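The paper's core idea, predicting a sustainability label from platform features with machine learning, can be illustrated with a toy classifier. The features, data, and model below are all invented; they are not the paper's dataset or method, which relies on web-scraped platform data at a much larger scale.

```python
# Toy illustration: predict a binary sustainability label for
# accommodations from numeric features. Features, data, and model
# are invented for illustration only.
import math
import random

random.seed(0)

def make_accommodation(sustainable):
    # Hypothetical features: share of "green" amenities listed,
    # review score, and normalised price per night.
    base = (0.8, 0.9, 0.4) if sustainable else (0.2, 0.7, 0.6)
    features = [b + random.uniform(-0.1, 0.1) for b in base]
    return features, 1 if sustainable else 0

data = [make_accommodation(i % 2 == 0) for i in range(200)]

# A tiny logistic-regression classifier trained by stochastic
# gradient descent on the synthetic data.
w, b = [0.0, 0.0, 0.0], 0.0
learning_rate = 0.5
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "sustainable"
        gradient = p - y                 # derivative of log loss w.r.t. z
        w = [wi - learning_rate * gradient * xi for wi, xi in zip(w, x)]
        b -= learning_rate * gradient

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is only the pipeline shape: scraped features in, a learned decision rule, and a label out that can then be aggregated over regions and time, which is what gives the approach its spatial and temporal granularity.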