AFD Research Paper: “The use of frontier technologies in the field of sustainability is likely to improve its visibility and the quality of information available to decision makers. This paper explores the possibility of using artificial intelligence to analyze Public Development Banks’ annual reports…(More)”.
How Confucianism could put fear about Artificial Intelligence to bed
Article by Tom Cassauwers: “Western culture has had a long history of individualism, warlike use of technology, Christian apocalyptic thinking and a strong binary between body and soul. These elements might explain the West’s obsession with the technological apocalypse and its opposite: techno-utopianism. In Asia, it’s now common to explain China’s dramatic rise as a leader in AI and robotics as a consequence of state support from the world’s largest economy. But what if — in addition to the massive state investment — China and other Asian nations have another advantage, in the form of Eastern philosophies?
There’s a growing view among independent researchers and philosophers that Confucianism and Buddhism could offer healthy alternative perspectives on the future of technology. And with AI and robots rapidly increasing in importance across industries, it’s time for the West to turn to the East for answers…
So what would a non-Western way of thinking about tech look like? First, there might be a different interpretation of personhood. Both Confucianism and Buddhism potentially open up the way for nonhumans to reach the status of humans. In Confucianism, the state of reaching personhood “is not a given. You need to work to achieve it,” says Wong. The person’s attitude toward certain ethical virtues determines whether or not they reach the status of a human. That also means that “we can attribute personhood to nonhuman things like robots when they play ethically relevant roles and duties as humans,” Wong adds.
Buddhism offers a similar argument, where robots can hypothetically achieve a state of enlightenment, which is present everywhere, not only in humans — an argument made as early as the 1970s by Japanese roboticist Masahiro Mori. It may not be a coincidence that robots enjoy some of their highest social acceptance in Japan, with its Buddhist heritage. “Westerners are generally reluctant about the nature of robotics and AI, considering only humans as true beings, while Easterners more often consider devices as similar to humans,” says Jordi Vallverdú, a professor of philosophy at the Autonomous University of Barcelona….(More)”
Antitrust, Regulation, and User Union in the Era of Digital Platforms and Big Data
Paper by Lin William Cong and Simon Mayer: “We model platform competition with endogenous data generation, collection, and sharing, thereby providing a unifying framework to evaluate data-related regulation and antitrust policies. Data are jointly produced from users’ economic activities and platforms’ investments in data infrastructure. Data improves service quality, causing a feedback loop that tends to concentrate market power. Dispersed users do not internalize the impact of their data contribution on (i) service quality for other users, (ii) market concentration, and (iii) platforms’ incentives to invest in data infrastructure, causing inefficient over- or under-collection of data. Data sharing proposals, user privacy protections, platform commitments, and markets for data cannot fully address these inefficiencies. We propose and analyze user union, which represents and coordinates users, as an effective solution for antitrust and consumer protection in the digital era…(More)”.
Wicked Problems Might Inspire Greater Data Sharing
Paper by Susan Ariel Aaronson: “In 2021, the United Nations Development Program issued a plea in its 2021 Digital Economy Report: “Global data-sharing can help address major global development challenges such as poverty, health, hunger and climate change. …Without global cooperation on data and information, research to develop the vaccine and actions to tackle the impact of the pandemic would have been a much more difficult task. Thus, in the same way as some data can be public goods, there is a case for some data to be considered as global public goods, which need to be addressed and provided through global governance.” (UNDP: 2021, 178). Global public goods are goods and services with benefits and costs that potentially extend to all countries, people, and generations. Global data sharing can also help solve what scholars call wicked problems—problems so complex that they require innovative, cost-effective and global mitigating strategies. Wicked problems are problems that no one knows how to solve without creating further problems. Hence, policymakers must find ways to encourage greater data sharing among entities that hold large troves of various types of data, while protecting that data from theft, manipulation, etc. Many factors impede global data sharing for public good purposes; this analysis focuses on two.
First, policymakers generally don’t think about data as a global public good; they view data as a commercial asset that they should nurture and control. While they may understand that data can serve the public interest, they are more concerned with using data to serve their country’s economic interest. Second, many leaders of civil society and business see the data they have collected as proprietary. So far, many leaders of private entities with troves of data are not convinced that their organization will benefit from such sharing. At the same time, companies do voluntarily share some data for social good purposes.
However, data cannot meet its public good purpose if it is not shared among societal entities. Moreover, if policymakers treat data as a sovereign asset, they are unlikely to encourage data sharing across borders oriented towards addressing shared problems. Consequently, society will be less able to use data both as a commercial asset and as a resource to enhance human welfare. As the Bennett Institute and ODI have argued, “value comes from data being brought together, and that requires organizations to let others use the data they hold.” But that also means the entities that collected the data may not accrue all of the benefits from that data (Bennett Institute and ODI: 2020a: 4). In short, private entities are not sufficiently incentivized to share data for the global public good…(More)”.
Cross-border Data Flows: Taking Stock of Key Policies and Initiatives
OECD Report: “As data become an important resource for the global economy, it is important to strengthen trust to facilitate data sharing domestically and across borders. Significant momentum for related policies in the G7 and G20 has gone hand in hand with a wide range of – often complementary – national and international initiatives and the development of technological and organisational measures. Advancing a common understanding and dialogue among G7 countries and beyond is crucial to support coordinated and coherent progress in policy and regulatory approaches that leverage the full potential of data for global economic and social prosperity. This report takes stock of key policies and initiatives on cross-border data flows to inform and support G7 countries’ engagement on this policy agenda…(More)”.
Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”
Paper by Eleanor Drage & Kerry Mackereth: “In this paper, we analyze two key claims offered by recruitment AI companies in relation to the development and deployment of AI-powered HR tools: (1) recruitment AI can objectively assess candidates by removing gender and race from their systems, and (2) this removal of gender and race will make recruitment fairer, help customers attain their DEI goals, and lay the foundations for a truly meritocratic culture to thrive within an organization. We argue that these claims are misleading for four reasons: First, attempts to “strip” gender and race from AI systems often misunderstand what gender and race are, casting them as isolatable attributes rather than broader systems of power. Second, the attempted outsourcing of “diversity work” to AI-powered hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address the systemic problems within organizations. Third, AI hiring tools’ supposedly neutral assessment of candidates’ traits belies the power relationship between the observer and the observed. Specifically, the racialized history of character analysis and its associated processes of classification and categorization play into longer histories of taxonomical sorting and reflect the current demands and desires of the job market, even when not explicitly conducted along the lines of gender and race. Fourth, recruitment AI tools help produce the “ideal candidate” that they supposedly identify by constructing associations between words and people’s bodies. From these four conclusions, we offer three key recommendations to AI HR firms, their customers, and policy makers going forward…(More)”.
Global healthcare fairness: We should be sharing more, not less, data
Paper by Kenneth P. Seastedt et al: “The availability of large, deidentified health datasets has enabled significant innovation in using machine learning (ML) to better understand patients and their diseases. However, questions remain regarding the true privacy of this data, patient control over their data, and how we regulate data sharing in a way that does not encumber progress or further potentiate biases for underrepresented populations. After reviewing the literature on potential reidentifications of patients in publicly available datasets, we argue that the cost—measured in terms of access to future medical innovations and clinical software—of slowing ML progress is too great to limit sharing data through large publicly available databases for concerns of imperfect data anonymization. This cost is especially great for developing countries where the barriers preventing inclusion in such databases will continue to rise, further excluding these populations and increasing existing biases that favor high-income countries. Preventing artificial intelligence’s progress towards precision medicine and sliding back to clinical practice dogma may pose a larger threat than concerns of potential patient reidentification within publicly available datasets. While the risk to patient privacy should be minimized, we believe this risk will never be zero, and society has to determine an acceptable risk threshold below which data sharing can occur—for the benefit of a global medical knowledge system….(More)”.
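The “imperfect data anonymization” the authors weigh can be made concrete with a toy k-anonymity check — a hypothetical sketch for illustration, not a method from the paper. Even after direct identifiers are removed, a record whose remaining quasi-identifiers (age band, ZIP prefix) are unique in the dataset can be reidentified by anyone who knows those attributes about a patient:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the dataset's k-anonymity level: the size of the smallest
    group of records sharing identical quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy deidentified health records: names removed, quasi-identifiers remain.
records = [
    {"age_band": "30-39", "zip3": "021", "diagnosis": "flu"},
    {"age_band": "30-39", "zip3": "021", "diagnosis": "asthma"},
    {"age_band": "40-49", "zip3": "100", "diagnosis": "diabetes"},
]

# k = 1: the lone 40-49/100 record is unique, hence reidentifiable.
print(k_anonymity(records, ["age_band", "zip3"]))  # → 1
```

This is why the paper treats the reidentification risk as never reaching zero: guaranteeing k > 1 for every combination of quasi-identifiers requires coarsening or suppressing data, which trades away exactly the utility that makes large shared datasets valuable for ML.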
Investment Case: Multiplying Progress Through Data Ecosystems
Report by Dalberg: “Data and data ecosystems enable decision makers to improve lives and livelihoods by better understanding the world around them and acting in more effective and targeted ways. In a time of growing crises and shrinking budgets, it is imperative that every dollar is spent in the most efficient and equitable way. Data ecosystems provide decision makers with the information needed to assess and predict challenges, identify and customize solutions, and monitor and evaluate real-time progress. Together, this enables decisions that are more collaborative, effective, efficient, equitable, timely, and transparent. And this is only getting easier—ongoing advances in our ability to harness and apply data are creating opportunities to better target resources and create even more transformative impact…(More)”.
Eliminate data asymmetries to democratize data use
Article by Rahul Matthan: “Anyone who possesses a large enough store of data can reasonably expect to glean powerful insights from it. These insights are more often than not used to enhance advertising revenues or ensure greater customer stickiness. In other instances, they’ve been subverted to alter our political preferences and manipulate us into taking decisions we otherwise may not have.
The ability to generate insights places those who have access to these data sets at a distinct advantage over those whose data is contained within them. It allows the former to benefit from the data in ways that the latter may not even have thought possible when they consented to provide it. Given how easily these insights can be used to harm those to whom the data pertains, there is a need to mitigate the effects of this data asymmetry.
Privacy law attempts to do this by providing data principals with tools they can use to exert control over their personal data. It requires data collectors to obtain informed consent from data principals before collecting their data and forbids them from using it for any purpose other than that which has been previously notified. This is why, even if that consent has been obtained, data fiduciaries cannot collect more data than is absolutely necessary to achieve the stated purpose and are only allowed to retain that data for as long as is necessary to fulfil the stated purpose.
In India, we’ve gone one step further and built techno-legal solutions to help reduce this data asymmetry. The Data Empowerment and Protection Architecture (DEPA) framework makes it possible to extract data from the silos in which they reside and transfer it on the instructions of the data principal to other entities, which can then use it to provide other services to the data principal. This data micro-portability dilutes the historical advantage that incumbents enjoy on account of collecting data over the entire duration of their customer engagement. It eliminates data asymmetries by establishing the infrastructure that creates a competitive market for data-based services, allowing data principals to choose from a range of options as to how their data could be used for their benefit by service providers.
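The consent-mediated transfer at the heart of DEPA can be sketched in miniature. Everything below (class name, fields, purpose strings) is illustrative and assumed, not the actual DEPA or Account Aggregator API; the point is only the mechanism: data leaves a silo solely under a consent artifact that binds the transfer to a notified purpose and an expiry.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentArtifact:
    # Illustrative fields loosely modelled on a consent-manager record.
    principal_id: str   # the data principal whose data is being moved
    provider: str       # silo currently holding the data
    consumer: str       # service provider receiving the data
    purpose: str        # notified purpose; any other use is barred
    expires: datetime   # validity limit on the consent

def transfer(artifact: ConsentArtifact, requested_purpose: str, now: datetime) -> str:
    """Release data only if the consent covers the purpose and is unexpired."""
    if now > artifact.expires:
        raise PermissionError("consent expired")
    if requested_purpose != artifact.purpose:
        raise PermissionError("purpose not covered by consent")
    return f"{artifact.provider} -> {artifact.consumer}: data for '{artifact.purpose}'"

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
artifact = ConsentArtifact("user-42", "BankA", "LenderB",
                           "loan-eligibility", now + timedelta(days=30))
print(transfer(artifact, "loan-eligibility", now))
# A request for any other purpose ("marketing", say) raises PermissionError.
```

The design choice this sketch captures is that portability and purpose limitation are enforced in the same artifact: the incumbent’s silo cannot refuse a valid transfer, and the recipient cannot stretch the data beyond what the principal notified.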
This, however, is not the only type of asymmetry we have to deal with in this age of big data. In a recent article, Stefaan Verhulst of GovLab at New York University pointed out that it is no longer enough to possess large stores of data—you need to know how to effectively extract value from it. Many businesses might have vast stores of data that they have accumulated over the years they have been in operation, but very few of them are able to effectively extract useful signals from that noisy data.
Without the know-how to translate data into actionable information, merely owning a large data set is of little value.
Unlike data asymmetries, which can be mitigated by making data more widely available, information asymmetries can only be addressed by radically democratizing the techniques and know-how that are necessary for extracting value from data. This know-how is largely proprietary and hard to access even in a fully competitive market. What’s more, in many instances, the computation power required far exceeds the capacity of entities for whom data analysis is not the main purpose of their business…(More)”.
Data and displacement: Ethical and practical issues in data-driven humanitarian assistance for IDPs
Blog by Vicki Squire: “Ten years since the so-called “data revolution” (Pearn et al, 2022), the rise of “innovation” and the proliferation of “data solutions” has rendered the assessment of changing data practices within the humanitarian sector ever more urgent. New data acquisition modalities have provoked a range of controversies across multiple contexts and sites (e.g. Human Rights Watch, 2021, 2022a, 2022b). Moreover, a range of concerns have been raised about data sharing (e.g. Fast, 2022) and the inequities embedded within humanitarian data (e.g. Data Values, 2022).
With this in mind, the Data and Displacement project set out to explore the practical and ethical implications of data-driven humanitarian assistance in two contexts characterised by high levels of internal displacement: north-eastern Nigeria and South Sudan. Our interdisciplinary research team includes academics from each of the regions under analysis, as well as practitioners from the International Organization for Migration. From the start, the research was designed to centre the lived experiences of Internally Displaced Persons (IDPs), while also shedding light on the production and use of humanitarian data from multiple perspectives.
We conducted primary research during 2021-2022. Our research combines dataset analysis and visualisation techniques with a thematic analysis of 174 semi-structured qualitative interviews. In total we interviewed 182 people: 42 international data experts, donors, and humanitarian practitioners from a range of governmental and non-governmental organisations; 40 stakeholders and practitioners working with IDPs across north-eastern Nigeria and South Sudan (20 in each region); and 100 IDPs in camp-like settings (50 in each region). Our findings point to a disconnect between international humanitarian standards and practices on the ground, the need to revisit existing ethical guidelines such as informed consent, and the importance of investing in data literacies…(More)”.