To Fix Tech, Democracy Needs to Grow Up


Article by Divya Siddarth: “There isn’t much we can agree on these days. But two sweeping statements that might garner broad support are “We need to fix technology” and “We need to fix democracy.”

There is growing recognition that rapid technology development is producing society-scale risks: state and private surveillance, widespread labor automation, ascending monopoly and oligopoly power, stagnant productivity growth, algorithmic discrimination, and the catastrophic risks posed by advances in fields like AI and biotechnology. Less often discussed, but in my view no less important, is the loss of potential advances that lack short-term or market-legible benefits. These include vaccine development for emerging diseases and open source platforms for basic digital affordances like identity and communication.

At the same time, as democracies falter in the face of complex global challenges, citizens (and increasingly, elected leaders) around the world are losing trust in democratic processes and are being swayed by autocratic alternatives. Nation-state democracies are, to varying degrees, beset by gridlock and hyper-partisanship, little accountability to the popular will, inefficiency, flagging state capacity, inability to keep up with emerging technologies, and corporate capture. While smaller-scale democratic experiments are growing, locally and globally, they remain far too fractured to handle consequential governance decisions at scale.

This puts us in a bind. Clearly, we could be doing a better job directing the development of technology towards collective human flourishing—this may be one of the greatest challenges of our time. If actually existing democracy is so riddled with flaws, it doesn’t seem up to the task. This is what rings hollow in many calls to “democratize technology”: Given the litany of complaints, why subject one seemingly broken system to governance by another?…(More)”.

The fear of technology-driven unemployment and its empirical base


Article by Kerstin Hötte, Melline Somers and Angelos Theodorakopoulos: “New technologies may replace human labour, but can simultaneously create jobs if workers are needed to use these technologies or if new economic activities emerge. At the same time, technology-driven productivity growth may increase disposable income, stimulating a demand-induced employment expansion. Based on a systematic review of the empirical literature on technological change and its impact on employment published in the past four decades, this column suggests that the empirical support for the labour-creating effects of technological change dominates that for labour-replacement…(More)”.

The Adoption of Innovation


Article by Benjamin Kumpf & Emma Proud: “The adoption of innovation means an innovation has ceased to be “innovative.” It means that a method, technology, or approach to a problem has moved from the experimental edges of an organization to the core of its work: no longer a novelty, but something normal and institutionalized.

However, the concept of adoption is rarely discussed, and the experience and know-how to bring it about are even less common. While an increasing evidence base has been developed on adopting digital systems in development and public sector organizations, and a literature exists on organizational reform, little has been published on strategically moving approaches and technologies out of the innovation space and into the mainstream of how organizations work. The most relevant insights come from efforts to institutionalize behavioral insights in governments, mainly in public sector entities in the global north. This gap makes it all the more important to surface the challenges, opportunities, and factors that enable adoption, as well as the barriers and roadblocks that impede it….

Adoption is not the same as scaling. Broadly speaking, scaling means “taking successful projects, programs, or policies and expanding, adapting, and sustaining them in different ways over time for greater development impact,” as the authors of the 2020 Focus Brief on Scaling-Up put it. But scaling tends to involve different players and focuses on a specific service, product, or delivery model. For example, SASA! Raising Voices is a community mobilization approach to address and reduce gender-based violence that was first pioneered in Tanzania but, after being rigorously evaluated, has since been adapted in at least 30 countries by more than 75 organizations around the world…(More)”.

Toward a Demand-Driven, Collaborative Data Agenda for Adolescent Mental Health


Paper by Stefaan Verhulst et al: “Existing datasets and research in the field of adolescent mental health do not always meet the needs of practitioners, policymakers, and program implementers, particularly in the context of vulnerable populations. Here, we introduce a collaborative, demand-driven methodology for the development of a strategic adolescent mental health research agenda. Ultimately, this agenda aims to guide future data sharing and collection efforts that meet the most pressing data needs of key stakeholders…

We conducted a rapid literature search to summarize common themes in adolescent mental health research into a “topic map”. We then hosted two virtual workshops with a range of international experts to discuss the topic map and identify shared priorities for future collaboration and research…

Our topic map identifies 10 major themes in adolescent mental health, organized into system-level, community-level, and individual-level categories. The engagement of cross-sectoral experts resulted in the validation of the mapping exercise, critical insights for refining the topic map, and a collaborative list of priorities for future research…

This innovative agile methodology enables a focused deliberation with diverse stakeholders and can serve as the starting point for data generation and collaboration practices, both in the field of adolescent mental health and other topics…(More)”.

Designing Human-Centric AI Experiences


Book by Akshay Kore: “User experience (UX) design practices have seen a fundamental shift as more and more software products incorporate machine learning (ML) components and artificial intelligence (AI) algorithms at their core. This book will probe into UX design’s role in making technologies inclusive and enabling user collaboration with AI.  

AI/ML-based systems have changed traditional UX design. Instead of programming a method to perform a specific action, creators of these systems provide data and nurture them to curate outcomes based on inputs. These systems are dynamic, yet while the AI changes over time, its user experience, in many cases, does not adapt to this dynamic nature.
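As a hypothetical toy illustration of this shift (not drawn from the book), compare a hand-programmed rule with behaviour derived from labelled data; the tiny keyword "model" below is a stand-in for a real ML component:

```python
# Traditional approach: the creator programs the decision rule directly.
def is_spam_rule(text):
    return "free money" in text.lower()

# Data-driven approach: the creator supplies labelled examples, and the
# system derives its behaviour from them. A toy keyword scorer learned
# from word occurrences stands in for a real ML model here.
def train_keyword_scorer(examples):
    """examples: list of (text, is_spam) pairs -> words seen only in spam."""
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words

def is_spam_learned(text, spam_words):
    return any(word in spam_words for word in text.lower().split())

examples = [
    ("claim your prize now", True),
    ("meeting moved to noon", False),
    ("prize draw winner", True),
    ("lunch at noon?", False),
]
spam_words = train_keyword_scorer(examples)
```

When the data shifts, the learned behaviour shifts with it (retraining rather than reprogramming), which is why the surrounding user experience has to be designed for change rather than around a fixed rule.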

Applied UX Design for Artificial Intelligence explores this problem, addressing the challenges and opportunities in UX design for AI/ML systems; looks at best practices for designers, managers, and product creators; and showcases how individuals from a non-technical background can collaborate effectively with AI and machine learning teams…(More)”.

Algorithms for Decision Making


Book by Mykel J. Kochenderfer, Tim A. Wheeler and Kyle H. Wray: “Automated decision-making systems or decision-support systems—used in applications that range from aircraft collision avoidance to breast cancer screening—must be designed to account for various sources of uncertainty while carefully balancing multiple objectives. This textbook provides a broad introduction to algorithms for decision making under uncertainty, covering the underlying mathematical problem formulations and the algorithms for solving them.

The book first addresses the problem of reasoning about uncertainty and objectives in simple decisions at a single point in time, and then turns to sequential decision problems in stochastic environments where the outcomes of our actions are uncertain. It goes on to address model uncertainty, when we do not start with a known model and must learn how to act through interaction with the environment; state uncertainty, in which we do not know the current state of the environment due to imperfect perceptual information; and decision contexts involving multiple agents. The book focuses primarily on planning and reinforcement learning, although some of the techniques presented draw on elements of supervised learning and optimization. Algorithms are implemented in the Julia programming language. Figures, examples, and exercises convey the intuition behind the various approaches presented…(More)”
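To give a flavour of the algorithms the book covers, here is a minimal, hypothetical sketch (the book's own implementations are in Julia): value iteration on a five-state line-world MDP, where actions succeed only with probability 0.8 and reward is earned at the goal state.

```python
# Hypothetical toy example (not from the book): value iteration on a
# five-state line MDP. Moving left/right succeeds with probability 0.8;
# otherwise the agent stays put. Only the goal state (4) yields reward.

STATES = range(5)
ACTIONS = (-1, +1)      # step left, step right
GAMMA = 0.9             # discount factor
P_SUCCESS = 0.8         # chance the chosen move actually happens

def transitions(s, a):
    """Return [(probability, next_state)] for action a in state s."""
    target = min(max(s + a, 0), 4)
    return [(P_SUCCESS, target), (1 - P_SUCCESS, s)]

def reward(s):
    return 1.0 if s == 4 else 0.0

def value_iteration(tol=1e-6):
    """Apply the Bellman optimality update until values converge."""
    V = [0.0] * 5
    while True:
        new_V = [
            reward(s) if s == 4 else    # goal state is terminal
            max(sum(p * (reward(s) + GAMMA * V[s2])
                    for p, s2 in transitions(s, a))
                for a in ACTIONS)
            for s in STATES
        ]
        if max(abs(x - y) for x, y in zip(V, new_V)) < tol:
            return new_V
        V = new_V

def greedy_action(s, V):
    """Best action under the computed value function."""
    return max(ACTIONS, key=lambda a: sum(
        p * (reward(s) + GAMMA * V[s2]) for p, s2 in transitions(s, a)))
```

Under these assumptions the greedy policy steps right from every non-terminal state; the planning and reinforcement-learning methods the book surveys generalize this idea to settings with model and state uncertainty.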

Who Should Represent Future Generations in Climate Planning?


Paper by Morten Fibieger Byskov and Keith Hyams: “Extreme impacts from climate change are already being felt around the world. The policy choices that we make now will affect not only how high global temperatures rise but also how well-equipped future economies and infrastructures are to cope with these changes. The interests of future generations must therefore be central to climate policy and practice. This raises the questions: Who should represent the interests of future generations with respect to climate change? And according to which criteria should we judge whether a particular candidate would make an appropriate representative for future generations? In this essay, we argue that potential representatives of future generations should satisfy what we call a “hypothetical acceptance criterion,” which requires that the representative could reasonably be expected to be accepted by future generations. This overarching criterion in turn gives rise to two derivative criteria. These are, first, the representative’s epistemic and experiential similarity to future generations, and second, his or her motivation to act on behalf of future generations. We conclude that communities already adversely affected by climate change best satisfy these criteria and are therefore able to command the hypothetical acceptance of future generations…(More)”.

Can open-source technologies support open societies?


Report by Victoria Welborn and George Ingram: “In the 2020 “Roadmap for Digital Cooperation,” U.N. Secretary General António Guterres highlighted digital public goods (DPGs) as a key lever in maximizing the full potential of digital technology to accelerate progress toward the Sustainable Development Goals (SDGs) while also helping overcome some of its persistent challenges. 

The Roadmap rightly pointed to the fact that, as with any new technology, there are risks around digital technologies that might be counterproductive to fostering prosperous, inclusive, and resilient societies. In fact, without intentional action by the global community, digital technologies may more naturally exacerbate exclusion and inequality by undermining trust in critical institutions, allowing consolidation of control and economic value by the powerful, and eroding social norms through breaches of privacy and disinformation campaigns. 

Just as the pandemic has served to highlight the opportunity for digital technologies to reimagine and expand the reach of government service delivery, so too has it surfaced specific risks that are hallmarks of closed societies and authoritarian states—creating new pathways to government surveillance, reinforcing existing socioeconomic inequalities, and enabling the rapid proliferation of disinformation. Why then—in the face of these real risks—focus on the role of digital public goods in development?

As the Roadmap noted, DPGs are “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs.”[1] There are a number of reasons why such products have unique potential to accelerate development efforts, including widely recognized benefits related to more efficient and cost-effective implementation of technology-enabled development programming. 

Historically, the use of digital solutions for development in low- and middle-income countries (LMICs) has been supported by donor investments in sector-specific technology systems, reinforcing existing silos and leaving countries with costly, proprietary software solutions that have duplicative functionality and little interoperability across government agencies, much less the capacity to underpin private sector innovation. These silos are further codified through the development of sector-specific maturity models and metrics. An effective DPG ecosystem has the potential to enable the reuse and improvement of existing tools, thereby lowering the overall cost of deploying technology solutions and increasing the efficiency of implementation.

Beyond this proven reusability of DPGs and the associated cost and deployment efficiencies, do DPGs have even more transformational potential? Increasingly, there is interest in DPGs as drivers of inclusion and products through which to standardize and safeguard rights; these opportunities are less understood and remain unproven. To begin to fill that gap, this paper first examines the unique value proposition of DPGs in supporting open societies by advancing more equitable systems and by codifying rights. The paper then considers the persistent challenges to more fully realizing this opportunity and offers some recommendations for how to address these challenges…(More)”.

We don’t have a hundred biases, we have the wrong model


Blog by Jason Collins: “…Behavioral economics today is famous for its increasingly large collection of deviations from rationality, or, as they are often called, ‘biases’. While these are useful in applied work, it is time to shift our focus away from collecting deviations from a model of rationality that we know is not true. Rather, we need to develop new theories of human decision-making to progress behavioral economics as a science. We need heliocentrism. 

The dominant model of human decision-making across many disciplines, including my own, economics, is the rational-actor model. People make decisions based on their preferences and the constraints that they face. Whether implicitly or explicitly, they typically have the computational power to calculate the best decision and the willpower to carry it out. It’s a fiction but a useful one.

As has become broadly known through the growth of behavioral economics, there are many deviations from this model. (I am going to use the term behavioral economics throughout this article as a shorthand for the field that undoubtedly extends beyond economics to social psychology, behavioral science, and more.) This list of deviations has grown to the extent that if you visit the Wikipedia page ‘List of Cognitive Biases’ you will now see in excess of 200 biases and ‘effects’. These range from the classics described in the seminal papers of Amos Tversky and Daniel Kahneman through to the obscure.

We are still at the collection-of-deviations stage. There are not 200 human biases. There are 200 deviations from the wrong model…(More)”

AI ethics: the case for including animals


Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.