What It Takes to Build Democratic Institutions


Article by Daron Acemoglu: “Chile’s failure to draft a new constitution that enjoys widespread support from voters is the predictable result of allowing partisans and ideologues to lead the process. Democratic institutions are built by delivering what ordinary voters expect and demand from government, as the history of Nordic social democracy shows…

There are plenty of good models around to help both developing and industrialized countries build better democratic institutions. But with its abortive attempts to draft a new constitution, Chile is offering a lesson in what to avoid.

Though it is one of the richest countries in Latin America, Chile is still suffering from the legacy of General Augusto Pinochet’s brutal dictatorship and historic inequalities. The country has made some progress in building democratic institutions since the 1988 plebiscite that began the transition from authoritarianism, and education and social programs have reduced income inequality. But major problems remain. There are deep inequalities not just in income, but also in access to government services, high-quality educational resources, and labor-market opportunities. Moreover, Chile still has the constitution that Pinochet imposed in 1980.

Yet while it seems natural to start anew, Chile has gone about it the wrong way. Following a 2020 referendum that showed overwhelming support for drafting a new constitution, it entrusted the process to a convention of elected delegates. But only 43% of voters turned out for the 2021 election to fill the convention, and many of the candidates were from far-left circles with strong ideological commitments to draft a constitution that would crack down on business and establish myriad new rights for different communities. When the resulting document was put to a vote, 62% of Chileans rejected it…(More)”

What does it mean to trust a technology?


Article by Jack Stilgoe: “A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those who have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.

Purveyors and regulators of any new technology are likely to worry about public trust. In the short term, they worry that people won’t want to make use of new innovations. But they also worry that a public backlash might jeopardize not just a single company but a whole area of technological innovation. Excitement about artificial intelligence (AI) has been accompanied by a concern about the need to “build trust” in the technology. Trust—letting one’s guard down despite incomplete information—is vital, but innovators must not take it for granted. Nor can it be circumvented through clever engineering. When cryptocurrency enthusiasts call their technology “trustless” because they think it solves age-old problems of banking (an unavoidably imperfect social institution), we should at least view them with skepticism.

For those concerned about public trust and new technologies, social science has some important lessons. The first is that people trust people, not things. When we board an airplane or agree to get vaccinated, we are placing our trust not in these objects but in the institutions that govern them. We trust that professionals are well-trained; we trust that regulators have assessed the risks; we trust that, if something goes wrong, someone will be held accountable, harms will be compensated, and mistakes will be rectified. Societies can no longer rely on the face-to-face interactions that once allowed individuals to do business. So it is more important than ever that faceless institutions are designed and continuously monitored to realize the benefits of new technologies while mitigating the risks….(More)”.

The new star wars over satellites


Article by Peggy Hollinger: “There is a battle brewing in space. In one corner you have the billionaires building giant satellite broadband constellations in low earth orbit (LEO) — Elon Musk with SpaceX’s Starlink and Jeff Bezos with Project Kuiper. 

In the other corner stand the traditional fixed satellite operators such as ViaSat and SES — but also a number of nations increasingly uncomfortable with the way in which the new space economy is evolving. In other words, with the dominance of US mega constellations in a strategic region of space.

The first shots were fired in late November at the World Radiocommunications Conference in Dubai. Every four years, global regulators and industry meet to review international regulations on the use of radio spectrum. 

For those who have only a vague idea of what spectrum is, it is the name for the radio airwaves that carry data wirelessly to enable a vast range of services — from television broadcasting to WiFi, navigation to mobile communications.

Most people are inclined to think that the airwaves have infinite capacity to connect us. But, like water, spectrum is a finite resource and much of it has already been allocated to specific uses. So operators have to transmit signals on shared bands of spectrum — on the promise that their transmissions will not interfere with others. 

Now SpaceX, Kuiper and others operating in LEO are pushing to loosen rules designed to prevent their signals from interfering with those of traditional operators in higher orbits. These rules impose caps on the power used to transmit signals, which facilitate spectrum sharing but also constrain the amount of data they can send. LEO operators say the rules, designed 25 years ago, are outdated. They argue that new technology would allow higher power levels — and greater capacity for customers — without degrading networks of the traditional fixed satellite systems operating in geostationary orbit, at altitudes of 36,000km.

It is perhaps not a surprise that a proposal to make LEO constellations more competitive drew protests from geo operators. Some, such as US-based Hughes Network Systems, have admitted they are already losing customers to Starlink.

What was surprising, however, was the strong opposition from countries such as Brazil, Indonesia, Japan and others…(More)”.

The Rise of Cyber-Physical Systems


Article by Chandrakant D. Patel: “Cyber-physical systems are a systemic integration of physical and cyber technologies. To name one example, a self-driving car is an integration of physical technologies, such as motors, batteries, actuators, and sensors, and cyber technologies, like communication, computation, inference, and closed-loop control. Data flow from physical to cyber technologies results in systemic integration and the desired driving experience. Cyber-physical systems are becoming prevalent in a range of sectors, such as power, water, waste, transportation, healthcare, agriculture, and manufacturing. We have entered the cyber-physical age. However, we stand unprepared for this moment due to systemic under-allocation in the physical sciences and the lack of a truly multidisciplinary engineering curriculum.

While there are many factors that contribute to the rise of cyber-physical systems, societal challenges stemming from imbalances between supply and demand are becoming a very prominent one. These imbalances are caused by social, economic, and ecological trends that hamper the delivery of basic goods and services. Examples of trends leading to imbalances between supply and demand are resource constraints, aging population, human capital constraints, a lack of subject matter experts in critical fields, physical security risks, supply-chain and supply-side resiliency, and externalities such as pandemics and environmental pollution. With respect to the lack of subject matter experts, consider the supply of cardiothoracic surgeons. The United States has about 4000 cardiothoracic surgeons, a sub-specialization that takes 20 years of education and hands-on training, for a population of 333 million. Similar imbalances in subject matter experts in healthcare, power, water, waste, and transport systems are occurring as a result of aging population.
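The closed-loop integration the article describes can be made concrete with a minimal sketch: a software controller (cyber) regulating a simulated motor (physical), with sensing, decision, and actuation in one loop. The function name, gains, and dynamics below are invented for illustration, not taken from the article:

```python
# Minimal sketch of a cyber-physical closed loop: a proportional
# controller (cyber) regulating a simulated vehicle speed (physical).
# All names and constants here are illustrative assumptions.

def simulate_cruise_control(target_speed, steps=50, dt=0.1, kp=0.8, drag=0.05):
    """Drive the measured speed toward target_speed with proportional control."""
    speed = 0.0
    for _ in range(steps):
        error = target_speed - speed             # sense: physical -> cyber
        throttle = kp * error                    # decide: controller output
        speed += (throttle - drag * speed) * dt  # act: cyber -> physical dynamics
    return speed

final = simulate_cruise_control(30.0)  # converges near, but below, the target
```

The loop illustrates the bidirectional data flow in miniature: sensor readings feed the controller, and the controller's output changes the physical state that the next reading observes.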
Compounding this challenge is the market-driven pay discrepancy that has attracted our youth to software jobs, such as those in social media, which pay much more relative to the salaries for a resident in general surgery or an early-career civil engineer. While it is possible that the market will shift to value infrastructure- and healthcare-related jobs, the time it takes to train “hands-on” contributors (e.g., engineers and technicians) in physical sciences and life sciences is substantial, ranging from 5 years (technicians requiring industry training) to 20 years (sub-specialized personnel like cardiothoracic surgeons)…(More)”.

Forget technology — politicians pose the gravest misinformation threat


Article by Rasmus Nielsen: “This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned of how “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore the misinformation from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit FullFact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W Bush's and Barack Obama's administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation, and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment…(More)”.

Introduction to Digital Humanism


Open access textbook edited by Hannes Werthner et al: “…introduces and defines digital humanism from a diverse range of disciplines. Following the 2019 Vienna Manifesto, the book calls for a digital humanism that describes, analyzes, and, most importantly, influences the complex interplay of technology and humankind, for a better society and life, fully respecting universal human rights. The book is organized in three parts: Part I “Background” provides the multidisciplinary background needed to understand digital humanism in its philosophical, cultural, technological, historical, social, and economic dimensions. The goal is to present the necessary knowledge upon which an effective interdisciplinary discourse on digital humanism can be founded. Part II “Digital Humanism – a System’s View” focuses on an in-depth presentation and discussion of the main digital humanism concerns arising in current digital systems. The goal of this part is to make readers aware of and sensitive to these issues, including e.g. the control and autonomy of AI systems, privacy and security, and the role of governance. Part III “Critical and Societal Issues of Digital Systems” delves into critical societal issues raised by advances of digital technologies. While the public debate in the past has often focused on them separately, especially when they became visible through sensational events, the aim here is to shed light on the entire landscape and show their interconnected relationships. This includes issues such as AI and ethics, fairness and bias, privacy and surveillance, platform power and democracy.

This textbook is intended for students, teachers, and policy makers interested in digital humanism. It is designed for stand-alone and for complementary courses in computer science, or curricula in science, engineering, humanities and social sciences. Each chapter includes questions for students and an annotated reading list to dive deeper into the associated chapter material. The book aims to provide readers with as wide an exposure as possible to digital advances and their consequences for humanity. It includes constructive ideas and approaches that seek to ensure that our collective digital future is determined through human agency…(More)”.

Knightian Uncertainty


Paper by Cass R. Sunstein: “In 1921, John Maynard Keynes and Frank Knight independently insisted on the importance of making a distinction between uncertainty and risk. Keynes referred to matters about which “there is no scientific basis on which to form any calculable probability whatever.” Knight claimed that “Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.” Knightian uncertainty exists when people cannot assign probabilities to imaginable outcomes. People might know that a course of action might produce bad outcomes A, B, C, D, and E, without knowing much or anything about the probability of each. Contrary to a standard view in economics, Knightian uncertainty is real. Dogs face Knightian uncertainty; horses and elephants face it; human beings face it; in particular, human beings who make policy, or develop regulations, sometimes face it. Knightian uncertainty poses challenging and unresolved issues for decision theory and regulatory practice. It bears on many problems, potentially including those raised by artificial intelligence. It is tempting to seek to eliminate the worst-case scenario (and thus to adopt the maximin rule), but serious problems arise if eliminating the worst-case scenario would (1) impose high risks and costs, (2) eliminate large benefits or potential “miracles,” or (3) create uncertain risks…(More)”.
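The maximin rule Sunstein discusses can be illustrated with a toy example. Under Knightian uncertainty there are no probabilities to weight outcomes by, so maximin compares options only by their worst imaginable outcome. The policies and payoffs below are invented purely for illustration:

```python
# Toy illustration of the maximin rule under Knightian uncertainty:
# outcome probabilities are unknown, so each option is judged by its
# worst-case payoff. Policy names and payoffs are invented examples.

policies = {
    "regulate_strictly": {"A": -1, "B": -1, "C": -2},
    "laissez_faire":     {"A": 10, "B": 0,  "C": -50},
}

def maximin_choice(options):
    """Pick the option whose worst-case payoff is largest."""
    return max(options, key=lambda name: min(options[name].values()))

best = maximin_choice(policies)  # "regulate_strictly": worst case -2 beats -50
```

The example also shows the cost Sunstein flags: maximin picks the strict policy even though it forgoes the large potential benefit (+10) of the alternative, which is exactly the problem when eliminating the worst case sacrifices "miracles" or imposes high costs.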

The Transferability Question


Report by Geoff Mulgan: “How should we think about the transferability of ideas and methods? If something works in one place and one time, how do we know if it, or some variant of it, will work in another place or another time?

This – the transferability question – is one that many organisations face: businesses, from retailers and taxi firms to restaurants and accountants wanting to expand to other regions or countries; governments wanting to adopt and adapt policies from elsewhere; and professions like doctors, wanting to know whether a kind of surgery, or a smoking cessation programme, will work in another context…

Here I draw on this literature to suggest not so much a generalisable method but rather an approach that starts by asking four basic questions of any promising idea:  

  • SPREAD: has the idea already spread to diverse contexts and been shown to work?  
  • ESSENTIALS: do we know what the essentials are, the crucial ingredients that make it effective?  
  • EASE: how easy is it to adapt or adopt (in other words, how many other things need to change for it to be implemented successfully)? 
  • RELEVANCE: how relevant is the evidence (or how similar is the context of evidence to the context of action)? 

Asking these questions is a protection against the vice of hoping that you can just ‘cut and paste’ an idea from elsewhere, but also an encouragement to be hungry for good ideas that can be adopted or adapted.    
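One hypothetical way to use the four questions is as a screening rubric for candidate ideas. The questions come from the report; the 0–3 rating scheme, function name, and example scores below are invented for illustration:

```python
# Hypothetical screening rubric built on the report's four questions.
# The 0-3 scale and scoring logic are illustrative assumptions.

QUESTIONS = ("spread", "essentials", "ease", "relevance")

def screen_idea(scores):
    """Return (total score, weakest dimension) for a candidate idea.

    scores: dict mapping each of the four questions to a 0-3 rating.
    """
    missing = set(QUESTIONS) - set(scores)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    weakest = min(QUESTIONS, key=lambda q: scores[q])
    return sum(scores[q] for q in QUESTIONS), weakest

# An idea that has spread widely but whose crucial ingredients are unclear:
total, weakest = screen_idea(
    {"spread": 3, "essentials": 1, "ease": 2, "relevance": 2}
)
```

Surfacing the weakest dimension, rather than just a total, matches the spirit of the approach: a single unanswered question (here, not knowing the essentials) is a reason to investigate before adopting, not merely a lower score.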

I conclude by arguing that it is healthy for any society or government to assume that there are good ideas that could be adopted or adapted; it’s healthy to cultivate a hunger to learn; healthy to understand methods for analysing what aspects of an idea or model could be transferable; and great value in having institutions that are good at promoting and spreading ideas, at adoption and adaptation as well as innovation…(More)”.

Foundational Research Gaps and Future Directions for Digital Twins


Report by the National Academy of Engineering; National Academies of Sciences, Engineering, and Medicine: “Across multiple domains of science, engineering, and medicine, excitement is growing about the potential of digital twins to transform scientific research, industrial practices, and many aspects of daily life. A digital twin couples computational models with a physical counterpart to create a system that is dynamically updated through bidirectional data flows as conditions change. Going beyond traditional simulation and modeling, digital twins could enable improved medical decision-making at the individual patient level, predictions of future weather and climate conditions over longer timescales, and safer, more efficient engineering processes. However, many challenges remain before these applications can be realized.
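The bidirectional data flow the report describes — a model that predicts, sensor data that corrects the model, and a corrected state available for decisions — can be sketched in a few lines. The class, dynamics, and gain below are illustrative assumptions, not from the report:

```python
# Minimal sketch of a digital twin's bidirectional loop: the virtual
# model predicts, physical sensor readings correct the prediction, and
# the corrected state can drive decisions back on the physical asset.
# Names, physics, and the blending gain are invented for illustration.

class ThermalTwin:
    def __init__(self, temp=20.0, cooling_rate=0.1, gain=0.5):
        self.temp = temp                  # virtual state of the asset
        self.cooling_rate = cooling_rate  # simple physics parameter
        self.gain = gain                  # how strongly sensors correct the model

    def predict(self, ambient=20.0):
        """Model step: physics-based prediction (Newtonian cooling)."""
        self.temp += self.cooling_rate * (ambient - self.temp)
        return self.temp

    def assimilate(self, measured):
        """Data step: blend the sensor reading into the model state."""
        self.temp += self.gain * (measured - self.temp)
        return self.temp

twin = ThermalTwin(temp=90.0)
for reading in [85.0, 78.0, 72.0]:  # streaming sensor data from the asset
    twin.predict(ambient=20.0)
    twin.assimilate(reading)
```

This is what distinguishes a twin from traditional one-way simulation: the model is continually pulled back toward reality by measurements, so its state stays usable for decision-making as conditions change.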

This report identifies the foundational research and resources needed to support the development of digital twin technologies. The report presents critical future research priorities and an interdisciplinary research agenda for the field, including how federal agencies and researchers across domains can best collaborate…(More)”.

Conversing with Congress: An Experiment in AI-Enabled Communication


Blog by Beth Noveck: “Each Member of the US House of Representatives speaks for 747,184 people – a staggering increase from 50 years ago. In the Senate, this disproportion is even more pronounced: on average each Senator represents 1.6 million more constituents than her predecessor a generation ago. That’s a lower level of representation than any other industrialized democracy.  

As the population grows (over 60% since 1970), so, too, does the volume of constituent communication. 

But that communication is not working well. According to the Congressional Management Foundation, this overwhelming communication volume leads to dissatisfaction among voters who feel their views are not adequately considered by their representatives…A pioneering and important new study published in Government Information Quarterly entitled “Can AI communication tools increase legislative responsiveness and trust in democratic institutions?” (Volume 40, Issue 3, June 2023, 101829) from two Cornell researchers is shedding new light on the practical potential for AI to create more meaningful constituent communication…Depending on their treatment group, participants either were or were not told when replies were AI-drafted.

Their findings are telling. Standard, generic responses fare poorly in gaining trust. In contrast, all AI-assisted responses, particularly those with human involvement, significantly boost trust. “Legislative correspondence generated by AI with human oversight may be received favorably.” 


While the study found AI-assisted replies to be more trustworthy, it also explored how the quality of these replies impacts perception. When they conducted this study, ChatGPT was still in its infancy and more prone to linguistic hallucinations, so in a second experiment they also tested how people perceived higher-quality, relevant and responsive replies against lower-quality, irrelevant replies drafted with AI…(More)”.