Book by Ramon Gras and Jeremy Burke: “The Aretian team, a spin-off company from the Harvard Innovation Lab, has developed a city science methodology to evaluate the relationship between city form and urban performance. This book illuminates the relationship between a city’s spatial design and the quality of life it affords the general population. Among the frameworks presented in this volume are measuring innovation economies to design Innovation Districts, analyzing social networks and interaction patterns to inform organizational design, and studying city topology, morphology, entropy, and scale to create 15-Minute Cities.
Urban designers, architects, and engineers will thus be able to tackle complex urban design challenges by applying the authors’ frameworks and findings in their own work. Case studies illustrate key insights from advanced, data-driven geospatial analyses of cities around the world. This inaugural book by Aretian Urban Analytics and Design gives readers a new set of tools to learn from, expand, and develop for the healthy growth of cities and regions around the world…(More)”.
What Is Public Trust in the Health System? Insights into Health Data Use
Open Access Book by Felix Gille: “This book explores the concept of public trust in health systems.
In the context of recent events, including the public response to interventions to tackle the COVID-19 pandemic, vaccination uptake, and the use of health data and digital health, this important book uses empirical evidence to address why public trust is vital to a well-functioning health system.
In doing so, it provides a comprehensive contemporary explanation of public trust, how it affects health systems and how it can be nurtured and maintained as an integral component of health system governance…(More)”.
Chatbots May ‘Hallucinate’ More Often Than Many Realize
Cade Metz at The New York Times: “When the San Francisco start-up OpenAI unveiled its ChatGPT online chatbot late last year, millions were wowed by the humanlike way it answered questions, wrote poetry and discussed almost any topic. But most people were slow to realize that this new kind of chatbot often makes things up.
When Google introduced a similar chatbot several weeks later, it spewed nonsense about the James Webb telescope. The next day, Microsoft’s new Bing chatbot offered up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish. Then, in March, ChatGPT cited a half dozen fake court cases while writing a 10-page legal brief that a lawyer submitted to a federal judge in Manhattan.
Now a new start-up called Vectara, founded by former Google employees, is trying to figure out how often chatbots veer from the truth. The company’s research estimates that even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.
Experts call this chatbot behavior “hallucination.” It may not be a problem for people tinkering with chatbots on their personal computers, but it is a serious issue for anyone using this technology with court documents, medical information or sensitive business data.
Because these chatbots can respond to almost any request in an unlimited number of ways, there is no way of definitively determining how often they hallucinate. “You would have to look at all of the world’s information,” said Simon Hughes, the Vectara researcher who led the project…(More)”.
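The measurement the article gestures at can be made concrete: give a chatbot source documents to summarize, then check each summary for factual consistency against its source and report the failure rate. Below is a minimal sketch of that loop; `summarize` and `is_consistent` are hypothetical stand-ins for a chatbot call and a trained consistency checker, not Vectara’s actual tooling.

```python
from typing import Callable

def hallucination_rate(
    documents: list[str],
    summarize: Callable[[str], str],
    is_consistent: Callable[[str, str], bool],
) -> float:
    """Fraction of summaries that assert facts not supported by their source."""
    if not documents:
        return 0.0
    flagged = sum(1 for doc in documents if not is_consistent(doc, summarize(doc)))
    return flagged / len(documents)

# Toy usage: a "model" that copies the first sentence of the source, scored by
# a checker that only accepts verbatim substrings of the source text.
docs = ["The James Webb Space Telescope launched in 2021. It orbits the Sun."]
copy_first = lambda d: d.split(". ")[0] + "."
verbatim_check = lambda doc, summary: summary.rstrip(".") in doc
print(hallucination_rate(docs, copy_first, verbatim_check))  # -> 0.0
```

In practice the consistency check is the hard part; a verbatim-substring toy like the one above is far too crude, which is partly why estimates of the rate vary so widely.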
Climate data can save lives. Most countries can’t access it.
Article by Zoya Teirstein: “Earth just experienced one of its hottest, and most damaging, periods on record. Heat waves in the United States, Europe, and China; catastrophic flooding in India, Brazil, Hong Kong, and Libya; and outbreaks of malaria, dengue, and other mosquito-borne illnesses across southern Asia claimed tens of thousands of lives. The vast majority of these deaths could have been averted with the right safeguards in place.
The World Meteorological Organization, or WMO, published a report last week that shows just 11 percent of countries have the full arsenal of tools required to save lives as the impacts of climate change — including deadly weather events, infectious diseases, and respiratory illnesses like asthma — become more extreme. The United Nations climate agency predicts that significant natural disasters will hit the planet 560 times per year by the end of this decade. What’s more, countries that lack early warning systems, such as extreme heat alerts, will see eight times more climate-related deaths than countries that are better prepared. By midcentury, some 50 percent of these deaths will take place in Africa, a continent that is responsible for around 4 percent of the world’s greenhouse gas emissions each year…(More)”.
Smart City Data Governance
OECD Report: “Smart cities leverage technologies, in particular digital, to generate a vast amount of real-time data to inform policy- and decision-making for efficient and effective public service delivery. Their success largely depends on the availability and effective use of data. However, the amount of data generated is growing more rapidly than governments’ capacity to store and process it, and the growing number of stakeholders involved in data production, analysis, and storage pushes cities’ data management capacity to the limit. Despite a wide range of local and national initiatives to enhance smart city data governance, urban data remains a challenge for national and city governments due to: insufficient financial resources; a lack of business models for financing and refinancing data collection; limited access to skilled experts; incomplete compliance with national legislation on data sharing and protection; and data and security risks. Facing these challenges is essential to managing and sharing data sensibly if cities are to boost citizens’ well-being and promote sustainable environments…(More)”
Unintended Consequences of Data-Driven Public Participation: How Low-Traffic Neighborhood Planning Became Polarized
Paper by Alison Powell: “This paper examines how data-driven consultation contributes to dynamics of political polarization, using the case of ‘Low-Traffic Neighborhoods’ in London, UK. It explores how data-driven consultation can facilitate participation, including ‘agonistic data practices’ (Crooks and Currie, 2022) that challenge the dominant interpretations of digital data. The paper adds empirical detail to previous studies of agonistic data practices, concluding that such practices require certain normative conditions to be met; otherwise, dissenting data practices can contribute to dynamics of polarization. The results of this paper draw on empirical insights from the political context of the UK to explain how ostensibly democratic processes, including data-driven consultation, establish some kinds of knowledge as more legitimate than others. Apparently ‘objective’ knowledge, or calculable data, is attributed greater legitimacy than strong feelings or affective narratives. This can displace affective responses to policy decisions into insular social media spaces where polarizing dynamics are at play. Affective polarization, where political difference is solidified through appeals to feeling, creates political distance and the dehumanization of ‘others’. This can help to amplify conspiracy theories that pose risks to democracy and to the overall legitimacy of media environments. These tendencies are exacerbated when processes of consultation prescribe narrow or specific contributions, valorize quantifiable or objective data, and leave limited room for dissent…(More)”
AI and Democracy’s Digital Identity Crisis
Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
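The “network of similarly evolving identities that verify one another” can be pictured as an attestation graph over which credibility propagates. Here is a minimal sketch under that reading; the class, the PageRank-style scoring rule, and the damping constant are illustrative assumptions, not a protocol from the essay.

```python
from collections import defaultdict

class WebOfTrust:
    """Identities vouch for one another; credibility propagates through the
    attestation graph rather than flowing from a central authority."""

    def __init__(self) -> None:
        # subject -> set of identities vouching for that subject
        self.attesters: defaultdict[str, set[str]] = defaultdict(set)

    def attest(self, attester: str, subject: str) -> None:
        """Record that `attester` vouches for `subject` (self-attestation ignored)."""
        if attester != subject:
            self.attesters[subject].add(attester)

    def scores(self, rounds: int = 20, damping: float = 0.85) -> dict[str, float]:
        """Iteratively set each identity's score to a damped average of its
        attesters' scores, so credibility reflects who vouches for you."""
        ids = set(self.attesters) | {a for s in self.attesters.values() for a in s}
        score = {i: 1.0 for i in ids}
        for _ in range(rounds):
            nxt = {}
            for i in ids:
                atts = self.attesters.get(i, set())
                if atts:
                    nxt[i] = (1 - damping) + damping * sum(score[a] for a in atts) / len(atts)
                else:
                    nxt[i] = 1 - damping  # unattested identities keep only a baseline score
            score = nxt
        return score

# Identities vouched for by several credible peers end up more credible
# than unattested ones; no central registry is required.
wot = WebOfTrust()
wot.attest("alice", "bob")
wot.attest("carol", "bob")
wot.attest("bob", "carol")
print({name: round(s, 2) for name, s in sorted(wot.scores().items())})
```

The design choice this toy illustrates is the essay’s core claim: each entity contributes its own credibility to the attestations it makes, so forging many identities buys little unless credible peers vouch for them.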
The Bletchley Declaration
Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023: “In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:
- identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.
In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024…(More)”.
Markets and the Good
Introduction to Special Issue by Jay Tolson: “How, then, do we think beyond what has come to be the tyranny of economics—or perhaps more accurately, how do we put economics in its proper place? Coming at these questions from different angles and different first principles, our authors variously dissect formative economic doctrines (see Kyle Edward Williams, “The Myth of the Friedman Doctrine”) and propose restoring the genius of the American system of capitalism (Jacob Soll, “Hamilton’s System”) or revising the purpose and priorities of the corporation (Michael Lind, “Profit, Power, and Purpose”). Others, in turn, prescribe restraints for the excesses of liberalism (Deirdre Nansen McCloskey, “An Economic Theology of Liberalism”) or even an alternative to it, in the restoration of “common good” thinking associated with subsidiarity (Andrew Willard Jones, “Friendship and the Common Good”). Yet others examine how “burnout” and “emotional labor” became status markers and signs of virtue that weaken solidarity among workers of all kinds (Jonathan Malesic, “How We Obscure the Common Plight of Workers”) or the subtle ways in which we have reduced ourselves to cogs in our economic system (Sarah M. Brownsberger, “Name Your Industry—Or Else!”). Collectively, our authors suggest, the reluctance to question and rethink our fundamental economic assumptions and institutions—and their relation to other goods—may pose the greatest threat to real prosperity and human flourishing…(More)”.
Enterprise Value and the Value of Data
Paper by Dan Ciuriak: “Data is often said to be the most valuable commodity of our age. It is a curiosity, therefore, that it remains largely invisible on the balance sheets of companies and largely unmeasured in our national economic accounts. This paper comments on the problems of using cost-based or transactions-based methods to establish the value of a nation’s data in the system of national accounts and suggests that these should be complemented with the value of economic rents attributable to data. This rent is part of enterprise value; accordingly, an indicator is required as an instrumental variable for the use of data for value creation within firms. The paper argues that traditional accounting looks through the firm to its tangible (and certain intangible) assets; that may no longer be feasible in measuring and understanding the data-driven economy…(More)”
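To make the residual-rent suggestion concrete, one illustrative decomposition (an assumed sketch, not a formula from the paper) treats the data rent as what remains of enterprise value after tangible and other identifiable intangible assets are netted out, and then capitalizes an estimated annual rent:

```latex
% Illustrative residual-rent sketch; the notation is assumed, not the paper's.
\[
  V_{\text{data}} \;\approx\; EV - V_{\text{tangible}} - V_{\text{other intangibles}},
  \qquad EV = \text{market capitalization} + \text{net debt}.
\]
% Capitalizing a constant estimated annual data rent $R$ at discount rate $r$:
\[
  V_{\text{data}} \;\approx\; \sum_{t=1}^{\infty} \frac{R}{(1+r)^{t}} \;=\; \frac{R}{r}.
\]
```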