How Moral Can A.I. Really Be?


Article by Paul Bloom: “…The problem isn’t just that people do terrible things. It’s that people do terrible things that they consider morally good. In their 2014 book “Virtuous Violence,” the anthropologist Alan Fiske and the psychologist Tage Rai argue that violence is often itself a warped expression of morality. “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying,” they write. Their examples include suicide bombings, honor killings, and war. The philosopher Kate Manne, in her book “Down Girl,” makes a similar point about misogynistic violence, arguing that it’s partially rooted in moralistic feelings about women’s “proper” role in society. Are we sure we want A.I.s to be guided by our idea of morality?

Schwitzgebel suspects that A.I. alignment is the wrong paradigm. “What we should want, probably, is not that superintelligent AI align with our mixed-up, messy, and sometimes crappy values but instead that superintelligent AI have ethically good values,” he writes. Perhaps an A.I. could help to teach us new values, rather than absorbing old ones. Stewart, the former graduate student, argued that if researchers treat L.L.M.s as minds and study them psychologically, future A.I. systems could help humans discover moral truths. He imagined some sort of A.I. God—a perfect combination of all the great moral minds, from Buddha to Jesus. A being that’s better than us.

Would humans ever live by values that are supposed to be superior to our own? Perhaps we’ll listen when a super-intelligent agent tells us that we’re wrong about the facts—“this plan will never work; this alternative has a better chance.” But who knows how we’ll respond if one tells us, “You think this plan is right, but it’s actually wrong.” How would you feel if your self-driving car tried to save animals by refusing to take you to a steakhouse? Would a government be happy with a military A.I. that refuses to wage wars it considers unjust? If an A.I. pushed us to prioritize the interests of others over our own, we might ignore it; if it forced us to do something that we consider plainly wrong, we would consider its morality arbitrary and cruel, to the point of being immoral. Perhaps we would accept such perverse demands from God, but we are unlikely to give this sort of deference to our own creations. We want alignment with our own values, then, not because they are the morally best ones, but because they are ours…(More)”

Informing Decisionmakers in Real Time


Article by Robert M. Groves: “In response, the National Science Foundation (NSF) proposed the creation of a complementary group to provide decisionmakers at all levels with the best available evidence from the social sciences to inform pandemic policymaking. In May 2020, with funding from NSF and additional support from the Alfred P. Sloan Foundation and the David and Lucile Packard Foundation, the National Academies of Sciences, Engineering, and Medicine (NASEM) established the Societal Experts Action Network (SEAN) to connect “decisionmakers grappling with difficult issues to the evidence, trends, and expert guidance that can help them lead their communities and speed their recovery.” We chose to build a network because of the widespread recognition that no one small group of social scientists would have the expertise or the bandwidth to answer all the questions facing decisionmakers. What was needed was a structure that enabled an ongoing feedback loop between researchers and decisionmakers. This structure would foster the integration of evidence, research, and advice in real time, which broke with NASEM’s traditional form of aggregating expert guidance over lengthier periods.

In its first phase, SEAN’s executive committee set about building a network that could both gather and disseminate knowledge. To start, we brought in organizations of decisionmakers—including the National Association of Counties, the National League of Cities, the International City/County Management Association, and the National Conference of State Legislatures—to solicit their questions. Then we added capacity to the network by inviting social and behavioral organizations—like the National Bureau of Economic Research, the Natural Hazards Center at the University of Colorado Boulder, the Kaiser Family Foundation, the National Opinion Research Center at the University of Chicago, The Policy Lab at Brown University, and Testing for America—to join and respond to questions and disseminate guidance. In this way, SEAN connected teams of experts with evidence and answers to leaders and communities looking for advice…(More)”.

WikiCrow: Automating Synthesis of Human Scientific Knowledge


About: “As scientists, we stand on the shoulders of giants. Scientific progress requires curation and synthesis of prior knowledge and experimental results. However, the scientific literature is so expansive that synthesis, the comprehensive combination of ideas and results, is a bottleneck. The ability of large language models to comprehend and summarize natural language will transform science by automating the synthesis of scientific knowledge at scale. Yet current LLMs are limited by hallucinations, lack access to the most up-to-date information, and do not provide reliable references for statements.

Here, we present WikiCrow, an automated system that can synthesize cited Wikipedia-style summaries for technical topics from the scientific literature. WikiCrow is built on top of Future House’s internal LLM agent platform, PaperQA, which, in our testing, achieves state-of-the-art (SOTA) performance on a retrieval-focused version of PubMedQA and other benchmarks, including a new retrieval-first benchmark, LitQA, developed internally to evaluate systems retrieving full-text PDFs across the entire scientific literature.
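
The excerpt names the pattern (retrieve full-text sources, then synthesize an answer with citations) without showing internals. The Python sketch below illustrates that retrieve-then-cite loop under loud assumptions: the toy corpus and helper names (`retrieve`, `synthesize`) are invented, TF-IDF stands in for PaperQA's learned retrieval over full-text PDFs, and the LLM drafting step is stubbed with string formatting.

```python
# Hypothetical sketch of a retrieve-then-cite pipeline; not PaperQA's API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "literature": (citation, passage) pairs standing in for parsed PDFs.
corpus = [
    ("Smith et al. 2021", "Gene X is expressed primarily in cardiac tissue."),
    ("Lee & Park 2019", "Knockout of gene X in mice impairs heart development."),
    ("Garcia 2022", "Gene Y regulates circadian rhythm in Drosophila."),
]

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by similarity to the question and return the top k."""
    texts = [passage for _, passage in corpus]
    vec = TfidfVectorizer().fit(texts + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(texts))[0]
    ranked = sorted(zip(scores, corpus), key=lambda t: t[0], reverse=True)
    return [pair for _, pair in ranked[:k]]

def synthesize(question: str, evidence: list[tuple[str, str]]) -> str:
    """Placeholder for the LLM step: draft an answer that cites its sources."""
    cited = "; ".join(f"{passage} ({src})" for src, passage in evidence)
    return f"Q: {question}\nA (with citations): {cited}"

question = "What is the function of gene X?"
print(synthesize(question, retrieve(question)))
```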

As a demonstration of the potential for AI to impact scientific practice, we use WikiCrow to generate draft articles for the 15,616 human protein-coding genes that currently lack Wikipedia articles, or that have article stubs. WikiCrow creates articles in 8 minutes, is much more consistent than human editors at citing its sources, and makes incorrect inferences or statements about 9% of the time, a number that we expect to improve as we mature our systems. WikiCrow will be a foundational tool for the AI Scientists we plan to build in the coming years, and will help us to democratize access to scientific research…(More)”.
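
A quick back-of-envelope check, assuming the eight minutes quoted is per article: 15,616 articles × 8 minutes ≈ 125,000 machine-minutes, roughly 2,100 hours or about 87 days of serial generation, a workload that parallelises trivially across instances.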

How to make data open? Stop overlooking librarians


Article by Jessica Farrell: “The ‘Year of Open Science’, as declared by the US Office of Science and Technology Policy (OSTP), is now wrapping up. This followed an August 2022 memo from OSTP acting director Alondra Nelson, which mandated that data and peer-reviewed publications from federally funded research should be made freely accessible by the end of 2025. Federal agencies are required to publish full plans for the switch by the end of 2024.

But the specifics of how data will be preserved and made publicly available are far from being nailed down. I worked in archives for ten years and now facilitate two digital-archiving communities, the Software Preservation Network and BitCurator Consortium, at Educopia in Atlanta, Georgia. The expertise of people such as myself is often overlooked. More open-science projects need to integrate digital archivists and librarians, to capitalize on the tools and approaches that we have already created to make knowledge accessible and open to the public.

Making data open and ‘FAIR’ — findable, accessible, interoperable and reusable — poses technical, legal, organizational and financial questions. How can organizations best coordinate to ensure universal access to disparate data? Who will do that work? How can we ensure that the data remain open long after grant funding runs dry?

Many archivists agree that technical questions are the most solvable, given enough funding to cover the labour involved. But they are nonetheless complex. Ideally, any open research should be testable for reproducibility, but re-running scripts or procedures might not be possible unless all of the required coding libraries and environments used to analyse the data have also been preserved. Besides the contents of spreadsheets and databases, scientific-research data can include 2D or 3D images, audio, video, websites and other digital media, all in a variety of formats. Some of these might be accessible only with proprietary or outdated software…(More)”.
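
One concrete archival step the passage alludes to, recording the exact software environment alongside the data so an analysis can be re-run later, can be sketched in a few lines of Python. The lock-file name and format here are illustrative assumptions, not a standard; real pipelines would pair this with a preserved container image.

```python
# Minimal sketch: write the interpreter version and every installed
# package version to a lock file that is archived with the dataset.
import sys
import importlib.metadata

def freeze_environment(path: str = "environment.lock") -> None:
    """Record the Python version and all installed package versions."""
    dists = sorted(
        importlib.metadata.distributions(),
        key=lambda d: (d.metadata["Name"] or "").lower(),
    )
    with open(path, "w") as f:
        f.write(f"# python {sys.version.split()[0]}\n")
        for dist in dists:
            f.write(f"{dist.metadata['Name']}=={dist.version}\n")

freeze_environment()  # archive the lock file alongside the data it describes
```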

Artificial Intelligence and the City


Book edited by Federico Cugurullo, Federico Caprotti, Matthew Cook, Andrew Karvonen, Pauline McGuirk, and Simon Marvin: “This book explores in theory and practice how artificial intelligence (AI) intersects with and alters the city. Drawing upon a range of urban disciplines and case studies, the chapters reveal the multitude of repercussions that AI is having on urban society, urban infrastructure, urban governance, urban planning and urban sustainability.

Contributors also examine how the city, far from being a passive recipient of new technologies, is influencing and reframing AI through subtle processes of co-constitution. The book advances three main contributions and arguments:

  • First, it provides empirical evidence of the emergence of a post-smart trajectory for cities in which new material and decision-making capabilities are being assembled through multiple AIs.
  • Second, it stresses the importance of understanding the mutually constitutive relations between the new experiences enabled by AI technology and the urban context.
  • Third, it engages with the concepts required to clarify the opaque relations that exist between AI and the city, as well as how to make sense of these relations from a theoretical perspective…(More)”.

After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support for provisions meant to promote the free flow of data across borders. It also abandoned efforts, within the ongoing negotiations on international e-commerce (the so-called Joint Statement Initiative process), to protect the source code in applications and algorithms.

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), akin to those that exist for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

Steering Responsible AI: A Case for Algorithmic Pluralism


Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion to the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.

‘Turning conflicts into co-creation’: Taiwan government harnesses digital policy for democracy


Article by Si Ying Thian: “Assistive intelligence and language models can help facilitate nuanced conversations because the human brain simply cannot process 1,000 different positions, said Audrey Tang, Taiwan’s Digital Minister in charge of the Ministry of Digital Affairs (MODA).

Tang was speaking at a webinar about policymaking in the digital age, hosted by LSE IDEAS, the think tank of the London School of Economics, on 1 December 2023.  

She cited Talk to the City, a large-language-model-powered tool that transforms transcripts from a variety of datasets into clusters of similar opinions, as an example of a technology that has helped increase collaboration and diversity without losing the ability to scale…
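
The excerpt doesn't describe Talk to the City's internals; the toy sketch below only illustrates the underlying idea of grouping similar free-text opinions so a reader can survey a few clusters instead of a thousand statements. The sample statements are invented, and TF-IDF plus k-means stand in for the LLM embeddings and summaries a real system would use.

```python
# Toy illustration of opinion clustering; not Talk to the City's pipeline.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

opinions = [
    "Bike lanes make my commute safer.",
    "We need more protected bike lanes downtown.",
    "Property taxes are already too high.",
    "Lower taxes before spending on new projects.",
    "Cycling infrastructure encourages healthier habits.",
    "Tax relief should be the council's priority.",
]

# Embed each opinion (here with TF-IDF) and group similar ones.
X = TfidfVectorizer(stop_words="english").fit_transform(opinions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for text, label in zip(opinions, labels):
        if label == cluster:
            print(f"  - {text}")
```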

“The idea is to establish value-based, long-term collaborations based on the idea of public code. This is evident in many of our government websites, which very much look like the UK’s,” said Tang. 

Public code is defined by the Foundation for Public Code as open-source software developed by public organisations, together with the policy and guidance needed for collaboration and reuse…

The government’s commitment to open source is also evident in its rollout of the Taiwan Employment Gold Card, which integrates a flexible work permit, a residence visa for up to three years, and eligibility for national health insurance and income tax reduction.  

According to Tang, the Taiwan government invites anyone with eight or more years of experience contributing to open source or to a publicly available Web3 ledger to enrol in the residency program…(More)”.

Want to know if your data are managed responsibly? Here are 15 questions to help you find out


Article by P. Alison Paprica et al: “As the volume and variety of data about people increases, so does the number of ideas about how data might be used. Studies show that many people want their data to be used for public benefit.

However, the research also shows that public support for use of data is conditional, and only given when risks such as those related to privacy, commercial exploitation and artificial intelligence misuse are addressed.

It takes a lot of work for organizations to establish data governance and management practices that mitigate risks while also encouraging beneficial uses of data. So much so that it can be challenging for responsible organizations to communicate their data trustworthiness without providing an overwhelming amount of technical and legal detail.

To address this challenge our team undertook a multiyear project to identify, refine and publish a short list of essential requirements for responsible data stewardship.

Our 15 minimum specification requirements (min specs) are based on a review of the scientific literature and the practices of 23 different data-focused organizations and initiatives.

As part of our project, we compiled over 70 public resources, including examples of organizations that address the full list of min specs: ICES, the Hartford Data Collaborative and the New Brunswick Institute for Research, Data and Training.

Our hope is that information related to the min specs will help organizations and data-sharing initiatives share best practices and learn from each other to improve their governance and management of data…(More)”.

Open data ecosystems: what models to co-create service innovations in smart cities?


Paper by Arthur Sarazin: “While smart cities have recently begun providing open data, how to organise the collective creation of data, knowledge and related products and services from this shared resource remains to be worked out. This paper gathers the literature on open data ecosystems to tackle the following research question: what models can be imagined to stimulate the collective co-creation of services between smart cities’ stakeholders acting as providers and users of open data? This issue is currently at stake in many municipalities, such as Lisbon, which has decided to position itself as a platform (O’Reilly, 2010) in the local digital ecosystem. With the implementation of its City Operation Center (COI), Lisbon’s municipality provides an Information Infrastructure (Bowker et al., 2009) to many different types of actors, such as telecom companies, municipalities, energy utilities and transport companies. Through this infrastructure, Lisbon encourages such actors to gather, integrate and release heterogeneous datasets and tries to orchestrate synergies among them so that data-driven solutions to urban problems can emerge (Carvalho and Vale, 2018). The remaining question: what models can municipalities such as Lisbon lean on to drive this cutting-edge type of service innovation?…(More)”.