How to make data open? Stop overlooking librarians


Article by Jessica Farrell: “The ‘Year of Open Science’, as declared by the US Office of Science and Technology Policy (OSTP), is now wrapping up. This followed an August 2022 memo from OSTP acting director Alondra Nelson, which mandated that data and peer-reviewed publications from federally funded research should be made freely accessible by the end of 2025. Federal agencies are required to publish full plans for the switch by the end of 2024.

But the specifics of how data will be preserved and made publicly available are far from being nailed down. I worked in archives for ten years and now facilitate two digital-archiving communities, the Software Preservation Network and BitCurator Consortium, at Educopia in Atlanta, Georgia. The expertise of people such as myself is often overlooked. More open-science projects need to integrate digital archivists and librarians, to capitalize on the tools and approaches that we have already created to make knowledge accessible and open to the public.

Making data open and ‘FAIR’ — findable, accessible, interoperable and reusable — poses technical, legal, organizational and financial questions. How can organizations best coordinate to ensure universal access to disparate data? Who will do that work? How can we ensure that the data remain open long after grant funding runs dry?

Many archivists agree that technical questions are the most solvable, given enough funding to cover the labour involved. But they are nonetheless complex. Ideally, any open research should be testable for reproducibility, but re-running scripts or procedures might not be possible unless all of the required coding libraries and environments used to analyse the data have also been preserved. Besides the contents of spreadsheets and databases, scientific-research data can include 2D or 3D images, audio, video, websites and other digital media, all in a variety of formats. Some of these might be accessible only with proprietary or outdated software…(More)”.
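
The point about preserving coding libraries and environments can be made concrete with a small sketch of our own (it is not from the article). It shows one way a researcher or archivist might record the Python interpreter and installed package versions alongside an archived dataset so that analysis scripts remain re-runnable later; the file and directory names are illustrative, and a lockfile or container image would serve the same purpose.

```python
"""
Minimal sketch: capture the computing environment next to an archived dataset.
File and directory names are illustrative, not a prescribed standard.
"""
import json
import platform
import sys
from importlib import metadata
from pathlib import Path


def snapshot_environment(output_dir: str = "dataset_archive") -> Path:
    """Write a small manifest of the interpreter and installed packages."""
    manifest = {
        "python_version": sys.version,
        "platform": platform.platform(),
        "packages": sorted(
            f"{dist.metadata['Name']}=={dist.version}"
            for dist in metadata.distributions()
        ),
    }
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / "environment_manifest.json"
    path.write_text(json.dumps(manifest, indent=2))
    return path


if __name__ == "__main__":
    print(f"Environment manifest written to {snapshot_environment()}")
```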

Artificial Intelligence and the City


Book edited by Federico Cugurullo, Federico Caprotti, Matthew Cook, Andrew Karvonen, Pauline McGuirk, and Simon Marvin: “This book explores in theory and practice how artificial intelligence (AI) intersects with and alters the city. Drawing upon a range of urban disciplines and case studies, the chapters reveal the multitude of repercussions that AI is having on urban society, urban infrastructure, urban governance, urban planning and urban sustainability.

Contributors also examine how the city, far from being a passive recipient of new technologies, is influencing and reframing AI through subtle processes of co-constitution. The book advances three main contributions and arguments:

  • First, it provides empirical evidence of the emergence of a post-smart trajectory for cities in which new material and decision-making capabilities are being assembled through multiple AIs.
  • Second, it stresses the importance of understanding the mutually constitutive relations between the new experiences enabled by AI technology and the urban context.
  • Third, it engages with the concepts required to clarify the opaque relations that exist between AI and the city, as well as how to make sense of these relations from a theoretical perspective…(More)”.

After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support for provisions meant to promote the free flow of data across borders. Also abandoned, as part of the ongoing negotiations on international e-commerce (the so-called Joint Statement Initiative process), were efforts to protect the source code in applications and algorithms.

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), similar to those that exist for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

Steering Responsible AI: A Case for Algorithmic Pluralism


Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion to the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.

Want to know if your data are managed responsibly? Here are 15 questions to help you find out


Article by P. Alison Paprica et al: “As the volume and variety of data about people increases, so does the number of ideas about how data might be used. Studies show that many people want their data to be used for public benefit.

However, the research also shows that public support for use of data is conditional, and only given when risks such as those related to privacy, commercial exploitation and artificial intelligence misuse are addressed.

It takes a lot of work for organizations to establish data governance and management practices that mitigate risks while also encouraging beneficial uses of data. So much so, that it can be challenging for responsible organizations to communicate their data trustworthiness without providing an overwhelming amount of technical and legal details.

To address this challenge our team undertook a multiyear project to identify, refine and publish a short list of essential requirements for responsible data stewardship.

Our 15 minimum specification requirements (min specs) are based on a review of the scientific literature and the practices of 23 different data-focused organizations and initiatives.

As part of our project, we compiled over 70 public resources, including examples of organizations that address the full list of min specs: ICES, the Hartford Data Collaborative and the New Brunswick Institute for Research, Data and Training.

Our hope is that information related to the min specs will help organizations and data-sharing initiatives share best practices and learn from each other to improve their governance and management of data…(More)”.

Open data ecosystems: what models to co-create service innovations in smart cities?


Paper by Arthur Sarazin: “While smart cities have recently begun providing open data, how to organise the collective creation of data, knowledge and related products and services from this shared resource remains an open question. This paper gathers the literature on open data ecosystems to tackle the following research question: what models can be imagined to stimulate the collective co-creation of services between smart cities’ stakeholders acting as providers and users of open data? This issue is currently at stake in many municipalities, such as Lisbon, which has decided to position itself as a platform (O’Reilly, 2010) in the local digital ecosystem. With the implementation of its City Operation Center (COI), Lisbon’s municipality provides an Information Infrastructure (Bowker et al., 2009) to many different types of actors, such as telecom companies, municipalities, energy utilities and transport companies. Through this infrastructure, Lisbon encourages these actors to gather, integrate and release heterogeneous datasets and tries to orchestrate synergies among them so that data-driven solutions to urban problems can emerge (Carvalho and Vale, 2018). The remaining question is: what models can municipalities such as Lisbon lean on to drive this cutting-edge type of service innovation?…(More)”.
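
As an aside from us rather than the paper: much of the co-creation described above presupposes that stakeholders can discover and pull one another's datasets programmatically. The sketch below assumes a CKAN-style open data catalogue at a hypothetical URL; it is not a confirmed endpoint for Lisbon's City Operation Center, just an illustration of how a provider or user might query such an infrastructure.

```python
"""
Minimal sketch: searching a CKAN-style open data portal.
The portal URL is hypothetical; any CKAN-compatible catalogue exposes the
same package_search action.
"""
import requests

PORTAL = "https://opendata.example-city.org"  # hypothetical portal URL


def search_datasets(query: str, rows: int = 5) -> list[dict]:
    """Search the portal's catalogue via CKAN's package_search action."""
    resp = requests.get(
        f"{PORTAL}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["results"]


if __name__ == "__main__":
    for dataset in search_datasets("mobility"):
        formats = {r.get("format", "?") for r in dataset.get("resources", [])}
        print(dataset["title"], "-", ", ".join(sorted(formats)))
```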

Gaza and the Future of Information Warfare


Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.

In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.

The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.

Generative AI and Policymaking for the New Frontier


Essay by Beth Noveck: “…Embracing the same responsible experimentation approach taken in Boston and New Jersey and expanding on the examples in those interim policies, this November the state of California issued an executive order and a lengthy but clearly written report, enumerating potential benefits from the use of generative AI.

These include:

  1. Sentiment Analysis — Using generative AI (GenAI) to analyze public feedback on state policies and services (a minimal sketch follows this list).
  2. Summarizing Meetings — GenAI can find the key topics, conclusions, action items and insights.
  3. Improving Benefits Uptake — AI can help identify public program participants who would benefit from additional outreach. GenAI can also identify groups that are disproportionately not accessing services.
  4. Translation — Generative AI can help translate government forms and websites into multiple languages.
  5. Accessibility — GenAI can be used to translate materials, especially educational materials, into formats like audio, large print or Braille, or to add captions.
  6. Cybersecurity — GenAI models can analyze data to detect and respond to cyber attacks faster and safeguard public infrastructure.
  7. Updating Legacy Technology — Because it can analyze and generate computer code, generative AI can accelerate the upgrading of old computer systems.
  8. Digitizing Services — GenAI can help speed up the creation of new technology. And with GenAI, anyone can create computer code, enabling even nonprogrammers to develop websites and software.
  9. Optimizing Routing — GenAI can analyze traffic patterns and ride requests to improve efficiency of state-managed transportation fleets, such as buses, waste collection trucks or maintenance vehicles.
  10. Improving Sustainability — GenAI can be applied to optimize resource allocation and enhance operational efficiency. GenAI simulation tools could, for example, “model the carbon footprint, water usage and other environmental impacts of major infrastructure projects.”
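
To make the first item concrete, here is a minimal sketch of ours (not taken from the California report) of how public comments might be labelled by sentiment with a generative model. It assumes the OpenAI Python client (v1+) with an API key in the environment; the model name and prompt wording are illustrative, and any comparable GenAI service would work.

```python
"""
Minimal sketch: labelling public feedback by sentiment with a generative model.
Assumes the OpenAI Python client (v1+) and OPENAI_API_KEY in the environment;
the model name and prompt wording are illustrative, not from the state report.
"""
from openai import OpenAI

client = OpenAI()


def classify_sentiment(comment: str) -> str:
    """Return a one-word sentiment label for a single public comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of this public comment about a "
                    "state service as exactly one word: positive, negative, "
                    "or neutral."
                ),
            },
            {"role": "user", "content": comment},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    sample_comments = [
        "The new online renewal portal saved me a trip to the office.",
        "I waited three weeks for a reply and still have no answer.",
    ]
    for comment in sample_comments:
        print(classify_sentiment(comment), "-", comment)
```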

Because generative AI tools can both create and analyze content, these 10 are just a small subset of the many potential applications of generative AI in governing…(More)”.

Urban Artificial Intelligence: From Real-world Observations to a Paradigm-Shifting Concept


Blog by Hubert Beroche: “Cities are facing unprecedented challenges. The figures are well known: while occupying only 2% of the earth’s surface, urban settlements host more than 50% of the global population and are responsible for 70% of greenhouse gas emissions. While concentrating most capital and human wealth, they are also places of systemic inequalities (Nelson, 2023), exacerbating and materializing social imbalances. In the meantime, cities have fewer and fewer resources to face those tensions. Increasing environmental constraints, combined with shrinking public budgets, are putting pressure on cities’ capacities. More than ever, urban stakeholders have to do more with less.

In this context, Artificial Intelligence has usually been seen as a much-welcomed technology. This technology can be defined as machines’ ability to perform cognitive functions, achieved mainly through learning algorithms since 2012. First embedded in heavy top-down Smart City projects, AI applications in cities have gradually proliferated under the impetus of various stakeholders. Today’s cities are home to numerous AIs, owned and used by multiple stakeholders to serve different, sometimes divergent, interests.

The diversity of urban AIs in cities is well illustrated in our project co-produced with Cornell Tech: “The Future of Urban AI”. This graph represents different urban AI trends based on The Future of UrbanTech Horizon Scan. Each colored dot represents a major urban tech/urban AI trend, with its ramifications. Some of these trends are opposed but still cohabiting (e.g., “Dark Plans” and “New Screen Deal”)…(More)”.

Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America


Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report, “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe” (artificial intelligence centred on Indigenous peoples: perspectives from Latin America and the Caribbean), authored by Cristina Martinez and Luz Elena Gonzalez, has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and various stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies, ensuring they align with the values, needs, and priorities of Indigenous peoples.

The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.

While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap. Many initiatives are often conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.