Paper by Arthur Sarazin: “While smart cities have recently begun providing open data, how to organise the collective creation of data, knowledge and related products and services from this shared resource has yet to be worked out. This paper draws on the literature on open data ecosystems to tackle the following research question: what models can be imagined to stimulate the collective co-creation of services between smart cities’ stakeholders acting as providers and users of open data? This issue is currently at stake in many municipalities, such as Lisbon, which has decided to position itself as a platform (O’Reilly, 2010) in the local digital ecosystem. With the implementation of its City Operation Center (COI), Lisbon’s municipality provides an Information Infrastructure (Bowker et al., 2009) to many different types of actors, such as telecom companies, municipalities, energy utilities and transport companies. Through this infrastructure, Lisbon encourages these actors to gather, integrate and release heterogeneous datasets and tries to orchestrate synergies among them so that data-driven solutions to urban problems can emerge (Carvalho and Vale, 2018). The question that remains is: what models can municipalities such as Lisbon lean on to drive this cutting-edge type of service innovation?…(More)”.
Gaza and the Future of Information Warfare
Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.
In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.
The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.
Generative AI and Policymaking for the New Frontier
Essay by Beth Noveck: “…Embracing the same responsible experimentation approach taken in Boston and New Jersey and expanding on the examples in those interim policies, this November the state of California issued an executive order and a lengthy but clearly written report, enumerating potential benefits from the use of generative AI.
These include:
- Sentiment Analysis — Using generative AI (GenAI) to analyze public feedback on state policies and services.
- Summarizing Meetings — GenAI can find the key topics, conclusions, action items and insights.
- Improving Benefits Uptake — AI can help identify public program participants who would benefit from additional outreach. GenAI can also identify groups that are disproportionately not accessing services.
- Translation — Generative AI can help translate government forms and websites into multiple languages.
- Accessibility — GenAI can be used to translate materials, especially educational materials into formats like audio, large print or Braille or to add captions.
- Cybersecurity — GenAI models can analyze data to detect and respond to cyber attacks faster and safeguard public infrastructure.
- Updating Legacy Technology — Because it can analyze and generate computer code, generative AI can accelerate the upgrading of old computer systems.
- Digitizing Services — GenAI can help speed up the creation of new technology. And with GenAI, anyone can create computer code, enabling even nonprogrammers to develop websites and software.
- Optimizing Routing — GenAI can analyze traffic patterns and ride requests to improve efficiency of state-managed transportation fleets, such as buses, waste collection trucks or maintenance vehicles.
- Improving Sustainability — GenAI can be applied to optimize resource allocation and enhance operational efficiency. GenAI simulation tools could, for example, “model the carbon footprint, water usage and other environmental impacts of major infrastructure projects.”
Because generative AI tools can both create and analyze content, these 10 are just a small subset of the many potential applications of generative AI in governing…(More)”.
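The first use case on the list, sentiment analysis of public feedback, can be sketched in a few lines. This is a minimal illustration, not California's actual tooling: `classify_sentiment` is a hypothetical stand-in for a call to a generative model (a real deployment would send the prompt template below to an LLM API), using a simple keyword heuristic so the sketch stays self-contained.

```python
# Minimal sketch of GenAI-style sentiment analysis of public feedback.
# NOTE: classify_sentiment is a hypothetical stand-in for an LLM call;
# PROMPT_TEMPLATE shows the shape of the request a generative model
# would actually receive in a real deployment.

from collections import Counter

PROMPT_TEMPLATE = (
    "Classify the sentiment of this public comment on a state service "
    "as positive, negative, or neutral:\n\n{comment}"
)

POSITIVE = {"great", "helpful", "easy", "fast", "thank"}
NEGATIVE = {"slow", "confusing", "broken", "unfair", "waited"}

def classify_sentiment(comment: str) -> str:
    """Keyword heuristic standing in for a generative model's response."""
    words = set(comment.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def summarize_feedback(comments: list[str]) -> Counter:
    """Aggregate per-comment labels into counts an agency could report."""
    return Counter(classify_sentiment(c) for c in comments)

comments = [
    "The renewal portal was fast and easy to use",
    "I waited three weeks and the form was confusing",
    "Office hours are listed on the website",
]
print(summarize_feedback(comments))
```

The aggregation step, not the classifier, is where the public value lies: per-comment labels roll up into a picture of how a policy or service is being received.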
Urban Artificial Intelligence: From Real-world Observations to a Paradigm-Shifting Concept
Blog by Hubert Beroche: “Cities are facing unprecedented challenges. The figures are well known: while occupying only 2% of the earth’s surface, urban settlements host more than 50% of the global population and are responsible for 70% of greenhouse gas emissions. While concentrating most capital and human wealth, they are also places of systemic inequalities (Nelson, 2023), exacerbating and materializing social imbalances. In the meantime, cities have fewer and fewer resources to face those tensions. Increasing environmental constraints, combined with shrinking public budgets, are putting pressure on cities’ capacities. More than ever, urban stakeholders have to do more with less.
In this context, Artificial Intelligence has usually been seen as a much-welcomed technology. AI can be defined as machines’ ability to perform cognitive functions; since 2012, this has mainly been achieved through learning algorithms. First embedded in heavy top-down Smart City projects, AI applications in cities have gradually proliferated under the impetus of various stakeholders. Today’s cities are home to numerous AIs, owned and used by multiple stakeholders to serve different, sometimes divergent, interests.
Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America
Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report, “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe” (Artificial intelligence centred on Indigenous peoples: perspectives from Latin America and the Caribbean), authored by Cristina Martinez and Luz Elena Gonzalez, has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and various stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies, ensuring they align with the values, needs, and priorities of Indigenous peoples.
The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.
While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap. Many initiatives are often conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.
A Manifesto on Enforcing Law in the Age of ‘Artificial Intelligence’
Manifesto by the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’: “… calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto…(More)”.
Using AI to support people with disability in the labour market
OECD Report: “People with disability face persisting difficulties in the labour market. There are concerns that AI, if managed poorly, could further exacerbate these challenges. Yet, AI also has the potential to create more inclusive and accommodating environments and might help remove some of the barriers faced by people with disability in the labour market. Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market…(More)”.
AI and Democracy’s Digital Identity Crisis
Paper by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”.
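The “network of similarly evolving identities that verify one another” that the authors favor can be illustrated with a toy web-of-trust model. This is an illustrative sketch under simplified assumptions, not any of the named systems (EAS, Gitcoin Passport, etc.): each identity accumulates attestations from peers, and a threshold over distinct attesters decides whether it counts as verified.

```python
# Toy web-of-trust sketch: identities attest to one another, and an
# identity is treated as "verified" once enough distinct peers vouch
# for it. Illustrative only -- real attestation systems add
# cryptographic signatures, revocation, and attester weighting.

from collections import defaultdict

class WebOfTrust:
    def __init__(self, threshold: int = 2):
        self.threshold = threshold            # distinct attesters required
        self.attestations = defaultdict(set)  # subject -> set of attesters

    def attest(self, attester: str, subject: str) -> None:
        """Record that one identity vouches for another."""
        if attester != subject:               # self-attestation is ignored
            self.attestations[subject].add(attester)

    def is_verified(self, subject: str) -> bool:
        """An identity is verified once enough distinct peers attest."""
        return len(self.attestations[subject]) >= self.threshold

wot = WebOfTrust(threshold=2)
wot.attest("alice", "bob")
wot.attest("carol", "bob")
wot.attest("bob", "dave")        # only one attester so far
wot.attest("dave", "dave")       # ignored: no self-attestation

print(wot.is_verified("bob"))    # two distinct attesters vouch for bob
print(wot.is_verified("dave"))   # below threshold
```

The contrast with a centralized registry is visible even at this scale: no single record-keeper decides who is verified; the judgment emerges from the set of peers willing to attest, which is the resilience property the paper argues for.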
Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet)
Paper by Eunice Yiu, Eliza Kosoy, and Alison Gopnik: “Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skills, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce…(More)”.
Public Value of Data: B2G data-sharing Within the Data Ecosystem of Helsinki
Paper by Vera Djakonoff: “Datafication penetrates all levels of society. In order to harness public value from an expanding pool of privately produced data, there has been growing interest in facilitating business-to-government (B2G) data-sharing. This research examines the development of B2G data-sharing within the data ecosystem of the City of Helsinki. It identifies the expectations ecosystem actors hold for B2G data-sharing and the factors that influence the city’s ability to unlock public value from privately produced data.
The research context is smart cities, with a specific focus on the City of Helsinki. Smart cities are in an advantageous position to develop novel public-private collaborations. Helsinki, on the international stage, stands out as a pioneer in the realm of data-driven smart city development. For this research, nine data ecosystem actors representing the city and companies participated in semi-structured thematic interviews through which their perceptions and experiences were mapped.
The theoretical framework of this research draws from the public value management (PVM) approach in examining the smart city data ecosystem and alignment of diverse interests for a shared purpose. Additionally, the research transcends the examination of the interests in isolation and looks at how technological artefacts shape the social context and interests surrounding them. Here, the focus is on the properties of data as an artefact with anti-rival value-generation potential.
The findings of this research reveal that while ecosystem actors recognise that more value can be drawn from data through collaboration, this is not apparent at the level of individual initiatives and transactions. This research shows that the city’s commitment to and facilitation of a long-term shared sense of direction and purpose among ecosystem actors is central to developing B2G data-sharing for public value outcomes. Here, participatory experimentation is key, promoting an understanding of the value of data and rendering visible the diverse motivations and concerns of ecosystem actors, enabling learning for wise, data-driven development…(More)”.