Urban Development and the State of Open Data


Chapter by Stefaan G. Verhulst and Sampriti Saxena: “Nearly 4.4 billion people, or about 55% of the world’s population, lived in cities in 2018. By 2045, this number is anticipated to grow to 6 billion. Such a level of growth requires innovative and targeted urban solutions. By more effectively leveraging open data, cities can meet the needs of an ever-growing population in an effective and sustainable manner. This paper updates the previous contribution by Jean-Noé Landry, titled “Open Data and Urban Development” in the 2019 edition of The State of Open Data. It also aims to contribute to a further deepening of the Third Wave of Open Data, which highlights the significance of open data at the subnational level as a more direct and immediate response to the on-the-ground needs of citizens. It considers recent developments in how the use of, and approach to, open data has evolved within an urban development context. It seeks to discuss emerging applications of open data in cities, recent developments in open data infrastructure, governance and policies related to open data, and the future outlook of the role of open data in urbanization…(More)”.

Governing Urban Data for the Public Interest


Report by The New Hanse: “…This report represents the culmination of our efforts and offers actionable guidelines for European cities seeking to harness the power of data for the public good.

The key recommendations outlined in the report are:

1. Shift the Paradigm towards Democratic Control of Data: Advocate for a policy that defaults to making urban data accessible, requiring private data holders to share in the public interest.

2. Provide Legal Clarity in a Dynamic Environment: Address legal uncertainties by balancing privacy and confidentiality needs with the public interest in data accessibility, working collaboratively with relevant authorities at national and EU level.

3. Build a Data Commons Repository of Use Cases: Streamline data sharing efforts by establishing a standardised use case repository with common technical frameworks, procedures, and contracts.

4. Set up an Urban Data Intermediary for the Public Interest: Institutionalise data sharing, by building urban data intermediaries to address complexities, following principles of public purpose, transparency, and accountability.

5. Learn from the Hamburg Experiment and Scale It across Europe: Embrace experimentation as a vital step, even if outcomes are uncertain, to adapt processes for future innovations. Experiments at the local level can inform policy and scale nationally and across Europe…(More)”.

AI Adoption in America: Who, What, and Where


Paper by Kristina McElheran: “…We study the early adoption and diffusion of five AI-related technologies (automated-guided vehicles, machine learning, machine vision, natural language processing, and voice recognition) as documented in the 2018 Annual Business Survey of 850,000 firms across the United States. We find that fewer than 6% of firms used any of the AI-related technologies we measure, though most very large firms reported at least some AI use. Weighted by employment, average adoption was just over 18%. AI use in production, while varying considerably by industry, nevertheless was found in every sector of the economy and clustered with emerging technologies such as cloud computing and robotics. Among dynamic young firms, AI use was highest alongside more-educated, more-experienced, and younger owners, including owners motivated by bringing new ideas to market or helping the community. AI adoption was also more common alongside indicators of high-growth entrepreneurship, including venture capital funding, recent product and process innovation, and growth-oriented business strategies. Early adoption was far from evenly distributed: a handful of “superstar” cities and emerging hubs led startups’ adoption of AI. These patterns of early AI use foreshadow economic and social impacts far beyond this limited initial diffusion, with the possibility of a growing “AI divide” if early patterns persist…(More)”.

How language gaps constrain generative AI development


Article by Regina Ta and Nicol Turner Lee: “Prompt-based generative artificial intelligence (AI) tools are quickly being deployed for a range of use cases, from writing emails and compiling legal cases to personalizing research essays in a wide range of educational, professional, and vocational disciplines. But language is not monolithic, and opportunities may be missed in developing generative AI tools for non-standard languages and dialects. Current applications often are not optimized for certain populations or communities and, in some instances, may exacerbate social and economic divisions. As noted by the Austrian linguist and philosopher Ludwig Wittgenstein, “The limits of my language mean the limits of my world.” This is especially true today, when the language we speak can change how we engage with technology, and the limits of our online vernacular can constrain the full and fair use of existing and emerging technologies.

As it stands now, the majority of the world’s speakers are being left behind if they are not part of one of the world’s dominant languages, such as English, French, German, Spanish, Chinese, or Russian. There are over 7,000 languages spoken worldwide, yet a plurality of content on the internet is written in English, with the largest remaining online shares claimed by Asian and European languages like Mandarin or Spanish. Moreover, in the English language alone, there are over 150 dialects beyond “standard” U.S. English. Consequently, large language models (LLMs) that train AI tools, like generative AI, rely on binary internet data that serve to increase the gap between standard and non-standard speakers, widening the digital language divide.

Among sociologists, anthropologists, and linguists, language is a source of power and one that significantly influences the development and dissemination of new tools that are dependent upon learned, linguistic capabilities. Depending on where one sits within socio-ethnic contexts, native language can internally strengthen communities while also amplifying and replicating inequalities when coopted by incumbent power structures to restrict immigrant and historically marginalized communities. For example, during the transatlantic slave trade, literacy was a weapon used by white supremacists to reinforce the dependence of Blacks on slave masters, which resulted in many anti-literacy laws being passed in the 1800s in most Confederate states…(More)”.

Data collaboration to enable the EU Green Deal


Article by Justine Gangneux: “In the fight against climate change, local authorities are increasingly turning to cross-sectoral data sharing as a game-changing strategy.

This collaborative approach empowers cities and communities to harness a wealth of data from diverse sources, enabling them to pinpoint emission hotspots, tailor policies for maximum impact, and allocate resources wisely.

Data can also strengthen climate resilience by engaging local communities and facilitating real-time progress tracking…

In recent years, more and more local data initiatives aimed at tackling climate change have emerged, spanning from urban planning to mobility, adaptation and energy management.

Such is the case with Porto’s CityCatalyst: the project put five demonstrators in place to showcase smart cities infrastructure and develop data standards and models, contributing to the efficient and integrated management of urban flows…

In Latvia, Riga is also exploring data solutions such as visualisations, aggregation or analytics, as part of the Positive Energy District strategy. Driven by the national Energy Efficiency Law, the city is developing a project to monitor energy consumption based on building utility use data (heat, electricity, gas, or water), customer and billing data, and Internet of Things smart meter data from individual buildings…

As these examples show, it is not just public data that holds the key; private sector data, from utilities such as energy or water, to telecoms, offers cities valuable insights in their efforts to tackle climate change…(More)”.

The Future of AI Is GOMA


Article by Matteo Wong: “A slate of four AI companies might soon rule Silicon Valley…Chatbots and their ilk are still in their early stages, but everything in the world of AI is already converging around just four companies. You could refer to them by the acronym GOMA: Google, OpenAI, Microsoft, and Anthropic. Shortly after OpenAI released ChatGPT last year, Microsoft poured $10 billion into the start-up and shoved OpenAI-based chatbots into its search engine, Bing. Not to be outdone, Google announced that more AI features were coming to Search, Maps, Docs, and more, and introduced Bard, its own rival chatbot. Microsoft and Google are now in a race to integrate generative AI into just about everything. Meanwhile, Anthropic, a start-up launched by former OpenAI employees, has raised billions of dollars in its own right, including from Google. Companies such as Slack, Expedia, Khan Academy, Salesforce, and Bain are integrating ChatGPT into their products; many others are using Anthropic’s chatbot, Claude. Executives from GOMA have also met with leaders and officials around the world to shape the future of AI’s deployment and regulation. The four have overlapping but separate proposals for AI safety and regulation, but they have joined together to create the Frontier Model Forum, a consortium whose stated mission is to protect against the supposed world-ending dangers posed by terrifyingly capable models that do not yet exist but, it warns, are right around the corner. That existential language—about bioweapons and nuclear robots—has since migrated into all sorts of government proposals and language. If AI is truly reshaping the world, these companies are the sculptors…”…(More)”.

Policy brief: Generative AI


Policy Brief by Ann Kristin Glenster and Sam Gilbert: “The rapid rollout of generative AI models, and public attention to OpenAI’s ChatGPT, has raised concerns about AI’s impact on the economy and society. In the UK, policy-makers are looking to large language models and other so-called foundation models as ways to potentially improve economic productivity.

This policy brief outlines which policy levers could support those goals. The authors argue that the UK should pursue becoming a global leader in applying generative AI to the economy. Rather than use public support for building new foundation models, the UK could support the growing ecosystem of startups that develop new applications for these models, creating new products and services.

This policy brief answers three key questions:

  1. What policy infrastructure and social capacity does the UK need to lead and manage deployment of responsible generative AI (over the long term)?
  2. What national capability does the UK need for large-scale AI systems in the short- and medium-term?
  3. What governance capacity does the UK need to deal with fast-moving technologies, in which large uncertainties are a feature, not a bug?…(More)”.

Towards an Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa


Paper by Jake Okechukwu Effoduh, Ugochukwu Ejike Akpudo and Jude Dzevela Kong: “This paper proposes five ideas that the design of data governance policies for the inclusive use of artificial intelligence (AI) in Africa should consider. The first is for African states to carry out an assessment of their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data is processed in a fair, lawful, and accountable manner; minimize the harmful effects of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to be in alignment with supranational rights-respecting AI standards like the African Charter on Human and Peoples’ Rights and the AU Convention on Cybersecurity and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is the need to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist…(More)”.

Setting Democratic Ground Rules for AI: Civil Society Strategies


Report by Beth Kerley: “…analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations from a private Forum workshop in Buenos Aires, Argentina that brought together Latin American and global researchers and civil society practitioners.

With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit, but how can we ensure that democratic norms and institutions shape the trajectory of AI?

Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles, from misleading narratives to government opacity to gaps in technical expertise, that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies…(More)”.

Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities


Paper by Alison B. Powell et al: “This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level…(More)”.