Data collaboration to enable the EU Green Deal


Article by Justine Gangneux: “In the fight against climate change, local authorities are increasingly turning to cross-sectoral data sharing as a game-changing strategy.

This collaborative approach empowers cities and communities to harness a wealth of data from diverse sources, enabling them to pinpoint emission hotspots, tailor policies for maximum impact, and allocate resources wisely.

Data can also strengthen climate resilience by engaging local communities and facilitating real-time progress tracking…

In recent years, more and more local data initiatives aimed at tackling climate change have emerged, spanning urban planning, mobility, adaptation, and energy management.

Such is the case of Porto’s CityCatalyst – the project put five demonstrators in place to showcase smart city infrastructure and develop data standards and models, contributing to the efficient and integrated management of urban flows…

In Latvia, Riga is also exploring data solutions such as visualisations, aggregation or analytics as part of its Positive Energy District strategy. Driven by the national Energy Efficiency Law, the city is developing a project to monitor energy consumption based on building utility use data (heat, electricity, gas, or water), customer and billing data, and Internet of Things (IoT) smart meter data from individual buildings…
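A monitoring project of this kind ultimately reduces to aggregating per-building meter readings across utility types. A minimal sketch in pandas of that aggregation step, with hypothetical column names since the Riga project's data schema is not public:

```python
import pandas as pd

# Hypothetical IoT smart-meter readings: one row per reading
readings = pd.DataFrame({
    "building_id": ["B1", "B1", "B1", "B2"],
    "utility":     ["heat", "electricity", "heat", "heat"],
    "consumption": [120.0, 35.5, 80.0, 98.0],  # kWh-equivalent
})

# Total consumption per building and utility type
totals = (
    readings.groupby(["building_id", "utility"], as_index=False)["consumption"]
    .sum()
)
print(totals)
```

In practice, readings would stream in from metering endpoints and be joined against customer and billing records; the groupby above is just the core roll-up that visualisation and analytics layers would sit on.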

As these examples show, it is not just public data that holds the key; private sector data, from utilities such as energy or water to telecoms, offers cities valuable insights in their efforts to tackle climate change…(More)”.

The Future of AI Is GOMA


Article by Matteo Wong: “A slate of four AI companies might soon rule Silicon Valley…Chatbots and their ilk are still in their early stages, but everything in the world of AI is already converging around just four companies. You could refer to them by the acronym GOMA: Google, OpenAI, Microsoft, and Anthropic. Shortly after OpenAI released ChatGPT last year, Microsoft poured $10 billion into the start-up and shoved OpenAI-based chatbots into its search engine, Bing. Not to be outdone, Google announced that more AI features were coming to Search, Maps, Docs, and more, and introduced Bard, its own rival chatbot. Microsoft and Google are now in a race to integrate generative AI into just about everything. Meanwhile, Anthropic, a start-up launched by former OpenAI employees, has raised billions of dollars in its own right, including from Google. Companies such as Slack, Expedia, Khan Academy, Salesforce, and Bain are integrating ChatGPT into their products; many others are using Anthropic’s chatbot, Claude. Executives from GOMA have also met with leaders and officials around the world to shape the future of AI’s deployment and regulation. The four have overlapping but separate proposals for AI safety and regulation, but they have joined together to create the Frontier Model Forum, a consortium whose stated mission is to protect against the supposed world-ending dangers posed by terrifyingly capable models that do not yet exist but, it warns, are right around the corner. That existential language—about bioweapons and nuclear robots—has since made its way into all sorts of government proposals and language. If AI is truly reshaping the world, these companies are the sculptors…”…(More)”.

Policy brief: Generative AI


Policy Brief by Ann Kristin Glenster and Sam Gilbert: “The rapid rollout of generative AI models, and public attention to OpenAI’s ChatGPT, has raised concerns about AI’s impact on the economy and society. In the UK, policy-makers are looking to large language models and other so-called foundation models as ways to potentially improve economic productivity.

This policy brief outlines which policy levers could support those goals. The authors argue that the UK should pursue becoming a global leader in applying generative AI to the economy. Rather than use public support for building new foundation models, the UK could support the growing ecosystem of startups that develop new applications for these models, creating new products and services.

This policy brief answers three key questions:

  1. What policy infrastructure and social capacity does the UK need to lead and manage deployment of responsible generative AI (over the long term)?
  2. What national capability does the UK need for large-scale AI systems in the short- and medium-term?
  3. What governance capacity does the UK need to deal with fast-moving technologies, in which large uncertainties are a feature, not a bug?…(More)”.

Towards an Inclusive Data Governance Policy for the Use of Artificial Intelligence in Africa


Paper by Jake Okechukwu Effoduh, Ugochukwu Ejike Akpudo and Jude Dzevela Kong: “This paper proposes five ideas that the design of data governance policies for the inclusive use of artificial intelligence (AI) in Africa should consider. The first is for African states to carry out an assessment of their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data is processed in a fair, lawful, and accountable manner; minimize the harmful effects of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to align with supranational rights-respecting AI standards such as the African Charter on Human and Peoples’ Rights and the AU Convention on Cybersecurity and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is the need to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist…(More)”.

Setting Democratic Ground Rules for AI: Civil Society Strategies


Report by Beth Kerley: “…analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations from a private Forum workshop in Buenos Aires, Argentina, that brought together Latin American and global researchers and civil society practitioners.

With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit – but how can we ensure that democratic norms and institutions shape the trajectory of AI?

Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles – from misleading narratives to government opacity to gaps in technical expertise – that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies…(More)”.

Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities


Paper by Alison B. Powell et al: “This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level…(More)”.

Predictive Policing Software Terrible At Predicting Crimes


Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.

Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.

We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
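The figures quoted above are plain hit rates: matched predictions divided by total predictions. The arithmetic behind “less than half a percent” can be checked directly (the helper below is our own illustration, not The Markup’s analysis code):

```python
def hit_rate(hits: int, predictions: int) -> float:
    """Share of predictions that matched a later-reported crime."""
    return hits / predictions

# Fewer than 100 hits out of 23,631 predictions
overall = hit_rate(100, 23_631)
print(f"{overall:.2%}")  # roughly 0.42%, i.e. under half a percent
```

The category-level rates (0.6 percent for robberies/aggravated assaults, 0.1 percent for burglaries) follow the same calculation over the subsets of predictions for each crime type.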

“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.

Our Planet Powered by AI: How We Use Artificial Intelligence to Create a Sustainable Future for Humanity


Book by Mark Minevich: “…You’ll learn to create sustainable, effective competitive advantage by introducing previously unheard-of levels of adaptability, resilience, and innovation into your company.

Using real-world case studies from a variety of well-known industry leaders, the author explains the strategic archetypes, technological infrastructures, and cultures of sustainability you’ll need to ensure your firm’s next-level digital transformation takes root. You’ll also discover:

  • How AI can enable new business strategies, models, and ecosystems of innovation and growth
  • How to develop societal impact and powerful organizational benefits with ethical AI implementations that incorporate transparency, fairness, privacy, and reliability
  • What it means to enable all-inclusive artificial intelligence

An engaging and hands-on exploration of how to take your firm to new levels of dynamism and growth, Our Planet Powered by AI will earn a place in the libraries of managers, executives, directors, and other business and technology leaders seeking to distinguish their companies in a new age of astonishing technological advancement and fierce competition…(More)”.

Facilitating Data Flows through Data Collaboratives


A Practical Guide “to Designing Valuable, Accessible, and Responsible Data Collaboratives” by Uma Kalkar, Natalia González Alarcón, Arturo Muente Kunigami and Stefaan Verhulst: “Data is an indispensable asset in today’s society, but its production and sharing are subject to well-known market failures. Among these: neither economic nor academic markets efficiently reward costly data collection and quality assurance efforts; data providers cannot easily supervise the appropriate use of their data; and, correspondingly, users have weak incentives to pay for, acknowledge, and protect data that they receive from providers. Data collaboratives are a potential non-market solution to this problem, bringing together data providers and users to address these market failures. The governance frameworks for these collaboratives are varied and complex and their details are not widely known. This guide proposes a methodology and a set of common elements that facilitate experimentation and creation of collaborative environments. It offers guidance to governments on implementing effective data collaboratives as a means to promote data flows in Latin America and the Caribbean, harnessing their potential to design more effective services and improve public policies…(More)”.

Artificial Intelligence and the Labor Force


Report by Tobias Sytsma and Éder M. Sousa: “The rapid development of artificial intelligence (AI) has the potential to revolutionize the labor force with new generative AI tools that are projected to contribute trillions of dollars to the global economy by 2040. However, this opportunity comes with concerns about the impact of AI on workers and labor markets. As AI technology continues to evolve, there is a growing need for research to understand the technology’s implications for workers, firms, and markets. This report addresses this pressing need by exploring the relationship between occupational exposure and AI-related technologies, wages, and employment.

Using natural language processing (NLP) to identify semantic similarities between job task descriptions and U.S. technology patents awarded between 1976 and 2020, the authors evaluate occupational exposure to all technology patents in the United States, as well as to specific AI technologies, including machine learning, NLP, speech recognition, planning control, AI hardware, computer vision, and evolutionary computation.
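The core operation here — scoring how semantically close a job’s task description sits to a patent’s text — can be illustrated with a bag-of-words cosine similarity. This is a deliberately simplified stand-in (the report’s actual NLP pipeline is more sophisticated, and the example texts below are invented):

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) \
         * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Invented occupation task description and two invented patent abstracts
task = "sort and package goods on an automated assembly line"
patents = [
    "automated system to sort and package goods using computer vision",
    "a method for fermenting dairy products at controlled temperatures",
]
scores = [cosine(task, p) for p in patents]
# The automation patent shares far more task vocabulary than the dairy one
print([f"{s:.2f}" for s in scores])
```

Scaled up to every occupation–patent pair and aggregated over time, similarity scores of this kind are what let the authors track which occupational groups were most exposed to which technology categories.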

The authors’ findings suggest that exposure to both general technology and AI technology patents is not uniform across occupational groups, over time, or across technology categories. They estimate that up to 15 percent of U.S. workers were highly exposed to AI technology patents by 2019 and find that the correlation between technology exposure and employment growth can depend on the routineness of the occupation. This report contributes to the growing literature on the labor market implications of AI and provides insights that can inform policy discussions around this emerging issue…(More)”