Handbook edited by Nathalie A. Smuha: “…provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI’s impact on society and how it should be regulated…(More)”.
Public participation in policymaking: exploring and understanding impact
Report by the Scottish Government: “This research builds on that framework and seeks to explore how Scottish Government might better understand the impact of public participation on policy decision-making. As detailed above, there is a plethora of potential and anticipated benefits that may arise from increased citizen participation in policy decision-making, as well as a great deal of participatory activity already taking place across the organisation. Now is an opportune time to consider impact, to support the design and delivery of participatory engagements that are impactful and that are more likely to realise the benefits of public participation. Through a review of academic and grey literature along with stakeholder engagement, this study aims to answer the following questions:
1. How is impact conceptualised in literature related to public participation, and what are some practice examples?
2. How is impact conceptualised by stakeholders, and what do they perceive as the current blockers, challenges or facilitators in a Scottish Government setting?
3. What evaluation tools or frameworks are used to evaluate the impact of public participation processes, and which ones might be applicable or usable in a Scottish Government setting?…(More)”.
Net zero: the role of consumer behaviour
Horizon Scan by the UK Parliament: “According to research from the Centre for Climate Change and Social Transformation, reaching net zero by 2050 will require individual behaviour change, particularly when it comes to aviation, diet and energy use.
The government’s 2023 Powering Up Britain: Net Zero Growth Plan referred to low carbon choices as ‘green choices’, describing them as the public and businesses choosing green products, services and goods. The plan sets out six principles regarding policies to facilitate green choices. Both the Climate Change Committee and the House of Lords Environment and Climate Change Committee have recommended that government strategies incorporate greater societal and behavioural change policies and guidance.
Contributors to the horizon scan identified managing consumer behaviour and habits to help achieve net zero as a topic of importance for parliament over the next five years. Changes in consumer behaviour could deliver approximately 60% of the emission reductions required to reach net zero.[5] Behaviour change will be needed from the wealthiest in society, who, according to Oxfam, typically lead higher-carbon lifestyles.
Incorporating behavioural science principles into policy levers is a well-established method of encouraging desired behaviours. Common examples of policies aiming to influence behaviour include subsidies, regulation and information campaigns (see below).
However, others suggest deliberative public engagement approaches, such as the UK Climate Change Assembly,[7] may be needed to determine which pro-environmental policies are acceptable.[8] Repeated public engagement is seen as key to achieving a just transition, as different groups will need different support to enable their green choices (PN 706).
Researchers debate the extent to which individuals should be responsible for making green choices, as opposed to the regulatory and physical environment facilitating them, or whether markets, businesses and governments should be the main actors responsible for driving action. They highlight the need for different actions depending on the context and on the different ways individuals act as consumers, as citizens, and within organisations and groups. Health, time, comfort and status can strongly influence individual decisions, while finance and regulation are typically stronger motivations for organisations (PN 714)…(More)”.
Empowering open data sharing for social good: a privacy-aware approach
Paper by Tânia Carvalho et al: “The Covid-19 pandemic has affected the world at multiple levels. Data sharing was pivotal for advancing research to understand the underlying causes and implement effective containment strategies. In response, many countries have facilitated access to daily case data to support research initiatives, fostering collaboration between organisations and making such data available to the public through open data platforms. Despite the many advantages of data sharing, one of the major concerns before releasing health data is its impact on individuals’ privacy. Such a sharing process should adhere to state-of-the-art methods in Data Protection by Design and by Default. In this paper, we use a Covid-19 data set from Portugal’s second-largest hospital to show how it is feasible to ensure data privacy while improving the quality and maintaining the utility of the data. Our goal is to demonstrate how knowledge exchange in multidisciplinary teams of healthcare practitioners, data privacy experts, and data science experts is crucial to co-developing strategies that ensure high utility in de-identified data…(More)”.
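The excerpt above does not reproduce the paper’s de-identification pipeline, but a minimal sketch of the kind of privacy-aware preparation it describes might look like the following. The column names, synthetic records, and the k threshold are illustrative assumptions, not details taken from the study.

```python
# A minimal, illustrative de-identification pass over a synthetic tabular data set.
# Column names, records, and the k threshold are assumptions for this sketch,
# not details from the paper.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age":        [34, 37, 61, 65],
    "postcode":   ["4000-123", "4000-456", "4450-001", "4450-702"],
    "outcome":    ["recovered", "recovered", "hospitalised", "recovered"],
})

# 1. Suppress direct identifiers.
df = df.drop(columns=["patient_id"])

# 2. Generalise quasi-identifiers: ages into 10-year bands, postcodes to their prefix.
df["age"] = pd.cut(df["age"], bins=range(0, 101, 10), right=False).astype(str)
df["postcode"] = df["postcode"].str.slice(0, 4)

# 3. Crude k-anonymity check: every quasi-identifier combination should occur
#    at least k times before the data set is released.
k = 2
group_sizes = df.groupby(["age", "postcode"]).size()
print(df)
print(f"k={k} satisfied:", bool((group_sizes >= k).all()))
```

In practice the utility of the released data would also be checked against the original, for example by comparing case counts per area and week before and after generalisation.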
Call to make tech firms report data centre energy use as AI booms
Article by Sandra Laville: “Tech companies should be required by law to report the energy and water consumption for their data centres, as the boom in AI risks causing irreparable damage to the environment, experts have said.
AI is growing at a rate unparalleled by other energy systems, bringing heightened environmental risk, a report by the National Engineering Policy Centre (NEPC) said.
The report calls for the UK government to make tech companies submit mandatory reports on their energy and water consumption and carbon emissions in order to set conditions in which data centres are designed to use fewer vital resources…(More)”.
The new politics of AI
Report by the IPPR: “AI is fundamentally different from other technologies – it is set to unleash a vast number of highly sophisticated ‘artificial agents’ into the economy. AI systems that can take actions and make decisions are not just tools – they are actors. This can be a good thing. But it requires a novel type of policymaking and politics. Merely accelerating AI deployment and hoping it will deliver public value will likely be insufficient.
In this briefing, we outline how the summit constitutes the first event of a new era of AI policymaking that links AI policy to delivering public value. We argue that AI needs to be directed towards societies’ goals, via ‘mission-based policies’…(More)”.
Unlocking AI’s potential for the public sector
Article by Ruth Kelly: “…Government needs to work on its digital foundations. The extent of legacy IT systems across government is huge. Many were designed and built for a previous business era, and still rely on paper-based processes. Historic neglect and a lack of asset maintenance have added to the difficulty. Because many systems are not compatible, sharing data across systems requires manual extraction, which is risky and costly. All this adds to problems with data quality. Government suffers from data which is incomplete, inconsistent, inaccessible, difficult to process and not easily shareable. A lack of common data models, both between and within government departments, makes it difficult and costly to combine different sources of data, and significant manual effort is required to make data usable. Some departments have told us that they spend 60% to 80% of their time on cleaning data when carrying out analysis.
Why is this an issue for AI? Large volumes of good-quality data are important for training, testing and deploying AI models. Poor data leads to poor outcomes, especially where it involves personal data. Access to good-quality data was identified as a barrier to implementing AI by 62% of the 87 government bodies responding to our survey. Simple productivity improvements that integrate with routine administration (for example, summarising documents) are already possible, but integration with big, established legacy IT is a whole other long-term endeavour. Layering new technology on top of existing systems, and reusing poor-quality and ageing data, carries the risk of magnifying problems and further embedding reliance on legacy systems…(More)”.
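As an illustration of the basic profiling that this kind of cleaning effort implies (not an example drawn from the article), a short quality check over a tabular extract might look like the sketch below; the dataset, column names and checks are invented for the purpose.

```python
# Illustrative data-quality profiling of a tabular extract before it is used to
# train or prompt an AI system. The records and checks are assumptions for this
# sketch, not figures from the article.
import pandas as pd

records = pd.DataFrame({
    "case_id": ["A1", "A2", "A2", "A4", None],
    "opened":  ["2023-01-04", "2023/01/05", "2023-01-05", None, "2023-01-09"],
    "region":  ["North", "north", "South", "South", "SOUTH"],
    "amount":  [120.0, None, 98.5, 45.0, 45.0],
})

report = {
    # Completeness: share of missing values per column.
    "missing_share": records.isna().mean().round(2).to_dict(),
    # Uniqueness: duplicated identifiers often indicate merge problems across systems.
    "duplicate_ids": int(records["case_id"].duplicated(keep=False).sum()),
    # Consistency: mixed date formats and casing are typical legacy artefacts.
    "date_formats_ok": bool(
        records["opened"].dropna().str.match(r"\d{4}-\d{2}-\d{2}").all()
    ),
    "region_values": sorted(records["region"].str.lower().unique()),
}
print(report)
```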
Local Government: Artificial intelligence use cases
Repository by the (UK) Local Government Association: “Building on the findings of our recent AI survey, which highlighted the need for practical examples, this bank showcases the diverse ways local authorities are leveraging AI.
Within this collection, you’ll discover a spectrum of AI adoption, ranging from utilising AI assistants to streamline back-office tasks to pioneering the implementation of bespoke Large Language Models (LLMs). These real-world use cases exemplify the innovative spirit driving advancements in local government service delivery.
Whether your council is at the outset of its AI exploration or seeking to expand its existing capabilities, this bank offers a wealth of valuable insights and best practices to support your organisation’s AI journey…(More)”.
Data Governance Meets the EU AI Act
Article by Axel Schwanke: “…The EU AI Act emphasizes sustainable AI through robust data governance, promoting principles like data minimization, purpose limitation, and data quality to ensure responsible data collection and processing. It mandates measures such as data protection impact assessments and retention policies. Article 10 underscores the importance of effective data management in fostering ethical and sustainable AI development…It requires that high-risk AI systems be developed using high-quality data sets for training, validation, and testing. These data sets should be managed properly, considering factors like data collection processes, data preparation, potential biases, and data gaps. The data sets should be relevant, representative, and, to the extent possible, free of errors and complete. They should also reflect the specific context in which the AI system will be used. In some cases, providers may process special categories of personal data to detect and correct biases, but they must follow strict conditions to protect individuals’ rights and freedoms…
However, achieving compliance presents several significant challenges:
- Ensuring Dataset Quality and Relevance: Organizations must establish robust data and AI platforms to prepare and manage datasets that are error-free, representative, and contextually relevant for their intended use cases. This requires rigorous data preparation and validation processes.
- Bias and Contextual Sensitivity: Continuous monitoring for biases in data is critical. Organizations must implement corrective actions to address gaps while ensuring compliance with privacy regulations, especially when processing personal data to detect and reduce bias.
- End-to-End Traceability: A comprehensive data governance framework is essential to track and document data flow from its origin to its final use in AI models. This ensures transparency, accountability, and compliance with regulatory requirements.
- Evolving Data Requirements: Dynamic applications and changing schemas, particularly in industries like real estate, necessitate ongoing updates to data preparation processes to maintain relevance and accuracy.
- Secure Data Processing: Compliance demands strict adherence to secure processing practices for personal data, ensuring privacy and security while enabling bias detection and mitigation.
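The Act does not prescribe tooling, but a minimal sketch of the kind of representation and bias check implied by the Article 10 requirements and the challenges above might look like the following. The protected attribute, labels, and the 0.8 disparity threshold are assumptions for illustration, not requirements stated in the article.

```python
# Illustrative representation and outcome-rate check over a training set, in the
# spirit of the Article 10 data governance requirements discussed above.
# Column names, the protected attribute, and the 0.8 threshold are assumptions.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 0, 1, 0, 0, 1, 0, 0],
})

# Representation: how much of the data set each group contributes.
representation = train["group"].value_counts(normalize=True)

# Outcome rates per group, and the ratio of the lowest to the highest rate
# (a crude disparity indicator, sometimes called the "80% rule").
positive_rate = train.groupby("group")["label"].mean()
disparity_ratio = positive_rate.min() / positive_rate.max()

print(representation.to_dict())
print(positive_rate.to_dict())
print(f"disparity ratio: {disparity_ratio:.2f} (review if below 0.8)")
```

A check like this would sit inside the broader governance framework described above, with its inputs and outputs logged to support end-to-end traceability.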
Example: Real Estate Data
Immowelt’s real estate price map, rated the top performer in a 2022 test of real estate price maps, exemplifies the challenges of achieving high-quality datasets. The prepared data powers numerous services and applications, including data analysis, price predictions, personalization, recommendations, and market research…(More)”.
Big data for decision-making in public transport management: A comparison of different data sources
Paper by Valeria Maria Urbano, Marika Arena, and Giovanni Azzone: “The conventional data used to support public transport management have inherent constraints related to scalability, cost, and their limited ability to capture spatial and temporal variability. These limitations underscore the importance of exploring innovative data sources to complement more traditional ones.
For public transport operators, who are tasked with making pivotal decisions spanning planning, operation, and performance measurement, innovative data sources are a frontier that is still largely unexplored. To fill this gap, this study first establishes a framework for evaluating innovative data sources, highlighting the specific characteristics that data should have to support decision-making in the context of transportation management. Second, a comparative analysis is conducted, using empirical data collected from primary public transport operators in the Lombardy region, with the aim of understanding whether and to what extent different data sources meet the above requirements.
The findings of this study support transport operators in selecting data sources aligned with different decision-making domains, highlighting related benefits and challenges. This underscores the importance of integrating different data sources to exploit their complementarities…(More)”.