Digitally Kind


Report by Anna Grant with Cliff Manning and Ben Thurman: “Over the past decade, and particularly since the outbreak of the COVID-19 pandemic, we have seen increasing use of digital technology in service provision by third and public sector organisations. But this increasing use brings challenges. The development and use of these technologies often outpace the organisational structures put in place to improve delivery and protect both individuals and organisations.

Digitally Kind is devised to help bridge the gaps between digital policy, process and practice to improve outcomes, introducing kindness as a value to underpin an organisational approach.

Based on workshops with over 40 practitioners and frontline staff, the report has been designed as a starting point to help organisations open up conversations around their use of digital in delivering services. Digitally Kind explores a range of technical, social and cultural considerations around the use of tech when working with individuals, covering values and governance; access; safety and wellbeing; knowledge and skills; and participation.

While the project predominantly focused on the experiences of practitioners and organisations working with young people, many of the principles hold true for other sectors. The research also highlights a short set of considerations for funders, policymakers (including regulators) and online platforms….(More)”.

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.

Knowledge Assets in Government


Draft Guidance by HM Treasury (UK): “Embracing innovation is critical to the future of the UK’s economy, society and its place in the world. However, one of the key findings of HM Treasury’s knowledge assets report, published at Budget 2018, was that there was little clear strategic guidance on how to realise value from intangibles or knowledge assets such as intellectual property, research & development, and data, which are pivotal for innovation.

This new draft guidance establishes the concept of managing knowledge assets in government and the public sector. It focuses on how to identify, protect and support their exploitation to help maximise the social, economic and financial value they generate.

The guidance provided in this document is intended to advise and support organisations in scope with their knowledge asset management and, in turn, to fulfil their responsibilities as set out in Managing Public Money (MPM). While the guidance clarifies best practice and provides recommendations, these should not be interpreted as additional rules. The draft guidance recommends that organisations:

  • develop a strategy for managing their knowledge assets, as part of their wider asset management strategy (a requirement of MPM)
  • appoint a Senior Responsible Owner (SRO) for knowledge assets who has clear responsibility for the organisation’s knowledge asset management strategy…(More)”.

The Co-Creation Compass: From Research to Action.


Policy Brief by Jill Dixon et al: “Modern public administrations face a wider range of challenges than in the past: from designing effective social services that help vulnerable citizens, to regulating data sharing between banks and fintech startups to ensure competition and growth, to mainstreaming gender policies effectively across the departments of a large public administration.

These very different goals have one thing in common. To be solved, they require collaboration with other entities – citizens, companies and other public administrations and departments. The buy-in of these entities is the factor determining success or failure in achieving the goals. To help resolve this problem, social scientists, researchers and students of public administration have devised several novel tools, some of which draw heavily on the most advanced management thinking of the last decade.

First and foremost is co-creation – an awkward-sounding word for a relatively simple idea: the notion that better services can be designed and delivered by listening to users; by creating feedback loops in which their success (or failure) can be studied; by frequently innovating and iterating incremental improvements through small-scale experimentation, so that they deliver large-scale learnings; and, ultimately, by involving users themselves in designing how these services can be made most effective and best delivered.

Co-creation tools and methods provide a structured manner for involving users, thereby maximising the probability of satisfaction, buy-in and adoption. As such, co-creation is not a digital tool; it is a governance tool. There is little doubt that working with citizens in re-designing the online service for school registration will boost the usefulness and effectiveness of the service. And failing to do so will result in yet another digital service struggling to gain adoption….(More)”

In AI We Trust: Power, Illusion and Control of Predictive Algorithms


Book by Helga Nowotny: “One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.

At the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.

As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future….(More)”.

How spooks are turning to superforecasting in the Cosmic Bazaar


The Economist: “Every morning for the past year, a group of British civil servants, diplomats, police officers and spies have woken up, logged onto a slick website and offered their best guess as to whether China will invade Taiwan by a particular date. Or whether Arctic sea ice will retrench by a certain amount. Or how far covid-19 infection rates will fall. These imponderables are part of Cosmic Bazaar, a forecasting tournament created by the British government to improve its intelligence analysis.

Since the website was launched in April 2020, more than 10,000 forecasts have been made by 1,300 forecasters, from 41 government departments and several allied countries. The site has around 200 regular forecasters, who must use only publicly available information to tackle the 30-40 questions that are live at any time. Cosmic Bazaar represents the gamification of intelligence. Users are ranked by a single, brutally simple measure: the accuracy of their predictions.
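The article does not say how that accuracy measure is computed, but forecasting tournaments of this kind typically score probability forecasts with the Brier score, the mean squared error between stated probabilities and what actually happened. A minimal sketch under that assumption (the forecaster data below is invented for illustration):

```python
def brier_score(forecasts):
    """Mean squared error between probability forecasts and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability
    is the forecast that the event happens and outcome is 1 or 0.
    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A well-calibrated, confident forecaster versus one who always hedges.
sharp = [(0.9, 1), (0.1, 0), (0.8, 1), (0.2, 0)]
hedger = [(0.5, 1), (0.5, 0), (0.5, 1), (0.5, 0)]

print(brier_score(sharp))   # about 0.025
print(brier_score(hedger))  # about 0.25
```

Ranking users by such a score rewards both calibration and decisiveness: the hedger is never badly wrong, but the confident, accurate forecaster still scores an order of magnitude better.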

Forecasting tournaments like Cosmic Bazaar draw on a handful of basic ideas. One of them, as seen in this case, is the “wisdom of crowds”, a concept first illustrated by Francis Galton, a statistician, in 1907. Galton observed that in a contest to estimate the weight of an ox at a county fair, the median guess of nearly 800 people was accurate to within 1% of the true figure.
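Galton's result is easy to reproduce in simulation. A minimal sketch, assuming each guess is the true weight plus independent noise (the 1,198 lb figure is from Galton's account; the noise level is invented for illustration):

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 1198  # pounds: the ox's dressed weight in Galton's account

# Simulate ~800 noisy individual guesses centred on the true weight.
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(800)]

# The crowd's aggregate estimate is the median guess, as Galton used.
crowd_estimate = statistics.median(guesses)
error_pct = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT * 100

print(f"Crowd median: {crowd_estimate:.0f} lb "
      f"({error_pct:.2f}% from the true weight)")
```

Individual errors of tens of pounds largely cancel in aggregate, which is why the median lands so close to the truth even though few individuals do.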

Crowdsourcing, as this idea is now called, has been augmented by more recent research into whether and how people make good judgments. Experiments by Philip Tetlock of the University of Pennsylvania, and others, show that experts’ predictions are often no better than chance. Yet some people, dubbed “superforecasters”, often do make accurate predictions, largely because of the way they form judgments—such as having a commitment to revising predictions in light of new data, and being aware of typical human biases. Dr Tetlock’s ideas received publicity last year when Dominic Cummings, then an adviser to Boris Johnson, Britain’s prime minister, endorsed his book and hired a controversial superforecaster to work at Mr Johnson’s office in Downing Street….(More)”.

Mapping Career Causeways


User Guide by Nesta: “This user guide shows how providers of careers information, advice and guidance, policymakers and employers can use our innovative data tools to support workers and job seekers as they navigate the labour market.

Nesta’s Mapping Career Causeways project, supported by J.P. Morgan as part of their New Skills at Work initiative, applies state-of-the-art data science methods to create an algorithm that recommends job transitions and retraining to workers, with a focus on supporting those at high risk of automation. The algorithm works by measuring the similarity between over 1,600 jobs, displayed in our interactive ‘map of occupations’, based on the skills and tasks that make up each role.
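Nesta's published measure is more sophisticated than this, but the core idea of recommending transitions by skill similarity can be sketched with cosine similarity over skill profiles (all job names and weights below are invented for illustration):

```python
import math

# Hypothetical skill profiles: each job is a sparse vector of skill weights.
jobs = {
    "travel agent":      {"customer service": 0.9, "booking systems": 0.8, "sales": 0.6},
    "call centre agent": {"customer service": 0.9, "sales": 0.5, "crm software": 0.7},
    "data analyst":      {"statistics": 0.9, "sql": 0.8, "reporting": 0.6},
}

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse skill vectors (0 to 1 here)."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend_transitions(source, jobs):
    """Rank all other jobs by skill similarity to the source job."""
    return sorted(
        ((other, cosine_similarity(jobs[source], profile))
         for other, profile in jobs.items() if other != source),
        key=lambda pair: pair[1], reverse=True)

for job, score in recommend_transitions("travel agent", jobs):
    print(f"{job}: {score:.2f}")
```

Scaled up to 1,600+ occupations and enriched with task data and automation-risk scores, ranking by a similarity measure of this kind is what lets the tool surface viable transitions a job seeker might not have considered.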

Following the publication of the Mapping Career Causeways report, data visualisation and open-source algorithm and codebase, we have developed a short user guide that demonstrates how you can take the insights and learnings from the Mapping Career Causeways project and implement them directly into your work….

The user guide shows how the Mapping Career Causeways research can be used to address common challenges identified by the stakeholders, such as:

  • Navigating the labour market can be overwhelming, and there is a need for a reliable source of insights (e.g. a tool) that helps to broaden a worker’s potential career opportunities whilst providing focused recommendations on the most valuable skills to invest in
  • There is no standardised data or a common ‘skills language’ to support career advice and guidance
  • There is a lack of understanding and clear data about which sectors are most at risk of automation, and which skills are most valuable for workers to invest in, in order to unlock lower-risk jobs
  • Most recruitment and transition practices rely heavily on relevant domain/sector experience and a worker’s contacts (i.e. who you know), and most employers do not take a skills-based approach to hiring
  • Fear and a lack of confidence and self-esteem are significant barriers to workers changing careers, in addition to barriers relating to time and finance
  • Localised information on training options, support for job seekers and live job opportunities would further enrich the model
  • Automation is just one of many trends that are changing the make-up and availability of jobs; other considerations such as digitalisation, the green transition, and regional factors must also be considered…(More)”.

Undoing Optimization: Civic Action in Smart Cities


Book by Alison B. Powell: “City life has been reconfigured by our use—and our expectations—of communication, data, and sensing technologies. This book examines the civic use, regulation, and politics of these technologies, looking at how governments, planners, citizens, and activists expect them to enhance life in the city. Alison Powell argues that the de facto forms of citizenship that emerge in relation to these technologies represent sites of contention over how governance and civic power should operate. These become more significant in an increasingly urbanized and polarized world facing new struggles over local participation and engagement. The author moves past the usual discussion of top-down versus bottom-up civic action and instead explains how citizenship shifts in response to technological change and particularly in response to issues related to pervasive sensing, big data, and surveillance in “smart cities.”…(More)”.

Data Access, Consumer Interests and Public Welfare


Book edited by Bundesministerium der Justiz und für Verbraucherschutz, and Max-Planck-Institut für Innovation und Wettbewerb: “Data are considered to be key for the functioning of the data economy as well as for pursuing multiple public interest concerns. Against this backdrop this book strives to devise new data access rules for future legislation. To do so, the contributions first explain the justification for such rules from an economic and more general policy perspective. Then, building on the constitutional foundations and existing access regimes, they explore the potential of various fields of the law (competition and contract law, data protection and consumer law, sector-specific regulation) as a basis for the future legal framework. The book also addresses the need to coordinate data access rules with intellectual property rights and to integrate these rules as one of multiple measures in larger data governance systems. Finally, the book discusses the enforcement of the Government’s interest in using privately held data as well as potential data access rights of the users of connected devices….(More)”.

Democratic institutions and prosperity: The benefits of an open society


Paper by the European Parliamentary Research Service: “The ongoing structural transformation and the rapid spread of the technologies of the fourth industrial revolution are challenging current democratic institutions and their established forms of governance and regulation. At the same time, these changes offer vast opportunities to enhance, strengthen and expand the existing democratic framework to reflect a more complex and interdependent world. This process has already begun in many democratic societies but further progress is needed.
Examining these issues involves looking at the impact of ongoing complex and simultaneous changes on the theoretical framework underpinning beneficial democratic regulation. More specifically, combining economic, legal and political perspectives, it is necessary to explore how some adaptations to existing democratic institutions could further improve the functioning of democracies while also delivering additional economic benefits to citizens and society as a whole.

The introduction of a series of promising new tools could offer a potential way to support democratic decision-makers in regulating complexity and tackling ongoing and future challenges. The first of these tools is strategic foresight, used to anticipate and prepare for future events; the second is collective intelligence, following the idea that citizens are collectively capable of providing better solutions to regulatory problems than are public administrations; the third and fourth are design thinking and algorithmic regulation respectively. Design-based approaches are credited with opening up innovative options for policy-makers, while algorithms hold the promise of enabling decision-making to handle complex issues while remaining participatory….(More)”.