Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century


Lynne Parker at the AI.gov website: “Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century.  Nearly every sector of the economy and society has been affected by the capabilities and potential of AI.  AI is enabling farmers to grow food more efficiently, medical researchers to better understand and treat COVID-19, scientists to develop new materials, transportation professionals to deliver more goods faster and with less energy, weather forecasters to more accurately predict the tracks of hurricanes, and national security protectors to better defend our Nation.

At the same time, AI has raised important societal concerns.  What is the impact of AI on the changing nature of work?  How can we ensure that AI is used appropriately, and does not result in unfair discrimination or bias?  How can we guard against uses of AI that infringe upon human rights and democratic principles?

These dual perspectives on AI have led to the concept of “trustworthy AI”.  Trustworthy AI is AI that is designed, developed, and used in a manner that is lawful, fair, unbiased, accurate, reliable, effective, safe, secure, resilient, and understandable, with processes in place to regularly monitor and evaluate the AI system’s performance and outcomes.

Achieving trustworthy AI requires an all-of-government and all-of-Nation approach, combining the efforts of industry, academia, government, and civil society.  The Federal government is doing its part through a national strategy, codified in the National AI Initiative Act of 2020 (NAIIA).  The National AI Initiative (NAII) builds upon several years of impactful AI policy actions, many of which were outcomes from EO 13859 on Maintaining American Leadership in AI.

Six key pillars define the Nation’s AI strategy:

  • prioritizing AI research and development;
  • strengthening AI research infrastructure;
  • advancing trustworthy AI through technical standards and governance;
  • training an AI-ready workforce;
  • promoting international AI engagement; and
  • leveraging trustworthy AI for government and national security.

Coordinating all of these efforts is the National AI Initiative Office, which is mandated by the NAIIA to coordinate and support the NAII.  This Office serves as the central point of contact for exchanging technical and programmatic information on AI activities at Federal departments and agencies, as well as related Initiative activities in industry, academia, nonprofit organizations, professional societies, State and tribal governments, and others.

The AI.gov website provides a portal for exploring in more depth the many AI actions, initiatives, strategies, programs, reports, and related efforts across the Federal government.  It serves as a resource for those who want to learn how to take full advantage of the opportunities of AI, and how the Federal government is advancing the design, development, and use of trustworthy AI….(More)”

Open Hardware: An Opportunity to Build Better Science


Report by Alison Parker et al: “Today’s research infrastructure, including scientific hardware, is unevenly distributed in the scientific community, severely limiting collaboration, customization, and impact. Open hardware for science provides an alternative to reliance on expensive and proprietary instrumentation while giving “people the freedom to control their technology while sharing knowledge and encouraging commerce through the open exchange of design.”

Open hardware can be modified and recombined to build diverse libraries of tools that serve as a freely available resource for use across several disciplines. By improving access to tools, open hardware for science encourages collaboration, accelerates innovation, and improves scientific reproducibility and repeatability. Open hardware for science is often less expensive than proprietary equivalents, allowing research laboratories to stretch funding further. Beyond scientific research, open hardware has proven benefits for a number of complementary policy priorities, including broadening public participation in science, making experiential STEM education more accessible, supporting crisis response, and improving distributed manufacturing capabilities.

Because of recent, bipartisan progress in open science, the U.S. government is well positioned to elevate and enhance the impact of open hardware in American science. By addressing key implementation challenges and prioritizing open hardware for science, we as a nation can build better infrastructure for future science, cement U.S. scientific leadership and innovation, and help the U.S. prepare for future crises. This report addresses the need to build a stronger foundation for science by prioritizing open hardware, describes the unique benefits of open hardware alongside complementary policy priorities, and briefly lays out implementation challenges to overcome. …(More)”.

Resetting Data Governance: Authorized Public Purpose Access and Society Criteria for Implementation of APPA Principles


Paper by the WEF Japan: “In January 2020, our first publication presented Authorized Public Purpose Access (APPA), a new data governance model that aims to strike a balance among individual rights, the interests of data holders, and the public interest. It is proposed that the use of personal data for public-health purposes, including fighting pandemics, be subject to appropriate and balanced governance mechanisms such as those set out in the APPA approach. The same approach could be extended to the use of data for non-medical public-interest purposes, such as achieving the United Nations Sustainable Development Goals (SDGs). This publication proposes a systematic approach to implementing APPA and to pursuing public-interest goals through data use. The approach values practicality, broad social agreement on appropriate goals and methods, and the valid interests of all stakeholders….(More)”.

Bridging the data-policy gap in Africa


Report by PARIS21 and the Mo Ibrahim Foundation (MIF): “National statistics are an essential component of policymaking: they provide the evidence required to design policies that address the needs of citizens, to monitor results, and to hold governments to account. Data and policy are closely linked. As Mo Ibrahim puts it: “without data, governments drive blind”. However, there is evidence that the capacity of African governments for data-driven policymaking remains limited by a wide data-policy gap.

What is the data-policy gap?
On the data side, statistical capacity across the continent has improved in recent decades. However, it remains low compared to other world regions and is hindered by several challenges. African national statistical offices (NSOs) often lack adequate financial and human resources, as well as the capacity to make data accessible and available. On the policy side, data literacy and a culture of placing data first in policy design and monitoring are still not widespread. Thus, investing in the basic building blocks of national statistics, such as civil registration, is often not a key priority.

At the same time, international development frameworks, such as the United Nations 2030 Agenda for Sustainable Development and the African Union Agenda 2063, require that every signatory country produce and use high-quality, timely and disaggregated data in order to shape development policies that leave no one behind and to fulfil reporting commitments.

Meanwhile, the new data ecosystem linked to digital technologies is generating an explosion of data sourced from non-state providers. Within this changing data landscape, African NSOs, like those in many other parts of the world, are confronted with a new data stewardship role. This will add further pressure on the capacity of NSOs and present additional challenges in terms of navigating issues of governance and use…

Recommendations as part of a six-point roadmap for bridging the data-policy gap include:

  1. Creating a statistical capacity strategy to raise funds
  2. Connecting to knowledge banks to hire and retain talent
  3. Building good narratives for better data use
  4. Recognising the power of foundational data
  5. Strengthening statistical laws to harness the data revolution
  6. Encouraging data use in policy design and implementation…(More)”

Digitally Kind


Report by Anna Grant with Cliff Manning and Ben Thurman: “Over the past decade, and particularly since the outbreak of the COVID-19 pandemic, we have seen increasing use of digital technology in service provision by third and public sector organisations. But with this increasing use come challenges. The development and use of these technologies often outpace the organisational structures put in place to improve delivery and protect both individuals and organisations.

Digitally Kind is devised to help bridge the gaps between digital policy, process and practice to improve outcomes, and introduces kindness as a value to underpin an organisational approach.

Based on workshops with over 40 practitioners and frontline staff, the report has been designed as a starting point to support organisations in opening up conversations around their use of digital in delivering services. Digitally Kind explores a range of technical, social and cultural considerations around the use of tech when working with individuals, covering values and governance; access; safety and wellbeing; knowledge and skills; and participation.

While the project predominantly focused on the experiences of practitioners and organisations working with young people, many of the principles hold true for other sectors. The research also highlights a short set of considerations for funders, policymakers (including regulators) and online platforms….(More)”.

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
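
The four tiers above amount to a mapping from risk class to regulatory consequence. Purely as an illustration (the proposal itself contains no code; the names, structure, and wording below are this sketch's own assumptions, not the Commission's), the taxonomy could be encoded as a simple lookup:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. government 'social scoring')
    HIGH = "high"                  # allowed, subject to strict pre-market obligations
    LIMITED = "limited"            # transparency obligations only (e.g. chatbots)
    MINIMAL = "minimal"            # free use (e.g. spam filters, AI-enabled video games)

# Obligations the press release lists for high-risk systems (paraphrased)
HIGH_RISK_OBLIGATIONS = [
    "adequate risk assessment and mitigation systems",
    "high-quality datasets to minimise risks and discriminatory outcomes",
    "activity logging to ensure traceability of results",
    "detailed documentation for authorities to assess compliance",
    "clear and adequate information to the user",
    "appropriate human oversight measures",
    "high level of robustness, security and accuracy",
]

def obligations_for(tier: RiskTier) -> list[str]:
    """Map a risk tier to its regulatory consequences (illustrative only)."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not be placed on the market"]
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["disclose to users that they are interacting with a machine"]
    return []  # minimal risk: the draft Regulation does not intervene

print(obligations_for(RiskTier.LIMITED))
```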

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.

Re-Thinking Think Tanks: Differentiating Knowledge-Based Policy Influence Organizations


Paper by Adam Wellstead and Michael P. Howlett: “The idea of “think tanks” is one of the oldest in the policy sciences. While the topic has been studied for decades, recent work dealing with advocacy groups, policy and Behavioural Insight labs, and the activities of think tanks themselves has led to discontent with the definitions used in the field, and especially with the way the term may obfuscate rather than clarify important distinctions between different kinds of knowledge-based policy influence organizations (KBPIO). In this paper, we examine the traditional and current definitions of think tanks utilized in the discipline and point out their weaknesses. We then develop a new framework to better capture the variation in such organizations, which operate in many sectors….(More)”.

Knowledge Assets in Government


Draft Guidance by HM Treasury (UK): “Embracing innovation is critical to the future of the UK’s economy, society and its place in the world. However, one of the key findings of HM Treasury’s knowledge assets report, published at Budget 2018, was that there was little clear strategic guidance on how to realise value from intangibles or knowledge assets such as intellectual property, research & development, and data, which are pivotal for innovation.

This new draft guidance establishes the concept of managing knowledge assets in government and the public sector. It focuses on how to identify and protect knowledge assets and support their exploitation, to help maximise the social, economic and financial value they generate.

The guidance provided in this document is intended to advise and support in-scope organisations with their knowledge asset management and, in turn, help them fulfil their responsibilities as set out in Managing Public Money (MPM). While the guidance clarifies best practice and provides recommendations, these should not be interpreted as additional rules. The draft guidance recommends that organisations:

  • develop a strategy for managing their knowledge assets, as part of their wider asset management strategy (a requirement of MPM)
  • appoint a Senior Responsible Owner (SRO) for knowledge assets who has clear responsibility for the organisation’s knowledge asset management strategy…(More)”.

The Co-Creation Compass: From Research to Action


Policy Brief by Jill Dixon et al: “Modern public administrations face a wider range of challenges than in the past: from designing effective social services that help vulnerable citizens, to regulating data sharing between banks and fintech startups to ensure competition and growth, to mainstreaming gender policies effectively across the departments of a large public administration.

These very different goals have one thing in common. To be solved, they require collaboration with other entities – citizens, companies and other public administrations and departments. The buy-in of these entities is the factor determining success or failure in achieving the goals. To help resolve this problem, social scientists, researchers and students of public administration have devised several novel tools, some of which draw heavily on the most advanced management thinking of the last decade.

First and foremost is co-creation – an awkward-sounding word for a relatively simple idea: the notion that better services can be designed and delivered by listening to users, by creating feedback loops where their success (or failure) can be studied, by frequently innovating and iterating incremental improvements through small-scale experimentation so they can deliver large-scale learnings, and by ultimately involving users themselves in designing the way these services can be made most effective and best be delivered.

Co-creation tools and methods provide a structured manner for involving users, thereby maximising the probability of satisfaction, buy-in and adoption. As such, co-creation is not a digital tool; it is a governance tool. There is little doubt that working with citizens in re-designing the online service for school registration will boost the usefulness and effectiveness of the service. And failing to do so will result in yet another digital service struggling to gain adoption….(More)”

Towards intellectual freedom in an AI Ethics Global Community


Paper by Christoph Ebell et al: “The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate, as well as academic research settings, involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors, and biases, and individual researchers’ work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession guiding a technological future that makes life better for all….(More)”.