National strategies on Artificial Intelligence: A European perspective

Report by the European Commission’s Joint Research Centre (JRC) and the OECD’s Science, Technology and Innovation Directorate: “Artificial intelligence (AI) is transforming the world in many aspects. It is essential for Europe to consider how to make the most of the opportunities from this transformation and to address its challenges. In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence that was developed together with the Member States to maximise the impact of investments at European Union (EU) and national levels, and to encourage synergies and cooperation across the EU.

One of the key actions towards these aims was an encouragement for the Member States to develop their national AI strategies. The review of national strategies is one of the tasks of AI Watch, launched by the European Commission to support the implementation of the Coordinated Plan on Artificial Intelligence.

Building on the 2020 AI Watch review of national strategies, this report presents an updated review of national AI strategies from the EU Member States, Norway and Switzerland. By June 2021, 20 Member States and Norway had published national AI strategies, while 7 Member States were in the final drafting phase. Since the 2020 release of the AI Watch report, additional Member States – i.e. Bulgaria, Hungary, Poland, Slovenia, and Spain – published strategies, while Cyprus, Finland and Germany have revised their initial strategies.

This report provides an overview of national AI policies according to the following policy areas: Human capital, From the lab to the market, Networking, Regulation, and Infrastructure. These policy areas are consistent with the actions proposed in the Coordinated Plan on Artificial Intelligence and with the policy recommendations to governments contained in the OECD Recommendation on AI. The report also includes a section on AI policies to address societal challenges of the COVID-19 pandemic and climate change….(More)”.

Linux Foundation unveils new permissive license for open data collaboration

VentureBeat: “The Linux Foundation has announced a new permissive license designed to help foster collaboration around open data for artificial intelligence (AI) and machine learning (ML) projects.

Data may be the new oil, but for AI and ML projects, having access to expansive and diverse datasets is key to reducing bias and building powerful models capable of all manner of intelligent tasks. For machines, data is a little like “experience” is for humans — the more of it you have, the better decisions you are likely to make.

With CDLA-Permissive-2.0, the Linux Foundation is building on its previous efforts to encourage data-sharing through licensing arrangements that clearly define how the data — and any derivative datasets — can and can’t be used.

The Linux Foundation introduced the Community Data License Agreement (CDLA) in 2017 to entice organizations to open up their vast pools of (underused) data to third parties. There were two original licenses: a sharing license with a “copyleft” reciprocal commitment borrowed from the open source software sphere, stipulating that any derivative datasets built from the original dataset must be shared under a similar license; and a permissive license (1.0) without any such obligations (much as “true” open source software might be defined).

Licenses are basically legal documents that outline how a piece of work (in this case datasets) can be used or modified, but specific phrases, ambiguities, or exceptions can often be enough to spook companies if they think releasing content under a specific license could cause them problems down the line. This is where the CDLA-Permissive-2.0 license comes into play — it’s essentially a rewrite of version 1.0 but shorter and simpler to follow. Going further, it has removed certain provisions that were deemed unnecessary or burdensome and may have hindered broader use of the license.

For example, version 1.0 of the license included obligations that data recipients preserve attribution notices in the datasets. For context, attribution notices or statements are standard in the software sphere, where a company that releases software built on open source components has to credit the creators of these components in its own software license. But the Linux Foundation said feedback it received from the community and lawyers representing companies involved in open data projects pointed to challenges around associating attributions with data (or versions of datasets).

So while data source attribution is still an option, and might make sense for specific projects — particularly where transparency is paramount — it is no longer a condition for businesses looking to share data under the new permissive license. The chief remaining obligation is that the main community data license agreement text be included with the new datasets…(More)”.

Civic Space Scan of Finland

OECD Report: “At the global level, civic space is narrowing and thus efforts to protect and promote it are more important than ever. The OECD defines Civic Space as the set of legal, policy, institutional, and practical conditions necessary for non-governmental actors to access information, express themselves, associate, organise, and participate in public life. This document presents the Civic Space Scan of Finland, which was undertaken at the request of the Finnish government and is the first OECD report of its kind. OECD Civic Space Scans in particular assess how governments protect and promote civic space in each national context and propose ways to strengthen existing frameworks and practices. The Scan assesses four key dimensions of civic space: civic freedoms and rights, media freedoms and digital rights, the enabling environment for civil society organisations, and civic participation in policy and decision making. Each respective chapter of the report contains actionable recommendations for the Government of Finland. As part of the scan process, a citizens’ panel – also overseen by the OECD – was held in February 2021 and generated a wide range of recommendations for the government from a representative microcosm of Finnish society….(More)”.

Public Administration and Democracy: The Virtue and Limit of Participatory Democracy as a Democratic Innovation

Paper by Sirvan Karimi: “The expansion of public bureaucracy has been one of the most significant developments that has marked societies, particularly Western liberal democratic societies. Growing political apathy, citizen disgruntlement and the ensuing decline in electoral participation reflect the political nature of governance failures. Public bureaucracy, which has historically been saddled with derogatory and pejorative connotations, has encountered fierce assaults from multiple fronts. Out of these sharp criticisms of public bureaucracy, which have emanated from both sides of the ideological spectrum, attempts have been made to popularize and advance citizen participation in both policy formulation and policy implementation processes as innovations to democratize public administration. Despite their virtue, empowering connotations and spirit-uplifting messages to the public, these proposed practices of democratic innovation not only have their own shortcomings and risk exacerbating the very conditions they are meant to ameliorate, but also have the potential to undermine traditional administrative and political accountability mechanisms….(More)”.

Realtime Climate

Climate Central: “…launched this tool to help meteorologists and journalists cover connections between weather, news, and climate in real time, and to alert public and private organizations and individuals about particular local conditions related to climate change, its impacts, or its solutions.

Realtime Climate monitors local weather and events across the U.S. and generates alerts when certain conditions are met or expected. These alerts provide links to science-based analyses and visualizations—including locality-specific, high-quality graphics—that can help explain events in the context of climate change….

Alerts are sent when particular conditions occur or are forecast to occur in the next few days. Examples include:

  • Unusual heat (single day and multi-day)
  • Heat Index
  • Unusual Rainfall
  • Coastal Flooding
  • Air Quality
  • Allergies
  • Seasonal shifts (spring leaf-out, etc.)
  • Ice/snow cover (Great Lakes)
  • Cicadas
  • High local or regional production of solar or wind energy

More conditions will be added soon, including:

  • Drought
  • Wildfire
  • and many more…(More)”.
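Climate Central does not publish its alerting rules, but the trigger logic described above can be sketched in a minimal, purely hypothetical form: flag an "unusual heat" alert when a forecast day exceeds a high percentile of a local historical baseline. The function names, the 95th-percentile cutoff, and the toy data are all assumptions for illustration.

```python
# Hypothetical sketch of threshold-based alerting (Realtime Climate's actual
# rules are not public): flag "unusual heat" when a forecast temperature
# exceeds the 95th percentile of a local historical baseline.

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ranked = sorted(values)
    k = max(0, min(len(ranked) - 1, round(pct / 100 * len(ranked)) - 1))
    return ranked[k]

def heat_alerts(history, forecast, pct=95):
    """Return (day, temp) pairs from `forecast` above the historical percentile."""
    threshold = percentile(history, pct)
    return [(day, t) for day, t in forecast if t > threshold]

baseline = [18, 20, 21, 22, 23, 24, 25, 26, 27, 35]   # °C, toy 10-day record
upcoming = [("Mon", 24), ("Tue", 36), ("Wed", 29)]
print(heat_alerts(baseline, upcoming))  # [('Tue', 36)]
```

A production system would of course use a multi-decade climatological baseline per location rather than a ten-value toy list, and separate rules per alert type (heat index, rainfall, air quality, and so on).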


Metroverse

About: “Metroverse is an urban economy navigator built at the Growth Lab at Harvard University. It is based on over a decade of research on how economies grow and diversify and offers a detailed look into the specialization patterns of cities.

As a dynamic resource, the tool is continually evolving with new data and features to help answer questions such as:

  • What is the economic composition of my city?
  • How does my city compare to cities around the globe?
  • Which cities look most like mine?
  • What are the technological capabilities that underpin my city’s current economy?
  • Which growth and diversification paths does that suggest for the future?

As city leaders, job seekers, investors and researchers grapple with 21st century urbanization challenges, the answers to these questions are fundamental to understanding the potential of a city.

Metroverse delivers new insights on these questions by placing a city’s technological capabilities and knowhow at the heart of its growth prospects, where the range and nature of existing capabilities strongly influences how future diversification unfolds. Metroverse makes visible what a city is good at today to help understand what it can become tomorrow…(More)”.
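A common building block in the economic-complexity research Metroverse draws on is revealed comparative advantage (RCA): a city "specializes" in an industry when that industry's share of the city's employment exceeds its share of employment across all cities. The sketch below is illustrative only, not Metroverse's actual code, and the city and industry names are invented.

```python
# Illustrative (not Metroverse's actual code): revealed comparative advantage.
# RCA > 1 means the industry is over-represented in the city relative to the
# industry's share of employment across all cities.

def rca(employment, city, industry):
    """employment: dict mapping (city, industry) -> number of jobs."""
    city_total = sum(v for (c, _), v in employment.items() if c == city)
    ind_total = sum(v for (_, i), v in employment.items() if i == industry)
    grand_total = sum(employment.values())
    share_in_city = employment.get((city, industry), 0) / city_total
    share_overall = ind_total / grand_total
    return share_in_city / share_overall

jobs = {
    ("Helsinki", "software"): 40, ("Helsinki", "textiles"): 10,
    ("Porto", "software"): 10, ("Porto", "textiles"): 40,
}
print(round(rca(jobs, "Helsinki", "software"), 2))  # 1.6 -> specialized
print(round(rca(jobs, "Porto", "software"), 2))     # 0.4 -> under-represented
```

On top of a matrix of such specializations, relatedness between industries can then be estimated from how often they co-locate, which is the kind of signal behind "which diversification paths does that suggest" above.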

Moving up: Promoting workers’ upward mobility using network analysis

Report by Marcela Escobari, Ian Seyal and Carlos Daboin Contreras: “The U.S. economy faces a mobility crisis. After decades of rising inequality, stagnating wages, and a shrinking middle class, many American workers find it harder and harder to get ahead. COVID-19 accentuated a stark divide, battering a two-tiered labor force with millions of low-wage workers lacking job security and benefits—as the long-term trends of globalization, digitalization, and automation continue to displace jobs and disrupt career paths.

To address this crisis and create an economy that works for everyone, policymakers and business leaders must act boldly and urgently. But the challenge of low mobility is complex and driven by many factors, with significant heterogeneity across regions, sectors, and demographic groups. When diagnostics fail to disentangle the complexity, our standard policy responses—centered on education, reskilling, and other reemployment services to help workers adapt—fall short.

This report offers a new approach to better understand the contours of mobility: Who is falling behind, where, and by how much. Using data on hundreds of thousands of real workers’ occupational transitions, we use network analysis to create a multidimensional map of the labor market, revealing a landscape riddled with mobility gaps and barriers. Workers in low-wage occupations face particular hurdles, and persistent racial and gender disparities hold some workers back more than others.
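The core idea of the network analysis described above can be sketched minimally: treat observed job moves as weighted directed edges between occupations, then read likely pathways off the graph. This is a toy reduction of the report's methodology, with invented occupation names and counts.

```python
from collections import Counter, defaultdict

# Toy sketch of the kind of network analysis described (the report's actual
# methodology is far richer): build a directed occupation-transition graph
# from observed job moves, with edge weights as empirical probabilities.

def transition_graph(moves):
    """moves: list of (from_occupation, to_occupation) pairs."""
    counts = Counter(moves)
    totals = defaultdict(int)
    for (src, _), n in counts.items():
        totals[src] += n
    return {(src, dst): n / totals[src] for (src, dst), n in counts.items()}

def likeliest_next(graph, occupation):
    """Most probable destination occupation, or None if no outgoing edges."""
    edges = {dst: p for (src, dst), p in graph.items() if src == occupation}
    return max(edges, key=edges.get) if edges else None

moves = [("cashier", "customer service")] * 6 + [("cashier", "stock clerk")] * 2
g = transition_graph(moves)
print(likeliest_next(g, "cashier"))  # customer service (p = 0.75)
```

Mobility gaps then show up as occupations whose highest-probability outgoing edges lead only to similarly paid work, which is exactly the pattern the report maps at scale.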

Even so, many workers travel on pathways to economic mobility. By showing where existing pathways can be expanded and where new ones are needed, this report helps policymakers, community organizations, higher education institutions, and business leaders better understand the challenge of mobility and see where and how to intervene, in order to help more workers move up faster….(More)”.

Serving the Citizens—Not the Bureaucracy

Report by Sascha Haselmayer: “In a volatile and changing world, one government function is in a position to address challenges ranging from climate change to equity to local development: procurement. Too long confined to a mission of cost savings and compliance, procurement—particularly at the local level, where decisions have a real and immediate impact on citizens—has the potential to become a significant catalyst of change.

In 2021 alone, cities around the globe will spend an estimated $6.4 trillion, or 8 percent of GDP, on procurement. Despite this vast buying power, city procurement faces several challenges, including resistance to the idea that procurement can be creative, strategic, economically formidable—and even an affirming experience for professional staff, citizens, civil society organizations, and other stakeholders.

Unfortunately, city procurement is far from ready to overcome these hurdles. Interviews with city leaders and procurement experts point to a common failing: city procurement today is structured to serve bureaucracies—not citizens.

City procurement is in a state of creative tension. Leaders want it to be a creative engine for change, but they underfund procurement teams and foster a compliance culture that leaves no room for much-needed creative and critical thinking. In short: procurement needs a mission.

In this report, we propose cities reimagine procurement as a public service, which can unlock a world of ideas for change and improvement. The vision presented in this report is based on six strategic measures that can help cities get started. The path forward involves not only taking concrete actions, such as reducing barriers to participation of diverse suppliers, but also adopting a new mindset about the purpose and potential of procurement. By doing so, cities can reduce costs and develop creative, engaging solutions to citywide problems. We also offer detailed insights, ideas, and best practices for how practitioners can realize this new vision.

Better city procurement offers the promise of a vast return on investment. Cost savings stand to exceed 15 percent across the board, and local development may benefit by multiplying the participation of small and disadvantaged businesses. Clarity of mission and the required professional skills can lead to new, pioneering innovations. Technology and the right data can lead to sustained performance and better outcomes. A healthy supplier ecosystem can deliver new supplier talent that is aligned with the goals of the city to reduce carbon emissions, serve complex needs, and diversify the supply chain.

All of this not in service of the bureaucracy but of the citizen….(More)”.

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations? 

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.
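NISTIR 8332 lists contributing trust factors but, as a discussion draft, does not prescribe a scoring formula. A purely hypothetical weighted-average sketch still makes the caption's point concrete: the same per-factor scores can yield different overall trust once the weights shift with the risk of the task. Factor names and weights below are invented for illustration.

```python
# Purely hypothetical (NISTIR 8332 prescribes no formula): identical factor
# scores produce different overall trust under risk-dependent weights.

def trust_score(factor_scores, weights):
    """Weighted average of factor scores in [0, 1]; weights need not sum to 1."""
    total = sum(weights.values())
    return sum(factor_scores[f] * w for f, w in weights.items()) / total

# Same scores for both systems (factor names are illustrative, not NIST's nine).
scores = {"accuracy": 0.9, "explainability": 0.4, "reliability": 0.8}

low_risk = {"accuracy": 3, "explainability": 1, "reliability": 2}   # music picks
high_risk = {"accuracy": 2, "explainability": 4, "reliability": 4}  # diagnosis

print(round(trust_score(scores, low_risk), 2))   # 0.78: opacity tolerated
print(round(trust_score(scores, high_risk), 2))  # 0.66: explainability dominates
```

The identical `scores` dictionary scoring lower under the high-risk weighting mirrors the music-versus-diagnosis example in the caption.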

Mass, Computer-Generated, and Fraudulent Comments

Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments (which, as discussed below, we refer to as “malattributed comments”) are comments falsely attributed to persons who did not, in fact, submit them. Computer-generated comments are generated not by humans, but rather by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit comments to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there are a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).
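One standard technique agencies could apply to the "identical or nearly identical" batches mentioned above (an illustration of a common approach, not a method the report endorses) is shingle-based similarity: break each comment into overlapping word n-grams and compare sets with the Jaccard index, flagging pairs above a cutoff as likely campaign duplicates. The sample comments and the 0.5 cutoff are invented.

```python
# Illustration of near-duplicate detection for comment batches (not a method
# prescribed by the report): word-level 3-gram "shingles" + Jaccard similarity.

def shingles(text, n=3):
    """Set of overlapping word n-grams from a comment."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    """Similarity in [0, 1]: size of shared shingles over size of all shingles."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

campaign = "Please withdraw this rule because it harms small businesses."
variant = "Please withdraw this rule because it harms small firms."
unrelated = "I support the proposed emissions standard as written."

print(jaccard(campaign, variant) > 0.5)    # True: likely same campaign
print(jaccard(campaign, unrelated) > 0.5)  # False: independent comment
```

Detecting computer-generated (as opposed to merely duplicated) comments is a harder problem, as the report notes, since text-generation algorithms can vary wording well beyond what simple shingle overlap catches.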

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice and comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.