Paper by Godofredo Ramizo Jr: “Governments around the world are launching projects that embed artificial intelligence (AI) in the delivery of public services. How can government officials navigate the complexities of AI projects and deliver successful outcomes? Using a review of the existing literature and interviews with senior government officials from Hong Kong, Malaysia, and Singapore who have worked on Smart City and similar AI-driven projects, this paper demonstrates the diversity of government AI projects and identifies practical lessons that help safeguard public interest. I make two contributions. First, I show that we can classify government AI projects based on their level of importance to government functions and the level of organisational resources available to them. These two dimensions result in four types of AI projects, each with its own risks and appropriate strategies. Second, I propose five general lessons for government AI projects in any field, and outline specific measures appropriate to each of the aforementioned types of AI projects….(More)”.
Enabling Trusted Data Collaboration in Society
Launch of Public Beta of the Data Responsibility Journey Mapping Tool: “Data Collaboratives, the purpose-driven reuse of data in the public interest, have demonstrated their ability to unlock the societal value of siloed data and create real-world impacts. Data collaboration has been key in generating new insights and action in areas like public health, education, crisis response, and economic development, to name a few. Designing and deploying a data collaborative, however, is a complex undertaking, subject to risks of misuse of data as well as missed use of data that could have provided public value if used effectively and responsibly.

Today, The GovLab is launching the public beta of a new tool intended to help Data Stewards — responsible data leaders across sectors — and other decision-makers assess and mitigate risks across the life cycle of a data collaborative. The Data Responsibility Journey is an assessment tool for Data Stewards to identify and mitigate risks, establish trust, and maximize the value of their work. Informed by The GovLab’s long-standing research and practice in the field, and myriad consultations with data responsibility experts across regions and contexts, the tool aims to support decision-making in public agencies, civil society organizations, large businesses, small businesses, and humanitarian and development organizations, in particular.
The Data Responsibility Journey guides users through important questions and considerations across the life cycle of data stewardship and collaboration: Planning, Collecting, Processing, Sharing, Analyzing, and Using. For each stage, users are asked to consider whether important data responsibility issues have been taken into account as part of their implementation strategy. When users flag an issue as in need of more attention, it is automatically added to a customized data responsibility strategy report providing actionable recommendations, relevant tools and resources, and key internal and external stakeholders that could be engaged to help operationalize these data responsibility actions…(More)”.
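To make that flow concrete, here is a minimal sketch in Python of how a stage-by-stage assessment of this kind could be structured. The six stage names come from the tool’s description; the questions, data structures, and report format are illustrative assumptions, not The GovLab’s actual implementation.

```python
# Illustrative sketch only: the life cycle stages below come from the tool's
# description; the questions, flags, and report format are hypothetical.

LIFECYCLE_QUESTIONS = {
    "Planning": ["Have you defined a clear public-interest purpose for data reuse?"],
    "Collecting": ["Is data minimization applied to what is collected?"],
    "Processing": ["Are de-identification or aggregation steps documented?"],
    "Sharing": ["Is there a data-sharing agreement covering permitted uses?"],
    "Analyzing": ["Are analytical methods reviewed for potential bias?"],
    "Using": ["Is there a feedback channel for affected communities?"],
}

def build_strategy_report(flags: dict[str, list[str]]) -> str:
    """Compile issues flagged by the user into a per-stage action report."""
    lines = ["Data Responsibility Strategy Report", "=" * 35]
    for stage, questions in LIFECYCLE_QUESTIONS.items():
        flagged = [q for q in questions if q in flags.get(stage, [])]
        if flagged:
            lines.append(f"{stage}: {len(flagged)} issue(s) need more attention")
            lines.extend(f"  - {q}" for q in flagged)
    return "\n".join(lines)

# Example: a user flags one issue during the Sharing stage.
print(build_strategy_report(
    {"Sharing": ["Is there a data-sharing agreement covering permitted uses?"]}
))
```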
A review of the evidence on developing and supporting policy and practice networks
Report by Ilona Haslewood: “In recent years, the Carnegie UK Trust has been involved in coordinating, supporting, and participating in a range of different kinds of networks. There are many reasons that people choose to develop networks as an approach to achieving a goal. We were interested in building our understanding of the evidence on the effectiveness of networks as a vehicle for policy and practice change.
In Autumn 2020, we began working with Ilona Haslewood to explore how to define a network, when it is appropriate to use this approach to achieve a particular goal, and what role charitable foundations can play in supporting the development of networks. These questions, and more, are examined in A review of the evidence on developing and supporting policy and practice networks, which was written by Ilona Haslewood. This review of evidence forms part of a broader exploration of the role of networks, which includes a case study summary of A Better Way….(More)”
A growing problem of ‘deepfake geography’: How AI falsifies satellite images
Kim Eckart at UW News: “A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.
Both images exemplify what a new University of Washington-led study calls “location spoofing.” The photos — created by different people, for different purposes — are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such “deepfake geography” could become a growing problem.
So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data, and call for a system of geographic fact-checking.
“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”
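The article focuses on how convincingly such imagery can be generated and how it might be caught. As a generic illustration of the detection side (not the authors’ actual pipeline), fake-imagery detection is commonly framed as a binary classification task over map tiles. A minimal PyTorch sketch, with an assumed tile size and a made-up architecture:

```python
# Hypothetical sketch of a real-vs-synthetic satellite-tile classifier.
# It does not reproduce the study's method; it only illustrates the general
# approach of training a small binary CNN detector on image tiles.
import torch
import torch.nn as nn

class TileDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(32, 2)  # 0 = real tile, 1 = synthetic tile

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Smoke test on a batch of four random 64x64 RGB "tiles".
model = TileDetector()
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```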
As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map….(More)”.
How to make good group decisions
Report by Nesta: “The report has five sections that cover different dimensions of group decisions: group composition, group dynamics, the decision-making process, the decision rule, and uncertainty…. Key takeaways:
- Diversity is the most important factor for a group’s collective intelligence. Both identity diversity and functional diversity (e.g. different skills and experience levels) are necessary for better problem solving and decision making.
- Increasing the size of the decision-making group can help to increase diversity, skills and creativity. Organisations could be much better at leveraging the wisdom of the crowd for certain tasks such as idea generation, prioritisation of options (especially eliminating bad options), and accurate forecasts (see the sketch after this list).
- A quick win for decision makers is to focus on developing cross-cutting skills within teams. Important skills to train in your teams include probabilistic reasoning to improve risk analysis, cognitive flexibility to make full use of available information, and perspective taking to correct for assumptions.
- It’s not always efficient for groups to push themselves to find the optimal solution or group consensus, and in many cases they don’t need to. ‘Satisficing’ helps to maintain quality under pressure by agreeing in advance what is ‘good enough’.
- Introducing intermittent breaks where group members work independently is known to improve problem solving for complex tasks. The best-performing teams tend to work in bursts of intense communication with little or no interaction in between.
- When the external world is unstable, like during a financial crisis or political elections, traditional sources of expertise often fail due to overconfidence. This is when novel data and insights gathered through crowdsourcing or collective intelligence methods that capture frontline experience are most important….(More)”.
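As a concrete illustration of the wisdom-of-the-crowd point above, here is a minimal Python sketch, with made-up numbers, of why aggregating independent estimates with a median is robust to the occasional wildly wrong answer:

```python
# Illustrative only: seven independent estimates of some quantity,
# one of which is a wild outlier.
from statistics import mean, median

estimates = [120, 135, 128, 140, 95, 131, 400]

print(f"mean:   {mean(estimates):.1f}")    # ~164.1, pulled upward by the outlier
print(f"median: {median(estimates):.1f}")  # 131.0, close to the group's consensus
```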
Developing a Data Reuse Strategy for Solving Public Problems
The Data Stewards Academy…A self-directed learning program from the Open Data Policy Lab (The GovLab): “Communities across the world face unprecedented challenges. Strained by climate change, crumbling infrastructure, growing economic inequality, and the continued costs of the COVID-19 pandemic, institutions need new ways of solving public problems and improving how they operate.
In recent years, data has been increasingly used to inform policies and interventions targeted at these issues. Yet, many of these data projects, data collaboratives, and open data initiatives remain scattered. As we enter a new age of data use and re-use, a third wave of open data, it is more important than ever to be strategic and purposeful, and to find new ways to connect the demand for data with its supply to meet institutional objectives in a socially responsible way.
This self-directed learning program, adapted from a selective executive education course, will help data stewards (and aspiring data stewards) develop a data re-use strategy to solve public problems. Noting the ways data resources can inform their day-to-day and strategic decision-making, the course shows learners how to use data to improve how they operate and pursue goals in the public interest. By working differently—using agile methods and data analytics—public, private, and civil sector leaders can promote data re-use and reduce data access inequities in ways that advance their institution’s goals.
In this self-directed learning program, we will teach participants how to develop a 21st century data strategy. Participants will learn:
- Why It Matters: A discussion of the three waves of open data and how data re-use has proven to be transformative;
- The Current State of Play: Current practice around data re-use, including deficits of current approaches and the need to shift from ad hoc engagements to more systematic, sustainable, and responsible models;
- Defining Demand: Methodologies for how organizations can formulate questions that data can answer and make data collaboratives more purposeful;
- Mapping Supply: Methods for organizations to discover and assess the open and private data that may be available to them to answer the questions at hand;
- Matching Supply with Demand: Operational models for connecting and meeting the needs of supply- and demand-side actors in a sustainable way;
- Identifying Risks: Overview of the risks that can emerge in the course of data re-use;
- Mitigating Risks and Other Considerations: Technical, legal, and contractual considerations that can be leveraged, or that may arise, in the course of data collaboration and other data work; and
- Institutionalizing Data Re-use: Suggestions for how organizations can incorporate data re-use into their organizational structure and foster future collaboration and data stewardship.
The Data Stewardship Executive Education Course was designed and implemented by program leads Stefaan Verhulst, co-founder and chief research and development officer at The GovLab, and Andrew Young, The GovLab’s knowledge director, in close collaboration with a global network of expert faculty and advisors. It aims to….(More)”.

Bridging the digital divide for underserved communities
Report by Deloitte: “…This “digital divide” was first noted more than 25 years ago as consumer communications needs shifted from landline voice to internet access. The economics of broadband spawned availability, adoption, and affordability disparities between rural and urban geographies and between lower- and higher-income segments. Today, the digital divide still presents a significant gap, even after more than $100 billion of infrastructure investment has been allocated by the US government over the past decade to address this issue. The current debate over additional funds for broadband deployment suggests that further examination is warranted of how to achieve broadband for all and the economic prosperity that would follow.
Quantifying the economic impact of bridging the digital divide clearly shows the criticality of broadband infrastructure to the US economy. Deloitte developed economic models to evaluate the relationship between broadband and economic growth. Our models indicate that a 10-percentage-point increase in broadband penetration in 2016 would have resulted in more than 806,000 additional jobs in 2019, or an average annual increase of 269,000 jobs. Moreover, we found a strong correlation between broadband availability and jobs and GDP growth. A 10-percentage-point increase in broadband access in 2014 would have resulted in more than 875,000 additional US jobs and $186 billion more in economic output in 2019. The analysis also showed that higher broadband speeds drive noticeable improvements in job growth, albeit with diminishing returns. As an example, the gain in jobs from 50 to 100 Mbps is more than the gain in jobs from 100 to 150 Mbps….(More)”.
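As a quick check on those figures: 806,000 additional jobs accrued over the three years from 2016 to 2019 works out to 806,000 ÷ 3 ≈ 269,000 jobs per year, consistent with the reported average annual increase.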
WHO, Germany launch new global hub for pandemic and epidemic intelligence
Press Release: “The World Health Organization (WHO) and the Federal Republic of Germany will establish a new global hub for pandemic and epidemic intelligence, data, surveillance and analytics innovation. The Hub, based in Berlin and working with partners around the world, will lead innovations in data analytics across the largest network of global data to predict, prevent, detect, prepare for, and respond to pandemic and epidemic risks worldwide.
H.E. German Federal Chancellor Dr Angela Merkel said: “The current COVID-19 pandemic has taught us that we can only fight pandemics and epidemics together. The new WHO Hub will be a global platform for pandemic prevention, bringing together various governmental, academic and private sector institutions. I am delighted that WHO chose Berlin as its location and invite partners from all around the world to contribute to the WHO Hub.”
The WHO Hub for Pandemic and Epidemic Intelligence is part of WHO’s Health Emergencies Programme and will be a new collaboration of countries and partners worldwide, driving innovations to increase the availability and linkage of diverse data; to develop tools and predictive models for risk analysis; and to monitor disease control measures, community acceptance, and infodemics. Critically, the WHO Hub will support the work of public health experts and policy-makers in all countries with insights so they can take rapid decisions to prevent and respond to future public health emergencies.
“We need to identify pandemic and epidemic risks as quickly as possible, wherever they occur in the world. For that aim, we need to strengthen the global early warning surveillance system with improved collection of health-related data and inter-disciplinary risk analysis,” said Jens Spahn, German Minister of Health. “Germany has consistently been committed to support WHO’s work in preparing for and responding to health emergencies, and the WHO Hub is a concrete initiative that will make the world safer.”
Working with partners globally, the WHO Hub will drive a scale-up in innovation for existing forecasting and early warning capacities in WHO and Member States. At the same time, the WHO Hub will accelerate global collaborations across public and private sector organizations, academia, and international partner networks. It will help them to collaborate and co-create the necessary tools for managing and analyzing data for early warning surveillance. It will also promote greater access to data and information….(More)”.
Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century
Lynne Parker at the AI.gov website: “Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century. Nearly every sector of the economy and society has been affected by the capabilities and potential of AI. AI is enabling farmers to grow food more efficiently, medical researchers to better understand and treat COVID-19, scientists to develop new materials, transportation professionals to deliver more goods faster and with less energy, weather forecasters to more accurately predict the tracks of hurricanes, and national security protectors to better defend our Nation.
At the same time, AI has raised important societal concerns. What is the impact of AI on the changing nature of work? How can we ensure that AI is used appropriately, and does not result in unfair discrimination or bias? How can we guard against uses of AI that infringe upon human rights and democratic principles?
These dual perspectives on AI have led to the concept of “trustworthy AI”. Trustworthy AI is AI that is designed, developed, and used in a manner that is lawful, fair, unbiased, accurate, reliable, effective, safe, secure, resilient, understandable, and with processes in place to regularly monitor and evaluate the AI system’s performance and outcomes.
Achieving trustworthy AI requires an all-of-government and all-of-Nation approach, combining the efforts of industry, academia, government, and civil society. The Federal government is doing its part through a national strategy, called the National AI Initiative Act of 2020 (NAIIA). The National AI Initiative (NAII) builds upon several years of impactful AI policy actions, many of which were outcomes from EO 13859 on Maintaining American Leadership in AI.
Six key pillars define the Nation’s AI strategy:
- prioritizing AI research and development;
- strengthening AI research infrastructure;
- advancing trustworthy AI through technical standards and governance;
- training an AI-ready workforce;
- promoting international AI engagement; and
- leveraging trustworthy AI for government and national security.
Coordinating all of these efforts is the National AI Initiative Office, which was established by the NAIIA to coordinate and support the NAII. This Office serves as the central point of contact for exchanging technical and programmatic information on AI activities at Federal departments and agencies, as well as related Initiative activities in industry, academia, nonprofit organizations, professional societies, State and tribal governments, and others.
The AI.gov website provides a portal for exploring in more depth the many AI actions, initiatives, strategies, programs, reports, and related efforts across the Federal government. It serves as a resource for those who want to learn more about how to take full advantage of the opportunities of AI, and to learn how the Federal government is advancing the design, development, and use of trustworthy AI….(More)”
Open Hardware: An Opportunity to Build Better Science
Report by Alison Parker et al.: “Today’s research infrastructure, including scientific hardware, is unevenly distributed in the scientific community, severely limiting collaboration, customization, and impact. Open hardware for science provides an alternative approach to reliance on expensive and proprietary instrumentation while giving “people the freedom to control their technology while sharing knowledge and encouraging commerce through the open exchange of design.”
Open hardware can be modified and recombined to build diverse libraries of tools that serve as a freely available resource for use across several disciplines. By improving access to tools, open hardware for science encourages collaboration, accelerates innovation, and improves scientific reproducibility and repeatability. Open hardware for science is often less expensive than proprietary equivalents, allowing research laboratories to stretch funding further. Beyond scientific research, open hardware has proven to benefit and impact a number of complementary policy priorities, including broadening public participation in science, accessible experiential STEM education, crisis response, and improving distributed manufacturing capabilities.
Because of recent, bipartisan progress in open science, the U.S. government is well positioned to elevate and enhance the impact of open hardware in American science. By addressing key implementation challenges and prioritizing open hardware for science, we as a nation can build better infrastructure for future science, cement U.S. scientific leadership and innovation, and help the U.S. prepare for future crises. This report addresses the need to build a stronger foundation for science by prioritizing open hardware, describes the unique benefits of open hardware alongside complementary policy priorities, and briefly lays out implementation challenges to overcome. …(More)”.