Open Data for Social Impact Framework


Framework by Microsoft: “The global pandemic has shown us the important role of data in understanding, assessing, and taking action to solve the challenges created by COVID-19. However, nearly all organizations, large and small, still struggle to make data relevant to their work. Despite the value data provides, many organizations fail to harness its power to improve outcomes.

Part of this struggle stems from the “data divide” – the gap that exists between countries and organizations that have effective access to data to help them innovate and solve problems and those that do not. To close this divide, Microsoft launched the Open Data Campaign in 2020 to help realize the promise of more open data and data collaborations that drive innovation.

One of the key lessons we’ve learned from the Campaign and the work we’ve been doing with our partners, the Open Data Institute and The GovLab, is that the ability to access and use data to improve outcomes involves much more than technological tools and the data itself. It is also important to be able to leverage and share the experiences and practices that promote effective data collaboration and decision-making. This is especially true when it comes to working with governments, multilateral organizations, nonprofits, research institutions, and others who seek to open and reuse data to address important social issues, particularly those faced by developing countries.

Put another way, just having access to data and technology does not magically create value and improve outcomes. Making the most of open data and data collaboration requires thinking about how an organization’s leadership can commit to making data useful towards its mission, defining the questions it wants to answer with data, identifying the skills its team needs to use data, and determining how best to develop and establish trust among collaborators and communities served to derive more insight and benefit from data.

The Open Data for Social Impact Framework is a tool leaders can use to put data to work to solve the challenges most important to them. Recognizing that not all data can be made publicly accessible, we see the tremendous benefits that can come from advancing more open data, whether that takes shape as trusted data collaborations or truly open and public data. We use the phrase ‘social impact’ to mean a positive change towards addressing a societal problem, such as reducing carbon emissions, closing the broadband gap, building skills for jobs, and advancing accessibility and inclusion.

We believe in the limitless opportunities that opening, sharing, and collaborating around data can create to draw out new insights, make better decisions, and improve efficiencies when tackling some of the world’s most pressing challenges….(More)”.

Data Literacy for the Public Sector: Lessons from Early Pioneers in the U.S.


Paper by Nick Hart, Adita Karkera, and Valerie Logan: “Advances in the access, collection, management, analysis, and use of data across public sector organizations have substantially contributed to steady improvements in services, efficiency of operations, and effectiveness of government programs. The experience of citizens, beneficiaries, managers, and data experts is also evolving as data becomes pervasive and more seamlessly integrated within decision-making processes. In order for agencies to effectively engage in the ever-changing data landscape, organizational data literacy capacity and program models can help ensure that individuals across the workforce can read, write, and communicate with data in the context of their roles.

Data and analytics are no longer “just” for specialists, such as data engineers and data scientists; rather, data literacy is now increasingly recognized as a core workforce competency. Fortunately, in the United States several pioneers have emerged in strategically advancing data literacy programs and activities at the organizational level, providing benefits to individuals in the public sector workforce. Pioneering programs are those that recognize data literacy as more than training. They view data literacy as a holistic program of activities to engage employees at all levels with data, develop relevant skills, and scale data literacy by augmenting employees’ skills with guided learning support and resources.

Agencies should begin by crafting the case for change. As is common with any emerging field, varying definitions and interpretations of “data literacy” are prevalent, which can affect program design. Being explicit about which problems are being solved, as well as the needs and drivers to be addressed with a data literacy program or capacity, is vital to mitigating false starts…(More)”.

The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean


OECD Report: “Governments can use artificial intelligence (AI) to design better policies and make better and more targeted decisions, enhance communication and engagement with citizens, and improve the speed and quality of public services. The Latin America and the Caribbean (LAC) region is seeking to leverage the immense potential of AI to promote the digital transformation of the public sector. The OECD, in collaboration with CAF, Development Bank of Latin America, prepared this report to help national governments in the LAC region understand the current regional baseline of activities and capacities for AI in the public sector; to identify specific approaches and actions they can take to enhance their ability to use this emerging technology for efficient, effective and responsive governments; and to collaborate across borders in pursuit of a regional vision for AI in the public sector. This report incorporates a stocktaking of each country’s strategies and commitments around AI in the public sector, including their alignment with the OECD AI Principles. It also includes an analysis of efforts to build key governance capacities and put in place critical enablers for AI in the public sector. It concludes with a series of recommendations for governments in the LAC region….(More)”.

Russian Asset Tracker


Project by OCCRP: “In the wake of Russia’s brutal assault on Ukraine, governments around the world have imposed sanctions on many of Putin’s enablers. But they have learned to keep their wealth obscured, hiring an army of lawyers to hide it in secretive bank accounts and corporate structures that reach far offshore. Figuring out who owns what, and how much of it, is a tall order even for experienced police investigators.

That’s why we decided to follow the trail, tracking down as many of these assets as possible and compiling them in a database for the public to see and use. We started with a list of names of people who “actively participate in the oppression and corruption of Putin’s regime” drawn up by the Anti-Corruption Foundation, led by opposition leader Alexei Navalny. We’ll be expanding it soon to include other Russians sanctioned for corruption or their support of Putin.

We looked for land, mansions, companies, boats, planes, and anything else of value that could be tied through documentary evidence to Putin’s circle. Some of these assets have been reported before. Some are being revealed here for the first time. Some are still to be discovered: We’ll keep searching for more properties and yachts, adding more names, and updating this database regularly. If you are aware of anything we’ve missed, please let us know by filling out this form.

For now, we’ve uncovered over $17.5 billion in assets, and counting….(More)”.
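
To make the excerpt’s notion of an asset database concrete, here is a minimal sketch of what one record might look like. This is our illustration only; the excerpt does not describe OCCRP’s actual schema, and every field name and value below is hypothetical.

```python
# One hypothetical record; every field name and value here is illustrative,
# not OCCRP's published schema.
asset_record = {
    "owner": "Name of sanctioned individual",
    "asset_type": "yacht",               # land, mansion, company, boat, plane, ...
    "estimated_value_usd": 120_000_000,  # best available valuation
    "jurisdiction": "Registered offshore",
    "evidence": ["link-to-registry-extract", "link-to-leaked-document"],
}

# A headline figure like "$17.5 billion and counting" is then a sum over records:
total_usd = sum(r["estimated_value_usd"] for r in [asset_record])
```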

Crypto, web3, and the Metaverse


Policy Brief by Sam Gilbert: “This brief aims to give policymakers an overview of crypto’s core concepts, and highlight some of the policy questions raised by its increasing adoption by citizens and organisations. It begins with a short explanation of the crypto movement’s ideological origins, offers basic primers in cryptocurrencies, blockchain, web3, NFTs, and the metaverse, and concludes with a discussion of the policy implications and suggestions for further reading. Short case studies and a glossary of crypto terminology (denoted by italics) are interspersed throughout. References are made by means of hyperlinks….(More)”.

Theory of Change Workbook: A Step-by-Step Process for Developing or Strengthening Theories of Change


USAID Learning Lab: “While over time theories of change have become synonymous with simple if/then statements, a strong theory of change should actually be a much more detailed, context-specific articulation of how we *theorize* change will happen under a program. Theories of change should articulate:

  • Outcomes: What is the change we are trying to achieve?
  • Entry points: Where is there momentum to create that change? 
  • Interventions: How will we achieve the change? 
  • Assumptions: Why do we think this will work? 

This workbook helps stakeholders work through the process of developing strong theories of change that answer the above questions. 
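
For readers who want the four components in a more structural form, the following is a minimal sketch, our own and not USAID’s, of a theory of change as a data type; the field comments restate the workbook’s four questions, and the example values are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    """The four components the workbook asks stakeholders to articulate."""
    outcomes: list[str]       # What is the change we are trying to achieve?
    entry_points: list[str]   # Where is there momentum to create that change?
    interventions: list[str]  # How will we achieve the change?
    assumptions: list[str]    # Why do we think this will work?

# Purely illustrative example values:
toc = TheoryOfChange(
    outcomes=["Smallholder incomes rise measurably by 2030"],
    entry_points=["A national extension-service reform already underway"],
    interventions=["Train extension agents in market-linkage practices"],
    assumptions=["Farmers adopt practices when peers demonstrate gains"],
)
```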

Five steps for developing a TOC

A strong theory of change process leads to stronger theory of change products, which include: 

  • the theory of change narrative: a 1-3 page description of the context, entry points within the context to enable change to happen, ultimate outcomes that will result from interventions, and assumptions that must hold for the theory of change to work; and 
  • a logic model: a visual representation of the theory of change narrative…(More)”

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence


NIST Report: “As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI)….(More)”

The 2022 AI Index: Industrialization of AI and Mounting Ethical Concerns


Blog by Daniel Zhang, Jack Clark, and Ray Perrault: “The field of artificial intelligence (AI) is at a critical crossroads, according to the 2022 AI Index, an annual study of AI impact and progress at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) led by an independent and interdisciplinary group of experts from across academia and industry: 2021 saw the globalization and industrialization of AI intensify, while the ethical and regulatory issues of these technologies multiplied….

The new report shows several key advances in AI in 2021: 

  • Private investment in AI has more than doubled since 2020, in part due to larger funding rounds. In 2020, there were four funding rounds worth $500 million or more; in 2021, there were 15.
  • AI has become more affordable and higher performing. The cost to train an image classification system has decreased by 63.6% and training times have improved by 94.4% since 2018. The median price of robotic arms has also decreased fourfold in the past six years.
  • The United States and China have dominated cross-country research collaborations on AI as the total number of AI publications continues to grow. The two countries had the greatest number of cross-country collaborations in AI papers in the last decade, producing 2.7 times more joint papers in 2021 than between the United Kingdom and China—the second highest on the list.
  • The number of AI patents filed has soared—more than 30 times higher than in 2015, showing a compound annual growth rate of 76.9%.
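
The patent figure is internally consistent. As a quick back-of-the-envelope check, our own arithmetic rather than the report’s, a roughly 30.7x increase over the six years from 2015 to 2021 implies the quoted compound annual growth rate:

```python
growth_multiple = 30.7  # assumed from "more than 30 times higher than in 2015"
years = 6               # 2015 -> 2021

# CAGR = multiple^(1/years) - 1
cagr = growth_multiple ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # prints 77.0%, consistent with the reported 76.9%
```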

At the same time, the report also highlights growing research and concerns on ethical issues as well as regulatory interests associated with AI in 2021: 

  • Large language and multimodal language-vision models are excelling on technical benchmarks, but just as their performance increases, so do their ethical issues, like the generation of toxic text.
  • Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in publications on related topics over the past four years.
  • Industry has increased its involvement in AI ethics, with 71% more publications affiliated with industry at top conferences from 2018 to 2021. 
  • The United States has seen a sharp increase in the number of proposed bills related to AI; lawmakers proposed 130 AI-related bills in 2021, compared with just one in 2015. However, the number of bills passed remains low, with only 2% ultimately becoming law in the past six years.
  • Globally, AI regulation continues to expand. Since 2015, 18 times more AI-related bills have been passed into law across the legislatures of 25 countries around the world, and mentions of AI in legislative proceedings have grown 7.7-fold in the past six years….(More)”

Mission-oriented innovation


Handbook by Vinnova: “Mission-oriented innovation aims to create change at the system level, with all actors concerned taking part in and driving development. The working method is a tool for achieving jointly set sustainability goals broadly and with great impact.

In this handbook, we describe Vinnova’s work with a number of relevant actors to jointly create mission-oriented innovation. You can follow how, during 2019-2021, these actors tested and developed the working method in two areas: food and mobility. It is both an account of how mission-oriented innovation can be used as a tool and a guide with concrete tips on how to do it…(More)”.

The #Data4Covid19 Review


Press Release: “The Governance Lab (The GovLab), an action research center at New York University Tandon School of Engineering, with the support of the Knight Foundation, today announced the launch of The #Data4Covid19 Review. Through this initiative, The GovLab will evaluate how select countries used data to respond to the COVID-19 pandemic. The findings will be used to identify lessons that can be applied to future data-driven crisis management.

The initiative launches on the second anniversary of the declaration of COVID-19 as a global pandemic and the lockdown restrictions that followed. Countries around the world have since undertaken varied approaches to minimizing the spread of the virus and managing the aftermath. Many of these efforts are driven by data. While the COVID-19 pandemic continues to be a global challenge, there have been few attempts to holistically review and evaluate the role data use played in the global pandemic response.

The #Data4Covid19 Review aims to fill this gap in the current research by providing an assessment of how data was used during the different waves of the pandemic and guidance for improving future data systems. The GovLab will develop case studies and compare a select group of countries from around the world, with the input and support of a distinguished advisory group of public health, technology, and human rights experts. These case studies will investigate how data use shaped COVID-19 responses. Outputs will include recommendations for decision makers looking to improve their capacity to use data responsibly in crisis management, as well as an assessment framework that could be used when designing future data-driven crisis responses. By learning from our response to the pandemic, we can better understand how data should be used in crisis management…(More)”.