When Launching a Collaboration, Keep It Agile


Essay by the Stakeholder Alignment Collaborative: “Conventional wisdom holds that large-scale societal challenges require large-scale responses. By contrast, we argue that progress on major societal challenges can and often should begin with small, agile initiatives—minimum viable consortia (MVCs)—that learn and adapt as they build the scaffolding for large-scale change. MVCs can address societal challenges by overcoming institutional inertia, opposition, capability gaps, and other barriers because they require less energy for activation, reveal dead ends early on, and can more easily adjust and adapt over time.

Large-scale societal challenges abound, and organizations and institutions are increasingly looking for ways to deal with them. For example, the National Academy of Engineering (NAE) has identified 14 Grand Societal Challenges for “sustaining civilization’s continuing advancement while still improving the quality of life” in the 21st century. They include making solar energy economical, developing carbon sequestration methods, advancing health informatics, and securing cyberspace. The United Nations has set 17 Sustainable Development Goals (SDGs) to achieve by 2030 for a better future for humanity. They include everything from eliminating hunger to reducing inequality.

Tackling such universal goals requires large-scale cooperation, because existing organizations and institutions simply do not have the ability to resolve these challenges independently. Further note that the NAE’s announcement of the challenges stated that “governmental and institutional, political and economic, and personal and social barriers will repeatedly arise to impede the pursuit of solutions to problems.” The United Nations included two enabling SDGs: “peace, justice, and strong institutions” and “partnership for the goals.” The question is how to bring such large-scale partnerships and institutional change into existence.

We are members of the Stakeholder Alignment Collaborative, a research consortium of scholars at different career stages, spanning multiple fields and disciplines. We study collaboration collaboratively and maintain a very flat structure. We have published on multistakeholder consortia associated with science and provided leadership and facilitation for the launch and sustainment of many of these consortia. Based on our research into the problem of developing large-scale, multistakeholder partnerships, we believe that MVCs provide an answer.

MVCs are less vulnerable to the many barriers to large-scale solutions, better able to forge partnerships, and more agile in making needed adjustments. To demonstrate these points, we focus on examples of MVCs in the domain of scientific research data and computing infrastructure. Research data are essential for virtually all societal challenges, and an upsurge of multistakeholder consortia has occurred in this domain. But the MVC concept is not limited to these challenges, nor to digitally oriented settings. We have chosen this sphere because it offers a diversity of MVC examples for illustration…(More)”. (See also “The Potential and Practice of Data Collaboratives for Migration“).

Theory of Change Workbook: A Step-by-Step Process for Developing or Strengthening Theories of Change


USAID Learning Lab: “While over time theories of change have become synonymous with simple if/then statements, a strong theory of change should actually be a much more detailed, context-specific articulation of how we *theorize* change will happen under a program. Theories of change should articulate:

  • Outcomes: What is the change we are trying to achieve?
  • Entry points: Where is there momentum to create that change? 
  • Interventions: How will we achieve the change? 
  • Assumptions: Why do we think this will work? 

This workbook helps stakeholders work through the process of developing strong theories of change that answer the above questions. 
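
To make these four components concrete, here is a minimal sketch of how a theory of change could be captured as a structured record. The Python representation, field names, and sample program are illustrative assumptions, not part of the USAID workbook:

```python
from dataclasses import dataclass


@dataclass
class TheoryOfChange:
    """Illustrative container for the four components listed above."""
    outcomes: list[str]       # What is the change we are trying to achieve?
    entry_points: list[str]   # Where is there momentum to create that change?
    interventions: list[str]  # How will we achieve the change?
    assumptions: list[str]    # Why do we think this will work?

    def narrative(self) -> str:
        """Render a skeleton for the 1-3 page narrative the workbook describes."""
        sections = [
            ("Outcomes", self.outcomes),
            ("Entry points", self.entry_points),
            ("Interventions", self.interventions),
            ("Assumptions", self.assumptions),
        ]
        return "\n".join(f"{label}: {'; '.join(items)}" for label, items in sections)


# Hypothetical program, used only to exercise the sketch
toc = TheoryOfChange(
    outcomes=["Increased smallholder incomes"],
    entry_points=["Active farmer cooperatives"],
    interventions=["Training on post-harvest storage"],
    assumptions=["Market prices remain stable"],
)
print(toc.narrative())
```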

Five steps for developing a TOC

A strong theory of change process leads to stronger theory of change products, which include: 

  • the theory of change narrative: a 1-3 page description of the context, entry points within the context to enable change to happen, ultimate outcomes that will result from interventions, and assumptions that must hold for the theory of change to work and 
  • a logic model: a visual representation of the theory of change narrative…(More)”

The 2022 AI Index: Industrialization of AI and Mounting Ethical Concerns


Blog by Daniel Zhang, Jack Clark, and Ray Perrault: “The field of artificial intelligence (AI) is at a critical crossroads, according to the 2022 AI Index, an annual study of AI impact and progress at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) led by an independent and interdisciplinary group of experts from across academia and industry: 2021 saw the globalization and industrialization of AI intensify, while the ethical and regulatory issues of these technologies multiplied….

The new report shows several key advances in AI in 2021: 

  • Private investment in AI has more than doubled since 2020, in part due to larger funding rounds. In 2020, there were four funding rounds worth $500 million or more; in 2021, there were 15.
  • AI has become more affordable and higher performing. The cost to train an image classification system has decreased by 63.6% and training times have improved by 94.4% since 2018. The median price of robotic arms has also decreased fourfold in the past six years.
  • The United States and China have dominated cross-country research collaborations on AI as the total number of AI publications continues to grow. The two countries had the greatest number of cross-country collaborations in AI papers in the last decade, producing 2.7 times more joint papers in 2021 than between the United Kingdom and China—the second highest on the list.
  • The number of AI patents filed has soared—more than 30 times higher than in 2015, showing a compound annual growth rate of 76.9% (see the arithmetic check after this list).
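
As a quick arithmetic check on the growth figures above, a compound annual growth rate can be recovered from the total multiple and the number of years. The sketch below assumes a multiple of roughly 30.6x (consistent with the report's "more than 30 times" figure); it is a worked illustration, not exact Index data:

```python
def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by total growth of `multiple` over `years`."""
    return multiple ** (1 / years) - 1


# Patent filings grew "more than 30 times" between 2015 and 2021 (six years);
# a multiple of about 30.6x reproduces the reported 76.9% CAGR.
print(f"{cagr(30.6, 6):.1%}")  # -> 76.9%

# A 63.6% decrease in training cost means paying ~36.4% of the 2018 price in 2021.
print(f"{1 - 0.636:.1%}")  # -> 36.4%
```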

At the same time, the report also highlights growing research and concerns on ethical issues as well as regulatory interests associated with AI in 2021: 

  • Large language and multimodal language-vision models are excelling on technical benchmarks, but just as their performance increases, so do their ethical issues, like the generation of toxic text.
  • Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in publications on related topics over the past four years.
  • Industry has increased its involvement in AI ethics, with 71% more publications affiliated with industry at top conferences from 2018 to 2021. 
  • The United States has seen a sharp increase in the number of proposed bills related to AI; lawmakers proposed 130 laws in 2021, compared with just 1 in 2015. However, the number of bills passed remains low, with only 2% ultimately becoming law in the past six years.
  • Globally, AI regulation continues to expand. Since 2015, 18 times more bills related to AI have been passed into law in the legislatures of 25 countries around the world, and mentions of AI in legislative proceedings have also grown 7.7 times in the past six years…(More)”

The need to represent: How AI can help counter gender disparity in the news


Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness on issues with misrepresentation of women in the news. 

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech…(More)”.
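
As an illustration of the kind of supervised pipeline such projects build on (not any of the teams' actual models), text can be vectorized and classified with a standard baseline of TF-IDF features and logistic regression; the examples and labels below are invented toy data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples; real projects train on thousands of annotated tweets
texts = [
    "She should stay in the kitchen, not in congress",
    "Great interview with the senator this morning",
    "Nobody wants to hear a woman talk about economics",
    "The committee published its budget report today",
]
labels = [1, 0, 1, 0]  # 1 = misogynistic/hateful, 0 = neutral

# TF-IDF features plus logistic regression: a common baseline text classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Women have no place in politics"]))  # likely [1]
```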

GovTech Case Studies: Solutions that Work


World Bank: “A series of GovTech case study notes — GovTech Case Studies: Solutions that Work — provides a better understanding of GovTech focus areas by introducing concrete experiences of adopting GovTech solutions, lessons learned, and what worked or did not work.

Take a sneak peek at the first set of case studies which explores GovTech solutions implemented in Brazil, Cambodia, Georgia, Lesotho, Myanmar, and Nigeria….

Read Brazil GovTech Case Study…(More)”.

Crowdsourcing and COVID-19: How public administrations mobilize crowds to find solutions to problems posed by the pandemic


Paper by Ana Colovic, Annalisa Caloffi, and Federica Rossi: “We discuss how public administrations have used crowdsourcing to find solutions to specific problems posed by the COVID-19 pandemic, and to what extent crowdsourcing has been instrumental in promoting open innovation and service co-creation. We propose a conceptual typology of crowdsourcing challenges based on the degree of their openness and collaboration with the crowd that they establish. Using empirical evidence collected in 2020 and 2021, we examine the extent to which these types have been used in practice. We discuss each type of crowdsourcing challenge identified and draw implications for public policy…(More)”.

“Medical Matchmaking” provides personalized insights


Matthew Hempstead at Springwise: “Humanity is a collection of unique individuals who represent a complex mixture of medical realities. Yet traditional medicine is based on a ‘law of averages’ – treating patients based on generalisations about the population as a whole. This law of averages can be misleading, and in a world where the average American spends 52 hours looking for health information online each year, generalisations create misunderstandings. Information provided by ‘Dr. Google’ or Facebook is inadequate and doesn’t account for the specific characteristics of each individual.

Israeli startup Alike has come up with a novel multidisciplinary solution to this problem – using health data and machine learning to match people who are alike on a holistic level. The AI’s matchmaking takes into account considerations such as co-morbidities, lifestyle factors, age, and gender.
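
To illustrate the matchmaking idea (this is a sketch, not Alike's proprietary algorithm), patients can be encoded as numeric feature vectors and matched by nearest-neighbour search; the feature encoding below is invented for the example:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row is one de-identified patient: [age, sex flag, diabetes flag,
# hypertension flag, activity score]. The encoding is invented for this sketch;
# real systems would normalise features before measuring distance.
patients = np.array([
    [34, 0, 1, 0, 0.8],  # patient A
    [36, 0, 1, 0, 0.7],  # patient B, clinically similar to A
    [71, 1, 0, 1, 0.2],  # patient C, very different profile
])

# Build a nearest-neighbour index over the cohort
nn = NearestNeighbors(n_neighbors=2).fit(patients)

# Find the closest "Alikes" for patient A (the first hit is A itself)
distances, indices = nn.kneighbors(patients[:1])
print(indices)  # [[0 1]] -> patient B is A's nearest match
```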

Patients are then put into contact with an anonymised community of ‘Alikes’ – people who share their exact clinical journey, lifestyle, and interests. Members of this community can share or receive relevant and personalised insights that help them to better manage their conditions.

The new technology is made possible by regulatory changes that give everyone instant electronic access to their personal health records. The app allows users to automatically create a health profile through a direct connection with their health provider.

Given the sensitive nature of medical information, Alike has put in place stringent privacy controls. The data shared on the app is completely de-identified, which means all personal identifiers are removed. Every user is verified by their healthcare provider, and further measures including data encryption and data fuzzing are employed. This means that patients can benefit from the insights of other patients while maintaining their privacy…(More)”.
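
Data fuzzing generally means perturbing values so that individual records cannot be recovered exactly. A common approach adds calibrated random noise, as in the differential-privacy-style sketch below; this is an assumed illustration, since the excerpt does not specify Alike's exact method:

```python
import numpy as np

rng = np.random.default_rng(seed=42)


def fuzz(value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon, as in differential privacy.

    A smaller epsilon adds more noise and therefore gives stronger privacy.
    """
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)


true_age = 42
print(fuzz(true_age, sensitivity=1.0, epsilon=0.5))  # near 42, but not exactly 42
```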

The New Rules of Data Privacy


Essay by Hossein Rahnama and Alex “Sandy” Pentland: “The data harvested from our personal devices, along with our trail of electronic transactions and data from other sources, now provides the foundation for some of the world’s largest companies. Personal data is also the wellspring for millions of small businesses and countless startups, which turn it into customer insights, market predictions, and personalized digital services. For the past two decades, the commercial use of personal data has grown in wild-west fashion. But now, because of consumer mistrust, government action, and competition for customers, those days are quickly coming to an end.

For most of its existence, the data economy was structured around a “digital curtain” designed to obscure the industry’s practices from lawmakers and the public. Data was considered company property and a proprietary secret, even though the data originated from customers’ private behavior. That curtain has since been lifted, and a convergence of consumer, government, and market forces is now giving users more control over the data they generate. Instead of treating personal data as a resource that can be freely harvested, countries in every region of the world have begun to treat it as an asset owned by individuals and held in trust by firms.

This will be a far better organizing principle for the data economy. Giving individuals more control has the potential to curtail the sector’s worst excesses while generating a new wave of customer-driven innovation, as customers begin to express what sort of personalization and opportunity they want their data to enable. And while adtech firms in particular will be hardest hit, any firm with substantial troves of customer data will have to make sweeping changes to its practices, particularly large firms such as financial institutions, healthcare firms, utilities, and major manufacturers and retailers.

Leading firms are already adapting to the new reality as it unfolds. The key to this transition — based upon our research on data and trust, and our experience working on this issue with a wide variety of firms — is for companies to reorganize their data operations around the new fundamental rules of consent, insight, and flow…(More)”.

Toward A Periodic Table of Open Data in Cities


Essay by Andrew Zahuranec, Adrienne Schmoeker, Hannah Chafetz and Stefaan G Verhulst: “In 2016, The GovLab studied the impact of open data in countries around the world. Through a series of case studies examining the value of open data across sectors, regions, and types of impact, we developed a framework for understanding the factors and variables that enable or complicate the success of open data initiatives. We called this framework the Periodic Table of Open Impact Factors.

Over the years, this tool has attracted substantial interest from data practitioners around the world. However, given the countless developments since 2016, we knew it needed to be updated and made relevant to our current work on urban innovation and the Third Wave of Open Data.

Last month, the Open Data Policy Lab held a collaborative discussion with our City Incubator participants and Council of Mentors. In a workshop setting with structured brainstorming sessions, we introduced the periodic table to participants and asked how this framework could be applied to city governments. We knew that city governments often have fewer resources than other levels of government, yet benefit from a potentially stronger connection to the constituents they serve. How might this periodic table be different at a city government level? We gathered participant and mentor feedback and worked to revise the table.

Today, to celebrate NYC Open Data Week 2022, the celebration of open data in New York, we are happy to release this refined model with a distinctive focus on developing open data strategies within cities. The Open Data Policy Lab is pleased to present the Periodic Table of Open Data in Cities.

The Periodic Table of Open Data in Cities

Separated into five categories — Problem and Demand Definition, Capacity and Culture, Governance and Standards, Partnerships, and Risks and Ethical Pitfalls — this table provides a summary of some of the major issues that open data practitioners can think about as they develop strategies for release and use of open data in the communities they serve. We sought to specifically incorporate the needs of city incubators (as determined by our workshop), but the table can be relevant to a variety of stakeholders.
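
For practitioners who want a lightweight way to work with the framework, the five categories can be held as a simple mapping. The guiding questions below are our own paraphrase of each category name, not text from the table itself:

```python
# The five categories named above, each paired with the guiding question it
# raises. The questions are our paraphrase; the per-category elements
# themselves are described in the original post.
periodic_table = {
    "Problem and Demand Definition": "What problem would open data solve, and who is asking for it?",
    "Capacity and Culture": "Do we have the skills, resources, and institutional support to deliver?",
    "Governance and Standards": "What rules, formats, and oversight keep the data usable and trusted?",
    "Partnerships": "Which collaborators can extend the reach and use of the data?",
    "Risks and Ethical Pitfalls": "What harms could release or reuse create, and how do we mitigate them?",
}

# Example use: a checklist-style review of a draft open data strategy
for category, question in periodic_table.items():
    print(f"- {category}: {question}")
```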

While descriptions for each of these elements are included below, the Periodic Table of Open Data in Cities is an iterative framework, and new elements will be added or adjusted over time in accordance with emerging practices…(More)”.

The #Data4Covid19 Review


Press Release: “The Governance Lab (The GovLab), an action research center at New York University Tandon School of Engineering, with the support of the Knight Foundation, today announced the launch of The #Data4Covid19 Review. Through this initiative, The GovLab will evaluate how select countries used data to respond to the COVID-19 pandemic. The findings will be used to identify lessons that can be applied to future data-driven crisis management.

The initiative launches on the second anniversary of the declaration of COVID-19 as a global pandemic and the lockdown restrictions that followed. Countries around the world have since undertaken varied approaches to minimizing the spread of the virus and managing the aftermath. Many of these efforts are driven by data. While the COVID-19 pandemic continues to be a global challenge, there have been few attempts to holistically review and evaluate the role data use played in the global pandemic response.

The #Data4Covid19 Review aims to fill this gap in the current research by providing an assessment of how data was used during the different waves of the pandemic and guidance for the improvement of future data systems. The GovLab will develop case studies and compare a select group of countries from around the world, with the input and support of a distinguished advisory group of public health, technology, and human rights experts. These case studies will investigate how data use impacted COVID-19 responses. Outputs will include recommendations for decision makers looking to improve their capacity to use data in a responsible way for crisis management and an assessment framework that could be used when designing future data-driven crisis responses. By learning from our response to the pandemic, we can better understand how data should be used in crisis management…(More)”.