Data Science for Social Good: Philanthropy and Social Impact in a Complex World


Book edited by Ciro Cattuto and Massimo Lapucci: “This book is a collection of insights by thought leaders at first-mover organizations in the emerging field of ‘Data Science for Social Good’. It examines the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies pushing forward this new sector.

TABLE OF CONTENTS


Introduction; By Massimo Lapucci

The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change Through Data; By Stefaan G. Verhulst

UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect; By Dr. Paula Hidalgo-Sanchis

Building the Field of Data for Good; By Claudia Juech

When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good; By Nuria Oliver

Data for Good: Unlocking Privately-Held Data to the Benefit of the Many; By Alberto Alemanno

Building a Funding Data Ecosystem: Grantmaking in the UK; By Rachel Rank

A Reflection on the Role of Data for Health: COVID-19 and Beyond; By Stefan E. Germann and Ursula Jasper…(More)”.

The fight against disinformation and the right to freedom of expression


Report for the European Parliament: “This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, aims to strike a balance between regulatory measures to tackle disinformation and the protection of freedom of expression. It explores the European legal framework and analyses the roles of all stakeholders in the information landscape. The study offers recommendations to reform the attention-based, data-driven information landscape and regulate platforms’ rights and duties relating to content moderation…(More)”.

A crowdsourced spreadsheet is the latest tool in Chinese tech worker organizing


Article by JS: “This week, thousands of Chinese tech workers are sharing information about their working schedules in an online spreadsheet. Their goal is to inform each other and new employees about overtime practices at different companies. 

This initiative for work-schedule transparency, titled Working Time, has gone viral. As of Friday—just three days after the project launched—the spreadsheet has already had millions of views and over 6,000 entries. The creators also set up group chats on the Tencent-owned messaging platform, QQ, to invite discussion about the project—over 10,000 people have joined as participants.

This initiative comes after the explosive 996.ICU campaign of 2019, in which hundreds of thousands of tech workers in the country joined an online effort to demand an end to the 72-hour work week—9am to 9pm, 6 days a week.

This year, multiple tech companies—with encouragement from the government—have ended overtime work practices that forced employees to work on Saturdays (or in some cases, alternating Saturdays). This has effectively ended 996, which was illegal to begin with. While an improvement, the data collected in this online spreadsheet shows that most tech workers still work long hours, either “1095” or “11105” (10am to 9pm or 11am to 10pm, 5 days a week). The spreadsheet also shows a non-negligible number of workers still working 6 days a week.

Like the 996.ICU campaign, the creators of this spreadsheet are using GitHub to circulate and share info about the project. The first commit was made on Tuesday, October 12th. Only a few days later, the repo has been starred over 9,500 times….(More)”.

The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations


Paper by Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi: “In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment….(More)”.
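The emissions side of the trade-off the authors highlight can be approximated with simple arithmetic: the energy drawn by the training hardware, scaled by datacenter overhead and the carbon intensity of the local grid. The sketch below is a back-of-the-envelope illustration of that calculation, not a method from the paper; the power draw, PUE, and grid-intensity figures are assumptions chosen for the example.

```python
# Back-of-the-envelope estimate of the CO2-equivalent emitted by one
# AI training run. Illustrative only: the power draw, PUE, and carbon
# intensity below are assumptions, not figures from the paper.

def training_co2e_kg(gpu_count: int,
                     gpu_power_watts: float,
                     training_hours: float,
                     pue: float = 1.5,                 # datacenter Power Usage Effectiveness (assumed)
                     grid_kgco2e_per_kwh: float = 0.4  # grid carbon intensity (assumed)
                     ) -> float:
    """Return estimated kilograms of CO2e for one training run."""
    energy_kwh = gpu_count * gpu_power_watts / 1000 * training_hours * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Example: 8 GPUs at 300 W each, training for two weeks.
print(f"{training_co2e_kg(8, 300, 24 * 14):.0f} kg CO2e")
```

Even a rough estimate like this makes the paper’s point concrete: emissions scale linearly with hardware count, training time, and the carbon intensity of the electricity used, which is why the authors call for more evidence on each factor.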

Who do the people want to govern?


Paper by John R. Hibbing et al.: “Relative to the well-developed theory and extensive survey batteries on people’s preferences for substantive policy solutions, scholarly understanding of people’s preferences for the mechanisms by which policies should be adopted is disappointing. Theory rarely goes beyond the assumption that people would prefer to rule themselves rather than leave decisions up to elites, and measurement rests largely on four items that are not up to the task. In this article, we seek to provide a firmer footing for “process” research by 1) offering an alternative theory holding that people actually want elites to continue to make important political decisions but want them to do so only after acquiring a deep appreciation for the real-world problems facing regular people, and 2) developing and testing a battery of over 50 survey items, appropriate for cross-national research, that extends understanding of how the people want political decisions to be made…(More)”.

Data Stewardship Re-Imagined — Capacities and Competencies


Blog and presentation by Stefaan Verhulst: “In ways both large and small, COVID-19 has forced us to re-examine every aspect of our political, social, and economic systems. Among the many lessons policymakers have learned is that existing methods for using data are often insufficient for our most pressing challenges. In particular, we need to find new, innovative ways of tapping into the potential of privately held and siloed datasets that nonetheless contain tremendous public good potential, including complementing and extending official statistics. Data collaboratives are an emerging set of methods for accessing and reusing data that offer tremendous opportunities in this regard. In the last five years, we have studied and initiated numerous data collaboratives, in the process assembling a collection of over 200 example case studies to better understand their possibilities.

Among our key findings is the vital importance and essential role that needs to be played by Data Stewards.

Data stewards do not represent an entirely new profession; rather, their role could be understood as an extension and re-definition of existing organizational positions that manage and interact with data. Traditionally, the role of a data officer was limited either to data integrity or to the narrow context of internal data governance and management, with a strong emphasis on technical competencies. This narrow conception is no longer sufficient, especially given the proliferation of data and the increasing potential of data sharing and collaboration. As such, we call for a re-imagination of data stewardship to encompass a wider range of functions and responsibilities, directed at leveraging data assets toward addressing societal challenges and improving people’s lives.

DATA STEWARDSHIP: functions and competencies to enable access to and re-use of data for public benefit in a systematic, sustainable, and responsible way.

In our vision, data stewards are professionals empowered to create public value (including official statistics) by re-using data and data expertise, identifying opportunities for productive cross-sectoral collaboration, and proactively requesting or enabling functional access to data, insights, and expertise. Data stewards are active in both the public and private sectors, promoting trust within and outside their organizations. They are essential to data collaboratives by providing functional access to unlock the potential of siloed data sets. In short, data stewards form a new — and essential — link in the data value chain….(More)”.

What Universities Owe Democracy


Book by Ronald J. Daniels with Grant Shreve and Phillip Spector: “Universities play an indispensable role within modern democracies. But this role is often overlooked or too narrowly conceived, even by universities themselves. In What Universities Owe Democracy, Ronald J. Daniels, the president of Johns Hopkins University, argues that—at a moment when liberal democracy is endangered and more countries are heading toward autocracy than at any time in generations—it is critical for today’s colleges and universities to reestablish their place in democracy.

Drawing upon fields as varied as political science, economics, history, and sociology, Daniels identifies four distinct functions of American higher education that are key to liberal democracy: social mobility, citizenship education, the stewardship of facts, and the cultivation of pluralistic, diverse communities. By examining these roles over time, Daniels explains where colleges and universities have faltered in their execution of these functions—and what they can do going forward.

Looking back on his decades of experience leading universities, Daniels offers bold prescriptions for how universities can act now to strengthen democracy. For those committed to democracy’s future prospects, this book is a vital resource…(More)”.

Slowed canonical progress in large fields of science


Paper by Johan S. G. Chu and James A. Evans: “The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas…(More)”.

Democratizing and technocratizing the notice-and-comment process


Essay by Reeve T. Bull: “…When enacting the Administrative Procedure Act, Congress was not entirely clear on the extent to which it intended agencies to take into account public opinion as reflected in comments or merely to sift the comments for relevant information. This tension has simmered for years, but it never posed a major problem since the vast majority of rules garnered virtually no public interest.

Even now, most rules still generate a very anemic response. Internet submission has vastly simplified the process of filing a comment, however, and a handful of rules generate “mass comment” responses of hundreds of thousands or even millions of submissions. In these cases, as the net neutrality incident showed, individual commenters and even private firms have begun to manipulate the process by using computer algorithms to generate comments and, in some instances, affix false identities. As a result, agencies can no longer ignore the problem.

Nevertheless, technological progress is not necessarily a net negative for agencies. It also presents extraordinary opportunities to refine the notice-and-comment process and generate more valuable feedback. Moreover, if properly channeled, technological improvements can actually provide the remedies to many of the new problems that agencies have encountered. And other, non-technological reforms can address most, if not all of, the other newly emerging challenges. Indeed, if agencies are open-minded and astute, they can both “democratize” the public participation process, creating new and better tools for ascertaining public opinion (to the extent it is relevant in any given rule), and “technocratize” it at the same time, expanding and perfecting avenues for obtaining expert feedback….

As with many aspects of modern life, technological change that once was greeted with naive enthusiasm has now created enormous challenges. As a recent study for the Administrative Conference of the United States (for which I served as a co-consultant) has found, agencies can deploy technological tools to address at least some of these problems. For instance, so-called “deduplication software” can identify and group comments that come from different sources but that contain large blocks of identical text and therefore were likely copied from a common source. Bundling these comments can greatly reduce processing time. Agencies can also undertake various steps to combat unwanted computer-generated or falsely attributed comments, including quarantining such comments and issuing commenting policies discouraging their submission. A recently adopted set of ACUS recommendations partly based on the report offers helpful guidance to agencies on this front.
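As a rough illustration of how such grouping can work (a minimal sketch under assumed inputs, not the software agencies actually deploy), comments sharing long runs of identical text can be bucketed by hashing overlapping word n-grams, often called shingles:

```python
# Minimal sketch of comment deduplication by shared text blocks.
# Illustrative only: real deduplication software is more sophisticated.
import hashlib
import re
from collections import defaultdict

def shingles(text: str, n: int = 10):
    """Yield a hash for every run of n consecutive words, lowercased."""
    words = re.findall(r"[a-z']+", text.lower())
    for i in range(len(words) - n + 1):
        chunk = " ".join(words[i:i + n])
        yield hashlib.sha1(chunk.encode()).hexdigest()

def group_duplicates(comments: dict[str, str], n: int = 10) -> dict[str, set[str]]:
    """Map each shingle hash to the IDs of the comments that contain it,
    keeping only hashes shared by more than one comment (likely copies)."""
    index = defaultdict(set)
    for comment_id, text in comments.items():
        for h in shingles(text, n):
            index[h].add(comment_id)
    return {h: ids for h, ids in index.items() if len(ids) > 1}
```

Comments landing in the same bucket were likely copied from a common source and can be processed together, which is what makes mass-comment dockets tractable.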

Unfortunately, as technology evolves, new challenges will emerge. As noted in the ACUS report, agencies are relatively unconcerned with duplicate comments since they possess the technological tools to process them. Yet artificial intelligence has evolved to the point that computer algorithms can produce comments that are indistinguishable from human-written ones and that at least facially appear to contain unique and relevant information. In one recent study, an algorithm generated and submitted…(More)”

Facial Recognition Technology: Responsible Use Principles and the Legislative Landscape


Report by James Lewis: “…Criticism of FRT is too often based on a misunderstanding of the technology. A good starting point to change this is to clarify the distinction between FRT and facial characterization. FRT compares two images and asks how likely it is that one image is the same as the other. The best FRT is more accurate than humans at matching images. In contrast, “facial analysis” or “facial characterization” examines an image and then tries to characterize the person it shows by gender, age, or race. Much of the critique of FRT is actually about facial characterization. Claims about FRT inaccuracy are either out of date or mistakenly refer to facial characterization. Of course, accuracy depends on how FRT is used. When picture quality is poor, accuracy is lower but often still better than that of the average human. A 2021 report by the National Institute of Standards and Technology (NIST) found that accuracy had improved dramatically and that more accurate systems were less likely to make errors based on race or gender. This confusion hampers the development of effective rules.
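The distinction is easy to see in code. In a typical FRT pipeline, recognition reduces to comparing two face embeddings against a similarity threshold; the sketch below illustrates that one-to-one comparison. The embedding model is left abstract, and the threshold value is an assumption for illustration, not a figure from the report.

```python
# Minimal sketch of face *recognition* as a one-to-one comparison:
# embed both images, then ask whether the embeddings are similar enough
# to be the same person. The threshold here is an assumed example value.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(embedding_a: np.ndarray,
                embedding_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Verification: are these two face embeddings likely the same person?"""
    return cosine_similarity(embedding_a, embedding_b) >= threshold
```

Nothing in this comparison infers gender, age, or race; that is the separate task of facial characterization, which the report argues should not be conflated with FRT.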

Some want to ban FRT, but it will continue to be developed and deployed because of the convenience for consumers and the benefits to public safety. Continued progress in sensors and artificial intelligence (AI) will increase the availability and performance of the technologies used for facial recognition. Stopping the development of FRT would require stopping the development of AI, and that is neither possible nor in the national interest. This report provides a list of guardrails to guide the development of law and regulation for civilian use….(More)”.