See Plastic in a National Park? Log It on This Website for Science


Article by Angely Mercado: “You’re hiking through glorious nature when you see it—a dirty, squished plastic water bottle along the trail. Instead of picking it up and impotently cursing the litterer, you can now take another small helpful step—you can report the trash to a new data project that aims to inspire policy change. Environmental nonprofit 5 Gyres is asking national park visitors in the U.S. to log trash they see through a new site called TrashBlitz.

The organization, which is dedicated to reducing plastic pollution, created TrashBlitz to gather data on how much, and what kind, of plastic and other litter is clogging our parks. They want to encourage realistic plastic pollution reduction plans for all 63 national parks.

Once registered on the TrashBlitz website, park visitors can specify the types of trash that they’ve spotted, such as whether the discarded item was used for food packaging. According to 5 Gyres, the data will contribute to a report to be published this fall on the top items discarded, the materials, and the brands that have created the most waste across national parks…(More)”.

How Three False Starts Stifle Open Social Science


Article by Patrick Dunleavy: “Open social science is new, and like any beginner is still finding its way. However, to a large extent we are still operating in the shadow of open science (OS) in the science, technology, engineering, mathematics, and medicine (STEMM) disciplines. Nearly a decade ago an influential Royal Society report argued:

‘Open science is often effective in stimulating scientific discovery, [and] it may also help to deter, detect and stamp out bad science. Openness facilitates a systemic integrity that is conducive to early identification of error, malpractice and fraud, and therefore deters them. But this kind of transparency only works when openness meets standards of intelligibility and assessability – where there is intelligent openness’.

More recently, the Turing Way project defined open science far more broadly as a range of measures encouraging reproducibility, replication, robustness, and the generalisability of research. Alongside CIVICA researchers we have put forward an agenda for progressing open social science in line with these ambitions. Yet for open social science to take root it must develop an ‘intelligent’ concept of openness, one that is adapted to the wide range of concerns that our discipline group addresses, and is appropriate for the sharply varying conditions in which social research must be carried out.

This task has been made more difficult by a number of premature and partial efforts to ‘graft’ an ‘open science’ concept from STEMM disciplines onto the social sciences. Three false starts have already been made and have created misconceptions about open social science. Below, I want to show how each of the strategies may actually work to obstruct the wider development of open social science.

Bricolage – Reading across directly from STEMM

This approach sees open social science as just about picking up (not quite at random) the best-known or most discussed individual components of open science in STEMM disciplines  – focusing on specific things like open access publishing, the FAIR principles for data management, replication studies, or the pre-registration of hypotheses…(More)”.

How Does the Public Sector Identify Problems It Tries to Solve with AI?


Article by Maia Levy Daniel: “A correct analysis of the implementation of AI in a particular field or process needs to start by identifying if there actually is a problem to be solved. For instance, in the case of job matching, the problem would be related to the levels of unemployment in the country, and presumably addressing imbalances in specific fields. Then, would AI be the best way to address this specific problem? Are there any alternatives? Is there any evidence that shows that AI would be a better tool? Building AI systems is expensive and the funds being used by the public sector come from taxpayers. Are there any alternatives that could be less expensive? 

Moreover, governments must understand from the outset that these systems could involve potential risks for civil and human rights. Thus, it should be justified in detail why the government might be choosing a more expensive or riskier option. A potential guide to follow is the one developed by the UK’s Office for Artificial Intelligence on how to use AI in the public sector. This guide includes a section specifically devoted to how to assess whether AI is the right solution to a problem.

AI is such a buzzword that it has become appealing for governments to use as a solution to any public problem, without even starting to look for available alternatives. Although automation could accelerate decision-making processes, speed should not be prioritized over quality or over human rights protection. As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. By incorporating digital technologies in decision-making processes, temporal norms and values that govern them are impacted, disrupting prior norms, re-calibrating balanced trade-offs, or displacing automation’s costs. As Susser suggests, speed is not necessarily bad; however, “using computational tools to speed up (or slow down) certain decisions is not a ‘neutral’ adjustment without further explanations.” 

So, conducting a thorough diagnosis including the identification of the specific problem to address and the best way to address it is key to protecting citizens’ rights. And this is why transparency must be mandatory. As citizens, we have a right to know how these processes are being conceived and designed, the reasons governments choose to implement technologies, as well as the risks involved.

In addition, maybe a good way to ultimately approach the systemic problem and change the structure of incentives is to stop using the pretentious terms “artificial intelligence”, “AI”, and “machine learning”, as Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law Center, announced the Center would do. As Tucker explained, these terms are confusing for the average person, and the way they are typically employed makes us think it’s a machine rather than human beings making the decisions. By removing marketing terms from the equation and giving more visibility to the humans involved, these technologies may not ultimately seem so exotic…(More)”.

Mapping Urban Trees Across North America with the Auto Arborist Dataset


Google Blog: “Over four billion people live in cities around the globe, and while most people interact daily with others — at the grocery store, on public transit, at work — they may take for granted their frequent interactions with the diverse plants and animals that comprise fragile urban ecosystems. Trees in cities, called urban forests, provide critical benefits for public health and wellbeing and will prove integral to urban climate adaptation. They filter air and water, capture stormwater runoff, sequester atmospheric carbon dioxide, and limit erosion and drought. Shade from urban trees reduces energy-expensive cooling costs and mitigates urban heat islands. In the US alone, urban forests cover 127M acres and produce ecosystem services valued at $18 billion. But as the climate changes, these ecosystems are increasingly under threat.

Urban forest monitoring — measuring the size, health, and species distribution of trees in cities over time — allows researchers and policymakers to (1) quantify ecosystem services, including air quality improvement, carbon sequestration, and benefits to public health; (2) track damage from extreme weather events; and (3) target planting to improve robustness to climate change, disease and infestation.

However, many cities lack even basic data about the location and species of their trees. …

Today we introduce the Auto Arborist Dataset, a multiview urban tree classification dataset that, at ~2.6 million trees and >320 genera, is two orders of magnitude larger than those in prior work. To build the dataset, we pulled from public tree censuses from 23 North American cities (shown above) and merged these records with Street View and overhead RGB imagery. As the first urban forest dataset to cover multiple cities, we analyze in detail how forest models can generalize with respect to geographic distribution shifts, crucial to building systems that scale. We are releasing all 2.6M tree records publicly, along with aerial and ground-level imagery for 1M trees…(More)”
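As a rough illustration of the geographic generalization question the Google team raises, here is a minimal sketch in Python. The field names and split logic are assumptions for illustration, not the Auto Arborist Dataset’s published schema or evaluation protocol; the point is simply that holding out whole cities, rather than random trees, is what makes a test set reflect a geographic distribution shift.

```python
import pandas as pd

# Hypothetical records: the fields below (city, genus, image_path) are
# illustrative stand-ins, not the Auto Arborist Dataset's actual schema.
records = pd.DataFrame([
    {"city": "Vancouver", "genus": "Acer",  "image_path": "street/001.jpg"},
    {"city": "Vancouver", "genus": "Tilia", "image_path": "street/002.jpg"},
    {"city": "Denver",    "genus": "Acer",  "image_path": "street/003.jpg"},
    {"city": "Denver",    "genus": "Ulmus", "image_path": "street/004.jpg"},
])

def leave_one_city_out(df: pd.DataFrame, held_out_city: str):
    """Hold out one city entirely, so the test set reflects a geographic
    distribution shift rather than a random split."""
    train = df[df["city"] != held_out_city]
    test = df[df["city"] == held_out_city]
    return train, test

train, test = leave_one_city_out(records, "Denver")

# Genera present in the held-out city but never seen in training are one
# simple signal of how hard the shift will be for a classifier.
unseen_genera = set(test["genus"]) - set(train["genus"])
print(len(train), len(test), unseen_genera)
```

A classifier that looks strong under a random split can degrade sharply under a split like this, which is why multi-city coverage matters for systems meant to scale.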

Police Violence In Puerto Rico: Flooded With Data


Blog by Christine Grillo: “For María Mari-Narváez, a recent decision by the Supreme Court of Puerto Rico was both a victory and a moment of reckoning. The Court granted Kilómetro Cero, a citizen-led police accountability project in Puerto Rico, full access to every use-of-force report filed by the Puerto Rico Police Department since 2014. The decision will make it possible for advocates such as Mari to get a clear picture of how state police officers are using force, and when that use of force crosses the line into abuse. But the court victory flooded her small organization with data.

“We won, finally, and then I realized I was going to be receiving thousands of documents that I had zero capacity to process,” says Mari.

“One of the things that’s important to me when analyzing data is to find out where the gaps are, why those gaps exist, and what those gaps represent.” —Tarak Shah, data scientist

The Court made its decision in April 2021, and the police department started handing over PDF files in July. By the end, up to 10,000 documents could be turned in. In addition to incident reports, the police had to provide their use-of-force database. Combined, these records form a complicated mixture of quantitative and qualitative data that can be analyzed to answer questions about what the state police are doing to citizens during police interventions. In particular, Kilómetro Cero, which Mari founded, wants to find out whether some Puerto Ricans are more likely than others to be victims of police violence.

“We’re looking for bias,” says Mari. “Bias against poor people, or people who live in a certain neighborhood. Gender bias. Language bias. Bias against drug users, sex workers, immigrants, people who don’t have a house. We’re trying to analyze the language of vulnerability.”…(More)”.
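To give a sense of what working through such a release might involve, below is a minimal Python sketch. Every column name is invented for illustration and does not reflect the actual structure of the PRPD’s use-of-force database; the sketch surfaces two of the questions raised above: where the gaps in the records are, and how incidents are distributed across groups.

```python
import pandas as pd

# A stand-in for the use-of-force database; every column name here is
# hypothetical, not the PRPD's actual reporting format.
incidents = pd.DataFrame([
    {"year": 2019, "municipality": "San Juan", "force_type": "physical", "subject_gender": "M"},
    {"year": 2020, "municipality": "Ponce",    "force_type": "firearm",  "subject_gender": None},
    {"year": 2021, "municipality": "San Juan", "force_type": None,       "subject_gender": "F"},
])

# Where are the gaps? The share of missing values per field shows what the
# reports fail to record -- and, as Shah notes, why a gap exists can be a
# finding in itself.
print(incidents.isna().mean().sort_values(ascending=False))

# A first look at disparities: raw incident counts by municipality, which
# would still need to be normalized against population or encounter data
# before drawing any conclusion about bias.
print(incidents.groupby("municipality").size())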

Narrowing the data gap: World Bank and Microsoft commit to unlocking better development outcomes for persons with disabilities


Blog by Charlotte Vuyiswa McClain-Nhlapo and Jenny Lay-Flurrie: “Across the world, persons with disabilities remain invisible in the global development agenda. One key reason is the variance in the availability and use of disability-disaggregated data across organizations and borders.  

While an estimated one billion people (15 percent of the world’s population) have a disability, more data is needed to understand the true scale of the living conditions and development outcomes of persons with disabilities, and to clarify the degree to which they continue to be underserved.  

This reality is a part of what the World Bank calls the disability divide – the gap in societal inclusion for persons with disabilities in all stages of development programs, including education, employment and digital inclusion. The COVID-19 pandemic has exacerbated this divide and exposed some of the inequalities persons with disabilities face on a regular basis. 

Many governments around the world use census data to understand a country’s socioeconomic situation and to allocate resources or consider policy to address the needs of its citizens. While every country is on its own journey to leverage data to inform policy and development outcomes, there is an opportunity to bring data on disability together for the global public good, so that groups can more accurately prioritize disability inclusion within global efforts.  

In response to this challenge, the World Bank and Microsoft, in collaboration with the Disability Data Initiative at Fordham University, are partnering to expand both access to and the use of demographics and statistics data to ensure representation of disability, particularly in low- and middle-income countries. The goal of this effort is to develop a public-facing online “disability data hub” to offer information on persons with disabilities across populations, geographies and development indicators.  

Principles for the development of the hub include:  

  • Engaging with the disability community to inform the creation of the hub and its offerings. 
  • Aligning with the United Nations Sustainable Development Goals, which require countries to disaggregate data by disability by 2030. 
  • Taking a holistic approach to data collection on disabilities, including collating and aggregating multiple data sources, such as national household surveys and censuses. 
  • Providing a user-friendly and accessible interface for a wide range of users. 
  • Offering data analysis and accessible visualization tools. 
  • Serving as a knowledge repository by publishing trends and country profiles, offering trainings and capacity building materials and linking to relevant partner resources on disability data disaggregation…(More)”.
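To make the disaggregation principle above concrete, here is a minimal sketch, assuming invented household-survey columns rather than the hub’s actual data model, of how a single development indicator might be broken down by disability status.

```python
import pandas as pd

# Hypothetical household-survey microdata: disability_status, employed and
# weight are illustrative columns, not the hub's actual schema.
survey = pd.DataFrame([
    {"disability_status": "with disability",    "employed": 0, "weight": 1.2},
    {"disability_status": "with disability",    "employed": 1, "weight": 0.9},
    {"disability_status": "without disability", "employed": 1, "weight": 1.1},
    {"disability_status": "without disability", "employed": 1, "weight": 1.0},
])

# Survey-weighted employment rate, disaggregated by disability status, in the
# spirit of the SDG requirement to break indicators down by disability.
rates = (
    survey.assign(weighted_employed=survey["employed"] * survey["weight"])
          .groupby("disability_status")[["weighted_employed", "weight"]]
          .sum()
)
rates["employment_rate"] = rates["weighted_employed"] / rates["weight"]
print(rates["employment_rate"])
```

The same pattern extends to any indicator such a hub might publish: compute the weighted value separately for persons with and without disabilities, then report the gap.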

Better Data Sharing for Benefits Delivery


Article by Chris Sadler and Claire Park: “Robust federal assistance programs and social services are essential to a thriving society. This is especially the case as people continue to contend with the fallout from the COVID-19 pandemic, which jeopardized livelihoods and put millions out of employment. Government benefits at the federal, state, and local level help people across the country pay for food, housing, health care, and other basic living expenses. But more work is required at the federal level to ensure that these benefits reach everyone in need. For instance, the historic $1.2 trillion Infrastructure Investment and Jobs Act signed into law last year included a $14.2 billion program called the Affordable Connectivity Program (ACP) to help qualifying low-income households pay for internet service. While the program is off to a strong start, improved data sharing between federal agencies, state and local governments, and institutions can leverage existing data from other benefits programs to streamline eligibility processes and ensure those who qualify receive the benefit. Expanding data sharing for benefits eligibility also aligns with one of the goals in the recent executive order to advance racial equity.

We discuss how data sharing could be improved, as well as other steps that the federal government can take to maximize the impact of this benefit on the digital divide. The solutions outlined here can be applied to both current and future programs that help people find housing, prepare children for school, and ensure everyone has enough to eat…(More)”.

The modern malaise of innovation: overwhelm, complexity, and herding cats


Blog by Lucy Mason: “But the modern world is too complicated to innovate alone. Coming up with the idea is the easy bit: developing and implementing it inevitably involves navigating complex and choppy waters: multiple people, funding routes, personal agendas, legal complexity, and strategic fuzziness. All too often, great ideas fail to become reality not because the idea wouldn’t work, but because everything in the ecosystem seems (accidentally) designed to prevent it from working.

Given that innovation is a key Government priority, and so many organisations and people are dedicated to making it happen (such as Innovate UK), this lack of success seems odd. The problem does not lie with the R&D base: despite relative underinvestment by the UK Government, the UK punches well above its weight in world-leading R&D. Being an entrepreneur is, of course, hard work, high risk and prone to failure even for the most dedicated individuals. But are there particular features which inhibit how innovation is developed, scaled, implemented, and adopted in the UK? I would argue there are three key factors at play: overwhelm (too much), complexity (too vague), and ‘herding cats’ (too hard)…(More)”.

How to get to the core of democracy


Blog by Toralf Stark, Norma Osterberg-Kaufmann and Christoph Mohamad-Klotzbach: “…Many criticisms of conceptions of democracy are directed more at the institutional design than at its normative underpinnings. These include such things as the concept of representativeness. We propose focussing more on the normative foundations assessed by the different institutional frameworks than discussing the institutional frameworks themselves. We develop a new concept, which we call the ‘core principle of democracy’. By doing so, we address the conceptual and methodological puzzles theoretically and empirically. Thus, we embrace a paradigm shift.

Collecting data is ultimately meaningless if we do not find ways to assess, summarise and theorise it. Kei Nishiyama argued we must ‘shift our attention away from the concept of democracy and towards concepts of democracy’. In using the term concept, we are, in line with Nishiyama, following Rawls. Rawls claimed that ‘the concept of democracy refers to a single, common principle that transcends differences and on which everyone agrees’. In contrast with this, ‘ideas of democracy (…) refer to different, sometimes contested ideas based on a common concept’. This is what Laurence Whitehead calls the ‘timeless essence of democracy’….

Democracy is a latent construct and, by nature, not directly observable. Nevertheless, we are searching for indicators and empirically observable characteristics we can assign to democratic conceptions. However, by focusing only on specific patterns of institutions, only sometimes derived from theoretical considerations, we block our view of its multiple meanings. Thus, we’ve no choice but to search behind the scenes for the underlying ‘core’ principle the institutions serve.

The singular core principle that all concepts of democracy seek to realise is political self-efficacy…(More)”.


10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.