Public Meetings Thwart Housing Reform Where It Is Needed Most


Interview with Katherine Levine Einstein by Jake Blumgart: “Public engagement can have downsides. Neighborhood participation in the housing permitting process makes existing political inequalities worse, limits housing supply and contributes to the affordability crisis….

In 2019, Katherine Levine Einstein and her co-authors at Boston University produced the first in-depth study of this dynamic, Neighborhood Defenders, providing unique insight into how hyper-local democracy can yield warped land-use outcomes. Governing talked with her about the politics of delay, what kinds of regulation hamper growth and when community meetings can still be an effective means of public feedback.

Governing: What could be wrong with a neighborhood meeting? Isn’t this democracy in its purest form? 

Katherine Levine Einstein: In this book, rather than look at things in their ideal form, we actually evaluated how they are working on the ground. We bring data to the question of whether neighborhood meetings are really providing community voice. One of the reasons we think of them as an important cornerstone of American democracy is that they supposedly provide us with perspectives that are not widely heard, really amplifying the voices of neighborhood residents.

What we’re able to do in the book is to really bring home the idea that the people who are showing up are not actually representative of their broader communities and they are unrepresentative in really important ways. They’re much more likely to be opposed to new housing, and they’re demographically privileged on a number of dimensions….

What we find happens in practice is that even in less privileged places, these neighborhood meetings are actually amplifying more privileged voices. We study a variety of more disadvantaged places and what the dynamics of these meetings look like. The principles that hold in more affluent communities still play out in these less privileged places. You still hear from voices that are overwhelmingly opposed to new housing. The voices that are heard are much more likely to be homeowners, white and older…(More)”.

Evidence is a policymaker’s biggest weapon


Report by Jacquelyn Zhang: “Fundamentally, public policy is supposed to address serious social problems. However, poorly designed policies exist. Often this happens when a well-intentioned policy generates unexpected and unintended consequences; sometimes those consequences leave policymakers farther from their goal than when they started.

Consider just a few examples.

The first is the impact of an immigration law used in the United States ostensibly to control the flow of undocumented immigrants into the country. The controversial bill imposed extreme restrictions on undocumented immigrants in the state of Alabama, limiting every aspect of immigrants’ lives.

By employing a synthetic control methodology, researchers showed that the bill had a substantial and negative unintended effect – an increase in violent crime. The increase could be linked back to the bill because, while violent crime rose, property crime did not.

This may be because the passage of one of the country’s strictest anti-immigration laws signalled to the community that the system had more tolerance for discrimination against undocumented immigrants in Alabama, fuelling distrust and eventually violent conflict.
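To make the method concrete, here is a minimal sketch of how a synthetic control estimate of this kind can be constructed. It uses simulated numbers; the donor pool, years, and crime rates are illustrative assumptions, not the study’s actual data or specification.

```python
# A minimal synthetic-control sketch with simulated data. The donor pool,
# years, and crime rates are illustrative assumptions, not the study's data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical yearly violent-crime rates: 8 pre-law years for the treated
# state and for a donor pool of 20 untreated comparison states.
pre_treated = rng.normal(450.0, 10.0, size=8)
pre_donors = rng.normal(450.0, 10.0, size=(8, 20))

def pre_period_fit(w):
    # Squared distance between the treated state and the weighted donor
    # pool over the pre-treatment years; synthetic control minimizes this.
    return float(np.sum((pre_treated - pre_donors @ w) ** 2))

n_donors = pre_donors.shape[1]
weights = minimize(
    pre_period_fit,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,  # donor weights are non-negative
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # sum to 1
).x

# Post-law gap between the treated state and its synthetic counterpart:
# the estimated yearly effect of the law on violent crime.
post_treated = rng.normal(470.0, 10.0, size=5)
post_donors = rng.normal(450.0, 10.0, size=(5, 20))
effect = post_treated - post_donors @ weights
print("Estimated yearly effect on violent crime:", np.round(effect, 1))
```

Running the same construction on property crime serves as the falsification test described above: if a gap opens for violent crime but not for property crime, the effect is more plausibly attributable to the law.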

This is not a freak event either. Policymakers know that enacting laws doesn’t just change the wording of legislation. It shapes social norms, prescribes attitudes, and affects community behaviour. Of course, this is also why good policy-making can be so productive…(More)”.

Humans in the Loop


Paper by Rebecca Crootof, Margot E. Kaminski and W. Nicholson Price II: “From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.

Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s-eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement, or it may indirectly encourage or discourage it, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward….(More)”.

Policy Building Blocks, And How We Talk About The Law


Article by Cathy Gellis: “One of the fundamental difficulties in doing policy advocacy, including, and perhaps especially, tech policy advocacy, is that we are speaking not only of technology, which can often seem inscrutable and scary to non-experts, but also of law, which is itself an intricate and often opaque system. The complicated nature of our legal system presents challenges, because policy involves an application of law to technology, and we can’t apply the law well when we don’t understand how it works. (It’s also hard to do well when we don’t understand how the technology works, but this post is about the law, so we’ll leave the issues with understanding technology aside for now.)

Even among lawyers, who should have some expertise in understanding the law, people find themselves at different points along the learning curve in grasping the intricacies and basic mechanics of our legal system. As explained before, law is often so complex that, even as practitioners, lawyers tend to become very specialized and may lose touch with basic concepts they do not often encounter in the course of their careers.

Meanwhile, it shouldn’t just be lawyers who understand the law. Certainly policymakers, charged with making the law, should have a solid understanding of what they are working with. But regular people should too. After all, the point of a democracy is that the people get to decide what their laws should be (or at least be able to charge their representatives with making good ones on their behalf). And people can’t make good choices when they don’t understand how those choices fit into the system they are being made for.

Remember that none of these choices are being made in a vacuum; we do not find ourselves today with a completely blank canvas. Instead, we’ve all inherited a legal system that has chugged along for two centuries. We can, of course, choose to change any of it should we wish, but such an exercise is best served by a solid grasp of just what it is we would be changing. Only with that insight can we be sure that any changes we make are needed, appropriate, and not themselves likely to cause even more problems than whatever we were trying to fix…(More)”.

Data Types, Data Doubts & Data Trusts


Paper by João Marinotti: “Data is not monolithic. Nonetheless, the word is frequently used indiscriminately, referring to a large number of different concepts. It may refer to information writ large, or specifically to personally identifiable information, discrete digital files, trade secrets, and even to sets of AI-generated content. Yet each of these types of “data” requires a different governance regime in commerce, in life, and in law. Despite this diversity, the singular concept of data trusts is promulgated as a solution to our collective data governance problems. Data trusts—meant to cover all of these types of data—are said to promote personal privacy, increase corporate transparency, facilitate the sharing of data, and even pave the way for the next generation of artificial intelligence. These anticipated benefits, however, require the body and flexibility of equitable trust law and its inherent fiduciary relationships. Unfortunately, American trust law does not allow for the existence of such general data trusts. If anything, the judicial, academic, and legislative confusion regarding data rights—or data’s status as property—demonstrates that discussions of data trusts may be ignoring a key element. Without first determining whether (or what kind of) data can be recognized as a trust res (i.e., as trust property) under existing law, it may be premature to accept data trusts as the private law solution to our data governance ills. If, on the other hand, the implementation of data trusts requires legislative intervention, their purported benefits must be analyzed against the myriad other new and evolving data governance frameworks that would similarly require legislation. By analyzing existing trust law and the difficulties of defining data rights, this essay highlights the urgent need to pursue doctrinally, legislatively, and technologically viable data governance strategies….(More)”.

Data Literacy for the Public Sector: Lessons from Early Pioneers in the U.S.


Paper by Nick Hart, Adita Karkera, and Valerie Logan: “Advances in the access, collection, management, analysis, and use of data across public sector organizations have contributed substantially to steady improvements in services, efficiency of operations, and effectiveness of government programs. The experience of citizens, beneficiaries, managers, and data experts is also evolving as data becomes pervasive and more seamlessly integrated into decision-making processes. For agencies to engage effectively with the ever-changing data landscape, organizational data literacy capacity and program models can help ensure individuals across the workforce can read, write, and communicate with data in the context of their roles.

Data and analytics are no longer “just” for specialists, such as data engineers and data scientists; rather, data literacy is now increasingly recognized as a core workforce competency. Fortunately, several pioneers in the United States have emerged in strategically advancing data literacy programs and activities at the organizational level, providing benefits to individuals in the public sector workforce. Pioneering programs are those that recognize data literacy as more than training. They view data literacy as a holistic program of activities to engage employees at all levels with data, develop employees with relevant skills, and scale data literacy by augmenting employees’ skills with guided learning support and resources.

Agencies should begin by crafting the case for change. As is common with any emerging field, varying definitions and interpretations of “data literacy” are prevalent, which can affect program design. Being explicit about the problems being solved, as well as the needs and drivers a data literacy program or capacity is meant to address, is vital to mitigating false starts…(More)”.

How Native Americans Are Trying to Debug A.I.’s Biases


Alex V. Cipolle in The New York Times: “In September 2021, Native American technology students in high school and college gathered at a conference in Phoenix and were asked to create photo tags — word associations, essentially — for a series of images.

One image showed ceremonial sage in a seashell; another, a black-and-white photograph circa 1884, showed hundreds of Native American children lined up in uniform outside the Carlisle Indian Industrial School, one of the most prominent boarding schools run by the American government during the 19th and 20th centuries.

For the ceremonial sage, the students chose the words “sweetgrass,” “sage,” “sacred,” “medicine,” “protection” and “prayers.” They gave the photo of the boarding school tags with a different tone: “genocide,” “tragedy,” “cultural elimination,” “resiliency” and “Native children.”

The exercise was for the workshop Teaching Heritage to Artificial Intelligence Through Storytelling at the annual conference for the American Indian Science and Engineering Society. The students were creating metadata that could train a photo recognition algorithm to understand the cultural meaning of an image.
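As an illustration, tags like these can be recorded as structured annotation metadata for a model to train on. The sketch below is hypothetical; the file names and schema are assumptions for illustration, not the workshop’s actual format.

```python
# A hypothetical annotation format for culturally informed image tags;
# file names and schema are assumptions, not the workshop's actual format.
import json

annotations = [
    {
        "image": "ceremonial_sage_in_seashell.jpg",
        "tags": ["sweetgrass", "sage", "sacred", "medicine",
                 "protection", "prayers"],
    },
    {
        "image": "carlisle_boarding_school_1884.jpg",
        "tags": ["genocide", "tragedy", "cultural elimination",
                 "resiliency", "Native children"],
    },
]

# Image-recognition models typically consume labels as (image, tag) pairs.
training_pairs = [(record["image"], tag)
                  for record in annotations
                  for tag in record["tags"]]

with open("training_metadata.json", "w") as f:
    json.dump(annotations, f, indent=2)

print(f"Wrote {len(training_pairs)} image-tag training pairs")
```

Metadata like this is what lets a model associate an image with culturally specific meanings rather than generic visual categories.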

The workshop presenters — Chamisa Edmo, a technologist and citizen of the Navajo Nation, who is also Blackfeet and Shoshone-Bannock; Tracy Monteith, a senior Microsoft engineer and member of the Eastern Band of Cherokee Indians; and the journalist Davar Ardalan — then compared these answers with those produced by a major image recognition app.

For the ceremonial sage, the app’s top tag was “plant,” but other tags included “ice cream” and “dessert.” The app tagged the school image with “human,” “crowd,” “audience” and “smile” — the last a particularly odd descriptor, given that few of the children are smiling.

The image recognition app botched its task, Mr. Monteith said, because it didn’t have proper training data. Ms. Edmo explained that tagging results are often “outlandish” and “offensive,” recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. Ardalan noted as an example, because of the abundance of data on the topic….(More)”.

The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean


OECD Report: “Governments can use artificial intelligence (AI) to design better policies and make better and more targeted decisions, enhance communication and engagement with citizens, and improve the speed and quality of public services. The Latin America and the Caribbean (LAC) region is seeking to leverage the immense potential of AI to promote the digital transformation of the public sector. The OECD, in collaboration with CAF, Development Bank of Latin America, prepared this report to help national governments in the LAC region understand the current regional baseline of activities and capacities for AI in the public sector; to identify specific approaches and actions they can take to enhance their ability to use this emerging technology for efficient, effective and responsive governments; and to collaborate across borders in pursuit of a regional vision for AI in the public sector. This report incorporates a stocktaking of each country’s strategies and commitments around AI in the public sector, including their alignment with the OECD AI Principles. It also includes an analysis of efforts to build key governance capacities and put in place critical enablers for AI in the public sector. It concludes with a series of recommendations for governments in the LAC region….(More)”.

The first answer for food insecurity: data sovereignty


Interview by Brian Oaster: “For two years now, the COVID-19 pandemic has exacerbated almost every structural inequity in Indian Country. Food insecurity is high on that list.

Like other inequities, it’s an intergenerational product of dispossession and congressional underfunding — nothing new for Native communities. What is new, however, is the ability of Native organizations and sovereign nations to collectively study and understand the needs of the many communities facing the issue. The age of data sovereignty has (finally) arrived.

To that end, the Native American Agriculture Fund (NAAF) partnered with the Indigenous Food and Agriculture Initiative (IFAI) and the Food Research and Action Center (FRAC) to produce a special report, Reimagining Hunger Responses in Times of Crisis, which was released in January.

According to the report, 48% of the more than 500 Native respondents surveyed across the country agreed that “sometimes or often during the pandemic the food their household bought just didn’t last, and they didn’t have money to get more.” Food security and access were especially low among Natives with young children or elders at home, people in fair to poor health and those whose employment was disrupted by the pandemic. “Native households experience food insecurity at shockingly higher rates than the general public and white households,” the report noted.

It also detailed how, throughout the pandemic, Natives overwhelmingly turned to their tribal governments and communities — as opposed to state or federal programs — for help. State and federal programs, like the Supplemental Nutrition Assistance Program, or SNAP, don’t always mesh with the needs of rural reservations. A benefits card is useless if there’s no food store in your community. In response, tribes and communities came together and worked to get their people fed.

Understanding how and why will help pave the way for legislation that empowers tribes to provide for their own people, by using federal funding to build local agricultural infrastructure, for instance, instead of relying on assistance programs that don’t always work. HCN spoke with the Native American Agriculture Fund’s CEO, Toni Stanger-McLaughlin (Colville), to find out more…(More)”.

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence


NIST Report: “As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to use this information responsibly, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI)….(More)”