QuantGov


About: “QuantGov is an open-source policy analytics platform designed to help create greater understanding and analysis of the breadth of government actions through quantifying policy text. By using the platform, researchers can quickly and effectively retrieve unique data that lies embedded in large bodies of text – data on text complexity, part-of-speech metrics, topic modeling, etc. …

QuantGov is a tool designed to make policy text more accessible. Think of it as a hyper-powerful Google search that (1) finds specified content within massive quantities of text, and (2) finds patterns and groupings and can even make predictions about what is in a document. Some recent use cases include the following:

  • Analyzing state regulatory codes and predicting which parts of those codes are related to occupational licensing… and predicting which occupation the regulation is talking about… and determining the cost to receive the license.
  • Analyzing Canadian provincial regulatory codes while grouping individual regulations by industry topic… and determining which Ministers are responsible for those regulations… and determining the complexity of the text for those regulations.
  • Quantifying the number of tariff exclusions that exist due to the Trade Expansion Act of 1962 and recent tariff policies… and determining which products those exclusions target.
  • Comparing the regulatory codes and content of 46 US states, 11 Canadian provinces, and 7 Australian states… while using consistent metrics that can lead to insights that provide legitimate policy improvements…(More)”.
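
To make the kind of output described above more concrete, here is a minimal, self-contained Python sketch of this style of policy-text quantification: it counts restriction terms, uses average sentence length as a rough complexity measure, and tallies keywords as a crude stand-in for topic modeling. The term list, the metrics, and the function names are illustrative assumptions for this sketch, not QuantGov’s actual code or API.

```python
# Illustrative sketch only: a QuantGov-style pass over a plain-text regulatory
# document. The restriction-term list and the complexity proxy below are common
# in this literature but are assumptions, not QuantGov's implementation.
import re
from collections import Counter

# Terms often counted as "regulatory restrictions" in this kind of analysis.
RESTRICTION_TERMS = ["shall", "must", "may not", "required", "prohibited"]

def analyze(text: str) -> dict:
    """Return simple quantitative metrics for a block of policy text."""
    lowered = text.lower()

    # Count restriction terms (word-boundary matches, so "mustard" is ignored).
    restrictions = {
        term: len(re.findall(rf"\b{re.escape(term)}\b", lowered))
        for term in RESTRICTION_TERMS
    }

    # Crude complexity proxy: average number of words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_length = len(words) / len(sentences) if sentences else 0.0

    # Naive keyword tally as a stand-in for topic-model output.
    stopwords = {"the", "of", "and", "to", "a", "in", "or", "for", "be", "is"}
    keywords = Counter(w.lower() for w in words if w.lower() not in stopwords)

    return {
        "word_count": len(words),
        "restriction_counts": restrictions,
        "total_restrictions": sum(restrictions.values()),
        "avg_sentence_length": round(avg_sentence_length, 1),
        "top_keywords": keywords.most_common(5),
    }

if __name__ == "__main__":
    sample = (
        "A licensee shall complete 40 hours of approved training. "
        "Applicants must pay the required fee. "
        "A license may not be transferred, and unlicensed practice is prohibited."
    )
    print(analyze(sample))
```

Production analyses in this vein run over entire regulatory corpora and replace the keyword tally with trained classifiers or topic models, as in the occupational-licensing and provincial-code examples above.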

The Poisoning of the American Mind


Book by Lawrence M. Eppard: “Humans are hard-wired to look for information that they agree with (regardless of the information’s veracity), avoid information that makes them uncomfortable (even if that information is true), and interpret information in a manner that is most favorable to their sense of self. The damage these cognitive tendencies cause to one’s perception of reality depends in part upon the information that a person surrounds himself/herself with. Unfortunately, in the U.S. today, both liberals and conservatives are regularly bombarded with misleading information as well as lies from people they believe to be trustworthy and authoritative sources. While there are several factors one could plausibly blame for this predicament, the decline over the last few decades in the quality of the information sources that the right and left rely on plays a primary role. As a result of this decline, we are faced with an epistemic crisis that is poisoning the American mind and threatening our democracy. In his forthcoming book with Jacob L. Mackey, The Poisoning of the American Mind, Lawrence M. Eppard explores epistemic problems in both the right-wing and left-wing ideological silos in the U.S., including ideology presented as fact, misinformation, disinformation, and malinformation…(More)”.

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon feel AI’s influence, if it doesn’t already. A global race to put regulatory guardrails in place is taking shape even as companies and governments spend billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes: politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that already are or could be done by humans, now replacing humans … There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

Technology and the Transformation of U.S. Foreign Policy


Speech by Antony J. Blinken: “Today’s revolutions in technology are at the heart of our competition with geopolitical rivals. They pose a real test to our security. And they also represent an engine of historic possibility – for our economies, for our democracies, for our people, for our planet.

Put another way: Security, stability, prosperity – they are no longer solely analog matters.

The test before us is whether we can harness the power of this era of disruption and channel it into greater stability, greater prosperity, greater opportunity.

President Biden is determined not just to pass this “tech test,” but to ace it.

Our ability to design, to develop, to deploy technologies will determine our capacity to shape the tech future. And naturally, operating from a position of strength better positions us to set standards and advance norms around the world.

But our advantage comes not just from our domestic strength.

It comes from our solidarity with the majority of the world that shares our vision for a vibrant, open, and secure technological future, and from an unmatched network of allies and partners with whom we can work in common cause to pass the “tech test.”

We’re committed not to “digital sovereignty” but “digital solidarity.”

On May 6, the State Department unveiled the U.S. International Cyberspace and Digital Strategy, which treats digital solidarity as our North Star. Solidarity informs our approach not only to digital technologies, but to all key foundational technologies.

So what I’d like to do now is share with you five ways that we’re putting this into practice.

First, we’re harnessing technology for the betterment not just of our people and our friends, but of all humanity.

The United States believes emerging and foundational technologies can and should be used to drive development and prosperity, to promote respect for human rights, to solve shared global challenges.

Some of our strategic rivals are working toward a very different goal. They’re using digital technologies and genomic data collection to surveil their people, to repress human rights.

Pretty much everywhere I go, I hear from government officials and citizens alike about their concerns over these dystopian uses of technology. And I also hear an abiding commitment to our affirmative vision and to the embrace of technology as a pathway to modernization and opportunity.

Our job is to use diplomacy to try to grow this consensus even further – to internationalize and institutionalize our vision of “tech for good.”…(More)”.

Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges


Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.

These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or via clinical trial can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be too incomplete, biased, heterogeneous, or noisy to analyze using traditional data science methods.

AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.

By combining AI with the other three research modes, researchers can greatly accelerate the rate of scientific progress and be positioned to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”.

The Battle for Attention


Article by Nathan Heller: “…For years, we have heard a litany of reasons why our capacity to pay attention is disturbingly on the wane. Technology—the buzzing, blinking pageant on our screens and in our pockets—hounds us. Modern life, forever quicker and more scattered, drives concentration away. For just as long, concerns of this variety could be put aside. Television was described as a force against attention even in the nineteen-forties. A lot of focussed, worthwhile work has taken place since then.

But alarms of late have grown more urgent. Last year, the Organization for Economic Cooperation and Development reported a huge ten-year decline in reading, math, and science performance among fifteen-year-olds globally, a third of whom cited digital distraction as an issue. Clinical presentations of attention problems have climbed (a recent study of data from the medical-software company Epic found an over-all tripling of A.D.H.D. diagnoses between 2010 and 2022, with the steepest uptick among elementary-school-age children), and college students increasingly struggle to get through books, according to their teachers, many of whom confess to feeling the same way. Film pacing has accelerated, with the average length of a shot decreasing; in music, the mean length of top-performing pop songs declined by more than a minute between 1990 and 2020. A study conducted in 2004 by the psychologist Gloria Mark found that participants kept their attention on a single screen for an average of two and a half minutes before turning it elsewhere. These days, she writes, people can pay attention to one screen for an average of only forty-seven seconds.

“Attention as a category isn’t that salient for younger folks,” Jac Mullen, a writer and a high-school teacher in New Haven, told me recently. “It takes a lot to show that how you pay attention affects the outcome—that if you focus your attention on one thing, rather than dispersing it across many things, the one thing you think is hard will become easier—but that’s a level of instruction I often find myself giving.” It’s not the students’ fault, he thinks; multitasking and its euphemism, “time management,” have become goals across the pedagogic field. The SAT was redesigned this spring to be forty-five minutes shorter, with many reading-comprehension passages trimmed to two or three sentences. Some Ivy League professors report being counselled to switch up what they’re doing every ten minutes or so to avoid falling behind their students’ churn. What appears at first to be a crisis of attention may be a narrowing of the way we interpret its value: an emergency about where—and with what goal—we look.

“In many ways, it’s the oldest question in advertising: how to get attention,” an executive named Joanne Leong told me one afternoon, in a conference room on the thirteenth floor of the midtown office of the Dentsu agency. We were speaking about a new attention market. Slides were projected on the wall, and bits of conversation rattled like half-melted ice cubes in the corridor outside. For decades, what was going on between an advertisement and its viewers was unclear: there was no consensus about what attention was or how to quantify it. “The difference now is that there’s better tech to measure it,” Leong said…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures are often, like their counterparts in Europe, contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws leave open crucial loopholes that could shield companies from liability for harm caused by AI decisions — or from simply being forced to disclose when AI is used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by falsely accusing them of child care benefits fraud after an algorithm used to detect such fraud went awry…

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”.

People with Lived Experience and Expertise of Homelessness and Data Decision-Making


Toolkit by HUD Exchange: “People with lived experience and expertise of homelessness (PLEE) are essential partners for Continuums of Care (CoCs). Creating community models that acknowledge and practice inclusivity, while also valuing the agency of PLEE, is essential. CoCs should work together with PLEE to engage in the collection, review, analysis, and use of data to make collaborative decisions impacting their local community.

This toolkit offers suggestions on how PLEE, community partners, and CoCs can partner on data projects and additional local data decision-making efforts. It includes resources on partnership practices, compensation, and training…(More)”.

What is ‘lived experience’?


Article by Patrick J Casey: “Everywhere you turn, there is talk of lived experience. But there is little consensus about what the phrase ‘lived experience’ means, where it came from, and whether it has any value. Although long used by academics, it has become ubiquitous, leaping out of the ivory tower and showing up in activism, government, consulting, as well as popular culture. The Lived Experience Leaders Movement explains that those who have lived experiences have ‘[d]irect, first-hand experience, past or present, of a social issue(s) and/or injustice(s)’. A recent brief from the US Department of Health and Human Services suggests that those who have lived experience have ‘valuable and unique expertise’ that should be consulted in policy work, since engaging those with ‘knowledge based on [their] perspective, personal identities, and history’ can ‘help break down power dynamics’ and advance equity. A search of Twitter reveals a constant stream of use, from assertions like ‘Your research doesn’t override my lived experience,’ to ‘I’m pretty sure you’re not allowed to question someone’s lived experience.’

A recurring theme is a connection between lived experience and identity. A recent nominee for the US Secretary of Labor, Julie Su, is lauded as someone who will ‘bring her lived experience as a daughter of immigrants, a woman of color, and an Asian American to the role’. The Human Rights Campaign asserts that ‘[l]aws and legislation must reflect the lived experiences of LGBTQ people’. An editorial in Nature Mental Health notes that incorporation of ‘people with lived experience’ has ‘taken on the status of a movement’ in the field.

Carried a step further, the notion of lived experience is bound up with what is often called identity politics, as when one claims to be speaking from the standpoint of an identity group – ‘in my lived experience as a…’ or, simply, ‘speaking as a…’ Here, lived experience is often invoked to establish authority and prompt deference from others since, purportedly, only members of a shared identity know what it’s like to have certain kinds of experience or to be a member of that group. Outsiders sense that they shouldn’t criticise what is said because, grounded in lived experience, ‘people’s spoken truths are, in and of themselves, truths.’ Criticism of lived experience might be taken to invalidate or dehumanise others or make them feel unsafe.

So, what is lived experience? Where did it come from? And what does it have to do with identity politics?…(More)”.