Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

Technology and the Transformation of U.S. Foreign Policy


Speech by Antony J. Blinken: “Today’s revolutions in technology are at the heart of our competition with geopolitical rivals. They pose a real test to our security. And they also represent an engine of historic possibility – for our economies, for our democracies, for our people, for our planet.

Put another way: Security, stability, prosperity – they are no longer solely analog matters.

The test before us is whether we can harness the power of this era of disruption and channel it into greater stability, greater prosperity, greater opportunity.

President Biden is determined not just to pass this “tech test,” but to ace it.

Our ability to design, to develop, to deploy technologies will determine our capacity to shape the tech future. And naturally, operating from a position of strength better positions us to set standards and advance norms around the world.

But our advantage comes not just from our domestic strength.

It comes from our solidarity with the majority of the world that shares our vision for a vibrant, open, and secure technological future, and from an unmatched network of allies and partners with whom we can work in common cause to pass the “tech test.”

We’re committed not to “digital sovereignty” but “digital solidarity.”

On May 6, the State Department unveiled the U.S. International Cyberspace and Digital Strategy, which treats digital solidarity as our North Star. Solidarity informs our approach not only to digital technologies, but to all key foundational technologies.

So what I’d like to do now is share with you five ways that we’re putting this into practice.

First, we’re harnessing technology for the betterment not just of our people and our friends, but of all humanity.

The United States believes emerging and foundational technologies can and should be used to drive development and prosperity, to promote respect for human rights, to solve shared global challenges.

Some of our strategic rivals are working toward a very different goal. They’re using digital technologies and genomic data collection to surveil their people, to repress human rights.

Pretty much everywhere I go, I hear from government officials and citizens alike about their concerns about these dystopian uses of technology. And I also hear an abiding commitment to our affirmative vision and to the embrace of technology as a pathway to modernization and opportunity.

Our job is to use diplomacy to try to grow this consensus even further – to internationalize and institutionalize our vision of “tech for good”…(More)”.

Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges


Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.

These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or clinical trials can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be incomplete, biased, heterogeneous, or too noisy to analyze using traditional data science methods.

AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.

Combining AI with the other three research modes will greatly accelerate the rate of scientific progress and position researchers to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”.

The Battle for Attention


Article by Nathan Heller: “…For years, we have heard a litany of reasons why our capacity to pay attention is disturbingly on the wane. Technology—the buzzing, blinking pageant on our screens and in our pockets—hounds us. Modern life, forever quicker and more scattered, drives concentration away. For just as long, concerns of this variety could be put aside. Television was described as a force against attention even in the nineteen-forties. A lot of focussed, worthwhile work has taken place since then.

But alarms of late have grown more urgent. Last year, the Organization for Economic Cooperation and Development reported a huge ten-year decline in reading, math, and science performance among fifteen-year-olds globally, a third of whom cited digital distraction as an issue. Clinical presentations of attention problems have climbed (a recent study of data from the medical-software company Epic found an over-all tripling of A.D.H.D. diagnoses between 2010 and 2022, with the steepest uptick among elementary-school-age children), and college students increasingly struggle to get through books, according to their teachers, many of whom confess to feeling the same way. Film pacing has accelerated, with the average length of a shot decreasing; in music, the mean length of top-performing pop songs declined by more than a minute between 1990 and 2020. A study conducted in 2004 by the psychologist Gloria Mark found that participants kept their attention on a single screen for an average of two and a half minutes before turning it elsewhere. These days, she writes, people can pay attention to one screen for an average of only forty-seven seconds.

“Attention as a category isn’t that salient for younger folks,” Jac Mullen, a writer and a high-school teacher in New Haven, told me recently. “It takes a lot to show that how you pay attention affects the outcome—that if you focus your attention on one thing, rather than dispersing it across many things, the one thing you think is hard will become easier—but that’s a level of instruction I often find myself giving.” It’s not the students’ fault, he thinks; multitasking and its euphemism, “time management,” have become goals across the pedagogic field. The SAT was redesigned this spring to be forty-five minutes shorter, with many reading-comprehension passages trimmed to two or three sentences. Some Ivy League professors report being counselled to switch up what they’re doing every ten minutes or so to avoid falling behind their students’ churn. What appears at first to be a crisis of attention may be a narrowing of the way we interpret its value: an emergency about where—and with what goal—we look.

“In many ways, it’s the oldest question in advertising: how to get attention,” an executive named Joanne Leong told me one afternoon, in a conference room on the thirteenth floor of the midtown office of the Dentsu agency. We were speaking about a new attention market. Slides were projected on the wall, and bits of conversation rattled like half-melted ice cubes in the corridor outside. For decades, what was going on between an advertisement and its viewers was unclear: there was no consensus about what attention was or how to quantify it. “The difference now is that there’s better tech to measure it,” Leong said…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures, like their counterparts in Europe, are often contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws are overlooking crucial loopholes that could shield companies from liability when it comes to harm caused by AI decisions — or from simply being forced to disclose when it’s used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by falsely accusing them of child care benefits fraud after a fraud-detection algorithm went awry…

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”.

People with Lived Experience and Expertise of Homelessness and Data Decision-Making


Toolkit by HUD Exchange: “People with lived experience and expertise of homelessness (PLEE) are essential partners for Continuums of Care (CoCs). Creating community models that acknowledge and practice inclusivity, while also valuing the agency of PLEE, is essential. CoCs should work together with PLEE to engage in the collection, review, analysis, and use of data to make collaborative decisions impacting their local community.

This toolkit offers suggestions on how PLEE, community partners, and CoCs can partner on data projects and additional local data decision-making efforts. It includes resources on partnership practices, compensation, and training…(More)”.

What is ‘lived experience’?


Article by Patrick J Casey: “Everywhere you turn, there is talk of lived experience. But there is little consensus about what the phrase ‘lived experience’ means, where it came from, and whether it has any value. Although long used by academics, it has become ubiquitous, leaping out of the ivory tower and showing up in activism, government, consulting, as well as popular culture. The Lived Experience Leaders Movement explains that those who have lived experiences have ‘[d]irect, first-hand experience, past or present, of a social issue(s) and/or injustice(s)’. A recent brief from the US Department of Health and Human Services suggests that those who have lived experience have ‘valuable and unique expertise’ that should be consulted in policy work, since engaging those with ‘knowledge based on [their] perspective, personal identities, and history’ can ‘help break down power dynamics’ and advance equity. A search of Twitter reveals a constant stream of use, from assertions like ‘Your research doesn’t override my lived experience,’ to ‘I’m pretty sure you’re not allowed to question someone’s lived experience.’

A recurring theme is a connection between lived experience and identity. A recent nominee for the US Secretary of Labor, Julie Su, is lauded as someone who will ‘bring her lived experience as a daughter of immigrants, a woman of color, and an Asian American to the role’. The Human Rights Campaign asserts that ‘[l]aws and legislation must reflect the lived experiences of LGBTQ people’. An editorial in Nature Mental Health notes that incorporation of ‘people with lived experience’ has ‘taken on the status of a movement’ in the field.

Carried a step further, the notion of lived experience is bound up with what is often called identity politics, as when one claims to be speaking from the standpoint of an identity group – ‘in my lived experience as a…’ or, simply, ‘speaking as a…’ Here, lived experience is often invoked to establish authority and prompt deference from others since, purportedly, only members of a shared identity know what it’s like to have certain kinds of experience or to be a member of that group. Outsiders sense that they shouldn’t criticise what is said because, grounded in lived experience, ‘people’s spoken truths are, in and of themselves, truths.’ Criticism of lived experience might be taken to invalidate or dehumanise others or make them feel unsafe.

So, what is lived experience? Where did it come from? And what does it have to do with identity politics?…(More)”.

The economic research policymakers actually need


Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.

The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and from academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:

  1. New measures of the economy
  2. Broad literature reviews
  3. Analyses that directly quantify or simulate policy decisions

If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.

The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals. Here are some examples:

Technological Progress and Rent Seeking


Paper by Vincent Glode & Guillermo Ordoñez: “We model firms’ allocation of resources across surplus-creating (i.e., productive) and surplus-appropriating (i.e., rent-seeking) activities. Our model predicts that industry-wide technological advancements, such as recent progress in data collection and processing, induce a disproportionate and socially inefficient reallocation of resources toward surplus-appropriating activities. As technology improves, firms rely more on appropriation to obtain their profits, endogenously reducing the impact of technological progress on economic progress and inflating the price of the resources used for both types of activities. We apply our theoretical insights to shed light on the rise of high-frequency trading…(More)”.

The CFPB wants to rein in data brokers


Article by Gaby Del Valle: “The Consumer Financial Protection Bureau wants to propose new regulations that would require data brokers to comply with the Fair Credit Reporting Act. In a speech at the White House earlier this month, CFPB Director Rohit Chopra said the agency is looking into policies to “ensure greater accountability” for companies that buy and sell consumer data, in keeping with an executive order President Joe Biden issued in late February.

Chopra said the agency is considering proposals that would define data brokers that sell certain types of data as “consumer reporting agencies,” thereby requiring those companies to comply with the Fair Credit Reporting Act (FCRA). The statute bans sharing certain kinds of data (e.g., your credit report) with entities unless they serve a specific purpose outlined in the law (e.g., if the report is used for employment purposes or to extend a line of credit to someone).

The CFBP views the buying and selling of consumer data as a national security issue, not just a matter of privacy. Chopra mentioned three massive data breaches — the 2015 Anthem leak, the 2017 Equifax hack, and the 2018 Marriott breach — as examples of foreign adversaries illicitly obtaining Americans’ personal data. “When Americans’ health information, financial information, and even their travel whereabouts can be assembled into detailed dossiers, it’s no surprise that this raises risks when it comes to safety and security,” Chopra said. But the focus on high-profile hacks obscures a more pervasive, totally legal phenomenon: data brokers’ ability to sell detailed personal information to anyone who’s willing to pay for it…(More)”.