Fixing financial data to assess systemic risk


Report by Greg Feldberg: “The COVID-19 market disruption again highlighted the flaws in the data that the public and the authorities use to assess risks in the financial system. We don’t have the right data, we can’t analyze the data we do have, and there are all sorts of holes. Amidst extreme uncertainty in times like this, market participants need better data to manage their risks, just as policymakers need better data to calibrate their crisis interventions. This paper argues that the new administration should make it a priority to fix financial regulatory data, starting during the transition.

The incoming administration should, first, emphasize data when vetting candidates for top financial regulatory positions. Every agency head should recognize the problem and the roles they must play in the solution. They should recognize how the Evidence Act of 2018 and other recent legislation help define those roles. And every agency head should recognize the role of the Office of Financial Research (OFR) within the regulatory community. Only the OFR has the mandate and experience to provide the necessary leadership to address these problems.

The incoming administration should empower the OFR to do its job and coordinate a systemwide financial data strategy, working with the regulators. That strategy should set a path for identifying key data gaps that impede risk analysis; setting data standards; sharing data securely, among authorities and with the public; and embracing new technologies that make it possible to manage data far more efficiently and securely than ever before. These are ambitious goals, but the administration may be able to accomplish them with vision and leadership…(More)”.

Public Value Science


Barry Bozeman in Issues in Science and Technology: “Why should the United States government support science? That question was apparently settled 75 years ago by Vannevar Bush in Science, the Endless Frontier: “Since health, well-being, and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.”

Having dispensed with the question of why, all that remained was for policy-makers to decide, how much? Even at the dawn of modern science policy, costs and funding needs were at the center of deliberations. Though rarely discussed anymore, Endless Frontier did give specific attention to the question of how much. The proposed amounts seem, by today’s standards, modest: “It is estimated that an adequate program for Federal support of basic research in the colleges, universities, and research institutes and for financing important applied research in the public interest, will cost about 10 million dollars at the outset and may rise to about 50 million dollars annually when fully underway at the end of perhaps 5 years.”

In today’s dollars, $50 million translates to about $535 million, or less than 2% of what the federal government actually spent for basic research in 2018. One way to look at the legacy of Endless Frontier is that by answering the why question so convincingly, it logically followed that the how much question could always be answered simply by “more.”
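As a rough check on that conversion (a back-of-the-envelope sketch, assuming Bush's $50 million is taken in roughly 1950 dollars, since the program was to be "fully underway" after about five years, and assuming CPI-U annual averages of about 24.1 for 1950 and 251.1 for 2018):

$$\$50\text{M} \times \frac{\text{CPI}_{2018}}{\text{CPI}_{1950}} \approx \$50\text{M} \times \frac{251.1}{24.1} \approx \$521\text{M},$$

in the same range as the essay's $535 million figure; the exact number depends on the deflator and base year chosen.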

In practice, however, the why question continues to seem so self-evident because it fails to consider a third question, who? As in, who benefits from this massive federal investment in research, and who does not? The question of who was also seemingly answered by Endless Frontier, which not only offered full employment as a major goal for expanded research but also embraced “the sound democratic principle that there should be no favored classes or special privilege.”

But I argue that this principle has now been soundly falsified. In an economic environment characterized by growth but also by extreme inequality, science and technology not only reinforce inequality but also, in some instances, help widen the gap. Science and technology can be a regressive factor in the economy. Thus, it is time to rethink the economic equation justifying government support for science not just in terms of why and how much, but also in terms of who.

What logic supports my claim that under conditions of conspicuous inequality, science and technology research is often a regressive force? Simple: except in the case of the most basic of basic research (such as exploration of other galaxies), effects are never randomly distributed. Both the direct and indirect effects of science and technology tend to differentially affect citizens according to their socioeconomic power and purchasing power….(More)”.

Open government data, uncertainty and coronavirus: An infodemiological case study


Paper by Nikolaos Yiannakoulias, Catherine E. Slavik, Shelby L. Sturrock, J. Connor Darlington: “Governments around the world have made data on COVID-19 testing, case numbers, hospitalizations and deaths openly available, and a breadth of researchers, media sources and data scientists have curated and used these data to inform the public about the state of the coronavirus pandemic. However, it is unclear if all data being released convey anything useful beyond the reputational benefits of governments wishing to appear open and transparent. In this analysis we use Ontario, Canada as a case study to assess the value of publicly available SARS-CoV-2 positive case numbers. Using a combination of real data and simulations, we find that daily publicly available test results probably contain considerable error about individual risk (measured as proportion of tests that are positive, population-based incidence and prevalence of active cases) and that short-term variations are very unlikely to provide useful information for any plausible decision making on the part of individual citizens. Open government data can increase the transparency and accountability of government; however, it is essential that all publication, use and re-use of these data highlight their weaknesses to ensure that the public is properly informed about the uncertainty associated with SARS-CoV-2 information….(More)”
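A minimal simulation sketch (not the paper's actual code; the fixed true rate and daily test volume below are invented for illustration) of why short-term swings in reported positivity can be mostly sampling noise:

```python
# Illustrative only: daily test positivity under a *constant* true rate.
# All day-to-day movement here is pure binomial sampling noise, yet it can
# easily be misread as a real change in risk.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.03      # assumed constant share of tests that are truly positive
daily_tests = 1_000   # e.g., one public health unit's daily test volume

positives = rng.binomial(daily_tests, true_rate, size=30)  # 30 simulated days
observed = positives / daily_tests

print(f"true rate: {true_rate:.1%}")
print(f"observed daily positivity ranges from {observed.min():.1%} to {observed.max():.1%}")
```

Even with the underlying risk never changing, the observed series swings noticeably from day to day, which is the paper's caution against reading meaning into short-term variation.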

Tackling Societal Challenges with Open Innovation


Introduction to Special Issue of California Management Review by Anita M. McGahan, Marcel L. A. M. Bogers, Henry Chesbrough, and Marcus Holgersson: “Open innovation includes external knowledge sources and paths to market as complements to internal innovation processes. Open innovation has to date been driven largely by business objectives, but the imperative of social challenges has turned attention to the broader set of goals to which open innovation is relevant. This introduction discusses how open innovation can be deployed to address societal challenges—as well as the trade-offs and tensions that arise as a result. Against this background we introduce the articles published in this Special Section, which were originally presented at the sixth Annual World Open Innovation Conference….(More)”.

Enslaved.org


About: “As of December 2020, we have built a robust, open-source architecture to discover and explore nearly a half million people records and 5 million data points. From archival fragments and spreadsheet entries, we see the lives of the enslaved in richer detail. Yet there’s much more work to do, and with the help of scholars, educators, and family historians, Enslaved.org will be rapidly expanding in 2021. We are just getting started….

In recent years, a growing number of archives, databases, and collections that organize and make sense of records of enslavement have become freely and readily accessible for scholarly and public consumption. This proliferation of projects and databases presents a number of challenges:

  • Disambiguating and merging individuals across multiple datasets is nearly impossible given their current, siloed nature (see the sketch after this list);
  • Searching, browsing, and quantitative analysis across projects is extremely difficult;
  • It is often difficult to find projects and databases;
  • There are no best practices for digital data creation;
  • Many projects and datasets are in danger of going offline and disappearing.
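A toy sketch of that first challenge (the names and records below are invented; real disambiguation draws on far richer evidence than string similarity):

```python
# Two siloed datasets may describe the same person under variant spellings,
# with no shared identifier to join on -- the core record-linkage problem.
from difflib import SequenceMatcher

registry_a = [{"name": "Maria Joaquina", "baptism_year": 1817}]
registry_b = [{"name": "Maria Joachina", "baptism_year": 1817}]

def name_similarity(a: str, b: str) -> float:
    """Crude fuzzy match on names, scored between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for rec_a in registry_a:
    for rec_b in registry_b:
        if (name_similarity(rec_a["name"], rec_b["name"]) > 0.85
                and rec_a["baptism_year"] == rec_b["baptism_year"]):
            print("possible match:", rec_a["name"], "~", rec_b["name"])
```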

In response to these challenges, Matrix: The Center for Digital Humanities & Social Sciences at Michigan State University (MSU), in partnership with the MSU Department of History, University of Maryland, and scholars at multiple institutions, developed Enslaved: Peoples of the Historical Slave Trade. Enslaved.org’s primary focus is people—individuals who were enslaved, owned slaves, or participated in slave trading….(More)”.

Data Disappeared


Essay by Samanth Subramanian: “Whenever President Donald Trump is questioned about why the United States has nearly three times more coronavirus cases than the entire European Union, or why hundreds of Americans are still dying every day, he whips out one standard comment. We find so many cases, he contends, because we test so many people. The remark typifies Trump’s deep distrust of data: his wariness of what it will reveal, and his eagerness to distort it. In March, when he refused to allow coronavirus-stricken passengers off the Grand Princess cruise liner and onto American soil for medical treatment, he explained: “I like the numbers where they are. I don’t need to have the numbers double because of one ship.” Unable—or unwilling—to fix the problem, Trump’s instinct is to fix the numbers instead.

The administration has failed on so many different fronts in its handling of the coronavirus, creating the overall impression of sheer mayhem. But there is a common thread that runs through these government malfunctions. Precise, transparent data is crucial in the fight against a pandemic—yet through a combination of ineptness and active manipulation, the government has depleted and corrupted the key statistics that public health officials rely on to protect us.

In mid-July, just when the U.S. was breaking and rebreaking its own records for daily counts of new coronavirus cases, the Centers for Disease Control and Prevention found itself abruptly relieved of its customary duty of collating national numbers on COVID-19 patients. Instead, the Department of Health and Human Services instructed hospitals to funnel their information to the government via TeleTracking, a small Pittsburgh firm started by a real estate entrepreneur who has frequently donated to the Republican Party. For a while, past data disappeared from the CDC’s website entirely, and although it reappeared after an outcry, it was never updated thereafter. The TeleTracking system was riddled with errors, and the newest statistics sometimes appeared after delays. This has severely limited the ability of public health officials to determine where new clusters of COVID-19 are blooming, to notice demographic patterns in the spread of the disease, or to allocate ICU beds to those who need them most.

To make matters more confusing still, Jared Kushner moved to start a separate coronavirus surveillance system run out of the White House and built by health technology giants—burdening already-overwhelmed officials and health care experts with a needless stream of queries. Kushner’s assessments often contradicted those of agencies working on the ground. When Andrew Cuomo, New York’s governor, asked for 30,000 ventilators, Kushner claimed the state didn’t need them: “I’m doing my own projections, and I’ve gotten a lot smarter about this.”…(More)”.

Consumer Bureau To Decide Who Owns Your Financial Data


Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.

The Consumer Financial Protection Bureau (CFPB) is looking to implement the provision of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to consumers’ rights to their own financial data, detailed in section 1033.

The agency has been laying the groundwork on this move for years, from requesting information in 2016 from financial institutions to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.
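To make that risk concrete, here is a purely illustrative contrast (all URLs, field names, and tokens are hypothetical, not any real bank's or aggregator's API) between screen scraping and the kind of tokenized access a section 1033 rule could encourage:

```python
# Hypothetical sketch only -- no real endpoints or credentials.
import requests

# Screen scraping: the data aggregator stores and replays the user's real
# banking credentials, then parses account data out of raw HTML. If the
# aggregator is breached, the credentials themselves are exposed.
session = requests.Session()
session.post("https://bank.example.com/login",
             data={"username": "alice", "password": "alices-real-password"})
accounts_html = session.get("https://bank.example.com/accounts").text

# Tokenized API access: the user grants a scoped, revocable token, and the
# password never leaves the bank. Access can be limited and audited.
response = requests.get("https://api.bank.example.com/v1/accounts",
                        headers={"Authorization": "Bearer <scoped-token>"})
```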

Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.

The Trump administration all but ‘systematically neutered’ the agency.

With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices to ensure consumers have access to consumer financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.

Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.

Covid-19 Data Is a Mess. We Need a Way to Make Sense of It.


Beth Blauer and Jennifer Nuzzo in the New York Times: “The United States is more than eight months into the pandemic and people are back waiting in long lines to be tested as coronavirus infections surge again. And yet there is still no federal standard to ensure testing results are being uniformly reported. Without uniform results, it is impossible to track cases accurately or respond effectively.

We test to identify coronavirus infections in communities. We can tell if we are casting a wide enough net by looking at test positivity — the percentage of people whose results are positive for the virus. The metric tells us whether we are testing enough or if the transmission of the virus is outpacing our efforts to slow it.

If the percentage of tests coming back positive is low, it gives us more confidence that we are not missing a lot of infections. It can also tell us whether a recent surge in cases may be a result of increased testing, as President Trump has asserted, or that cases are rising faster than the rate at which communities are able to test.

But to interpret these results properly, we need a national standard for how these results are reported publicly by each state. And although the Centers for Disease Control and Prevention issue protocols for how to report new cases and deaths, there is no uniform guideline for states to report testing results, which would tell us about the universe of people tested so we know we are doing enough testing to track the disease. (Even the C.D.C. was found in May to be reporting states’ results in a way that presented a misleading picture of the pandemic.)

Without a standard, states are deciding how to calculate positivity rates on their own — and their approaches are very different.

Some states include results from positive antigen-based tests, some states don’t. Some report the number of people tested, while others report only the number of tests administered, which can skew the overall results when people are tested repeatedly (as, say, at colleges and nursing homes)….(More)”
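A small worked example (with invented numbers) of how the two reporting choices described above diverge:

```python
# Invented figures: the same testing activity, reported two ways.
positive_tests = 500
total_tests = 10_000    # every swab counted, including repeat tests
people_tested = 7_000   # each person counted once
people_positive = 480   # a few people tested positive more than once

per_test_rate = positive_tests / total_tests       # 5.0%
per_person_rate = people_positive / people_tested  # ~6.9%

print(f"positivity per test:   {per_test_rate:.1%}")
print(f"positivity per person: {per_person_rate:.1%}")
# Routine retesting of mostly negative groups (colleges, nursing homes)
# inflates the per-test denominator and pushes the per-test rate down.
```

Two states with identical outbreaks could thus report visibly different positivity rates, which is why the authors call for a single national reporting standard.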

For America’s New Mayors, a Chance to Lead with Data


Article by Zachary Markovits and Molly Daniell: “While the presidential race drew much of the nation’s attention this year, voters also chose leaders in 346 mayoral elections, as well as many more city and county commission and council races, reshaping the character of government leadership from coast to coast.

These newly elected and re-elected leaders will enter office facing an unprecedented set of challenges: a worsening pandemic, weakened local economies, budget shortfalls and a reckoning over how government policies have contributed to racial injustice. To help their communities “build back better”—in the words of the new President-elect—these leaders will need not just more federal support, but also a strategy that is data-driven in order to protect their residents and ensure that resources are invested where they are needed most.

For America’s new mayors, it’s a chance to show the public what effective leadership looks like after a chaotic federal response to Covid-19—and no response can be fully effective without putting data at the center of how leaders make decisions.

Throughout 2020, we’ve been documenting the key steps that local leaders can take to advance a culture of data-informed decision-making. Here are five lessons that can help guide these new leaders as they seek to meet this moment of national crisis:

1. Articulate a vision

The voice of the chief executive is galvanizing and unlike any other in city hall. That’s why the vision for data-driven government must be articulated from the top. From the moment they are sworn in, mayors have the opportunity to lean forward and use their authority to communicate to the whole administration, council members and city employees about the shift to using data to drive policymaking.

Consider Los Angeles Mayor Eric Garcetti, who, upon coming into office, spearheaded an internal review process culminating in this memo to all general managers stressing the need for a culture of both continuous learning and performance. In this memo, he creates urgency, articulates precisely what will change and how it will affect the success of the organization as well as build a data-driven culture….(More)”.

Responsible Data Re-Use for COVID-19


“The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, with support from the Henry Luce Foundation, today released guidance to inform decision-making in the responsible re-use of data — re-purposing data for a use other than that for which it was originally intended — to address COVID-19. The findings, recommendations, and a new Responsible Data Re-Use framework stem from The Data Assembly initiative in New York City. An effort to solicit diverse, actionable public input on data re-use for crisis response in the United States, the Data Assembly brought together New York City-based stakeholders from government, the private sector, civic rights and advocacy organizations, and the general public to deliberate on innovative, though potentially risky, uses of data to inform crisis response in New York City. The findings and guidance from the initiative will inform policymaking and practice regarding data re-use in New York City, as well as free data literacy training offerings.

The Data Assembly’s Responsible Data Re-Use Framework provides clarity on a major element of the ongoing crisis. Though leaders throughout the world have relied on data to reduce uncertainty and make better decisions, expectations around the use and sharing of siloed data assets have remained unclear. This summer, along with the New York Public Library and Brooklyn Public Library, The GovLab co-hosted four months of remote deliberations with New York-based civil rights organizations, key data holders, and policymakers. Today’s release is a product of these discussions, to show how New Yorkers and their leaders think about the opportunities and risks involved in the data-driven response to COVID-19….(More)”

See: The Data Assembly Synthesis Report by Andrew Young, Stefaan G. Verhulst, Nadiya Safonova, and Andrew J. Zahuranec