Enslaved.org


About: “As of December 2020, we have built a robust, open-source architecture to discover and explore nearly a half million people records and 5 million data points. From archival fragments and spreadsheet entries, we see the lives of the enslaved in richer detail. Yet there’s much more work to do, and with the help of scholars, educators, and family historians, Enslaved.org will be rapidly expanding in 2021. We are just getting started….

In recent years, a growing number of archives, databases, and collections that organize and make sense of records of enslavement have become freely and readily accessible for scholarly and public consumption. This proliferation of projects and databases presents a number of challenges:

  • Disambiguating and merging individuals across multiple datasets is nearly impossible given their current, siloed nature;
  • Searching, browsing, and quantitative analysis across projects is extremely difficult;
  • It is often difficult to find projects and databases;
  • There are no best practices for digital data creation;
  • Many projects and datasets are in danger of going offline and disappearing.

In response to these challenges, Matrix: The Center for Digital Humanities & Social Sciences at Michigan State University (MSU), in partnership with the MSU Department of History, University of Maryland, and scholars at multiple institutions, developed Enslaved: Peoples of the Historical Slave Trade. Enslaved.org’s primary focus is people—individuals who were enslaved, owned slaves, or participated in slave trading….(More)”.
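
The first challenge listed above, disambiguating and merging individuals across siloed datasets, is at heart a record-linkage problem. The sketch below is purely illustrative: the records, field names, and similarity threshold are hypothetical and are not drawn from Enslaved.org’s actual data model.

    # Hypothetical illustration of why merging people across siloed datasets is hard:
    # the "same" person can appear under variant spellings with sparse metadata.
    from difflib import SequenceMatcher

    dataset_a = [{"name": "Maria Joaquina", "birth_year": 1805, "source": "baptismal register"}]
    dataset_b = [{"name": "Maria Joachina", "birth_year": None, "source": "plantation inventory"}]

    def similar(a: str, b: str, threshold: float = 0.85) -> bool:
        """Crude string similarity; real linkage work uses richer models plus expert review."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    for rec_a in dataset_a:
        for rec_b in dataset_b:
            if similar(rec_a["name"], rec_b["name"]):
                # A candidate match only: with fragmentary, conflicting fields it still
                # takes a scholar comparing the underlying archival sources to confirm it.
                print("possible match:", rec_a["name"], "<->", rec_b["name"])

Even this crude comparison cannot be run while each dataset sits in its own silo with its own schema, which is precisely the gap a shared hub like Enslaved.org aims to close.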

Data Disappeared


Essay by Samanth Subramanian: “Whenever President Donald Trump is questioned about why the United States has nearly three times more coronavirus cases than the entire European Union, or why hundreds of Americans are still dying every day, he whips out one standard comment. We find so many cases, he contends, because we test so many people. The remark typifies Trump’s deep distrust of data: his wariness of what it will reveal, and his eagerness to distort it. In March, when he refused to allow coronavirus-stricken passengers off the Grand Princess cruise liner and onto American soil for medical treatment, he explained: “I like the numbers where they are. I don’t need to have the numbers double because of one ship.” Unable—or unwilling—to fix the problem, Trump’s instinct is to fix the numbers instead.

The administration has failed on so many different fronts in its handling of the coronavirus, creating the overall impression of sheer mayhem. But there is a common thread that runs through these government malfunctions. Precise, transparent data is crucial in the fight against a pandemic—yet through a combination of ineptness and active manipulation, the government has depleted and corrupted the key statistics that public health officials rely on to protect us.

In mid-July, just when the U.S. was breaking and rebreaking its own records for daily counts of new coronavirus cases, the Centers for Disease Control and Prevention found itself abruptly relieved of its customary duty of collating national numbers on COVID-19 patients. Instead, the Department of Health and Human Services instructed hospitals to funnel their information to the government via TeleTracking, a small Pittsburgh firm started by a real estate entrepreneur who has frequently donated to the Republican Party. For a while, past data disappeared from the CDC’s website entirely, and although it reappeared after an outcry, it was never updated thereafter. The TeleTracking system was riddled with errors, and the newest statistics sometimes appeared after delays. This has severely limited the ability of public health officials to determine where new clusters of COVID-19 are blooming, to notice demographic patterns in the spread of the disease, or to allocate ICU beds to those who need them most.

To make matters more confusing still, Jared Kushner moved to start a separate coronavirus surveillance system run out of the White House and built by health technology giants—burdening already-overwhelmed officials and health care experts with a needless stream of queries. Kushner’s assessments often contradicted those of agencies working on the ground. When Andrew Cuomo, New York’s governor, asked for 30,000 ventilators, Kushner claimed the state didn’t need them: “I’m doing my own projections, and I’ve gotten a lot smarter about this.”…(More)”.

Consumer Bureau To Decide Who Owns Your Financial Data


Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.

The Consumer Financial Protection Bureau (CFPB) is looking to implement the provision of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to a consumer’s right to his or her own financial data, detailed in Section 1033.

The agency has been laying the groundwork on this move for years, from requesting information in 2016 from financial institutions to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.

Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.

The Trump administration all but ‘systematically neutered’ the agency.

With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices that ensure consumers have access to their financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.

Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.

Covid-19 Data Is a Mess. We Need a Way to Make Sense of It.


Beth Blauer and Jennifer Nuzzo in the New York Times: “The United States is more than eight months into the pandemic and people are back waiting in long lines to be tested as coronavirus infections surge again. And yet there is still no federal standard to ensure testing results are being uniformly reported. Without uniform results, it is impossible to track cases accurately or respond effectively.

We test to identify coronavirus infections in communities. We can tell if we are casting a wide enough net by looking at test positivity — the percentage of people whose results are positive for the virus. The metric tells us whether we are testing enough or if the transmission of the virus is outpacing our efforts to slow it.

If the percentage of tests coming back positive is low, it gives us more confidence that we are not missing a lot of infections. It can also tell us whether a recent surge in cases may be a result of increased testing, as President Trump has asserted, or that cases are rising faster than the rate at which communities are able to test.

But to interpret these results properly, we need a national standard for how these results are reported publicly by each state. And although the Centers for Disease Control and Prevention issue protocols for how to report new cases and deaths, there is no uniform guideline for states to report testing results, which would tell us about the universe of people tested so we know we are doing enough testing to track the disease. (Even the C.D.C. was found in May to be reporting states’ results in a way that presented a misleading picture of the pandemic.)

Without a standard, states are deciding how to calculate positivity rates on their own — and their approaches are very different.

Some states include results from positive antigen-based tests; others don’t. Some report the number of people tested, while others report only the number of tests administered, which can skew the overall results when people are tested repeatedly (as, say, at colleges and nursing homes)….(More)”
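
As a toy illustration of how much that choice of denominator matters, consider the hypothetical figures below. They are invented purely for illustration, and the per-person rate is simplified by assuming each positive result belongs to a different person.

    # Hypothetical figures for one campus running repeat surveillance testing.
    positive_results = 50
    total_tests_administered = 2_000   # every swab counted, including repeat tests
    unique_people_tested = 800         # each person counted once

    # The reported positivity rate depends entirely on which denominator is used.
    per_test_positivity = positive_results / total_tests_administered
    per_person_positivity = positive_results / unique_people_tested

    print(f"per-test positivity:   {per_test_positivity:.2%}")    # 2.50%
    print(f"per-person positivity: {per_person_positivity:.2%}")  # 6.25%

Same outbreak and the same swabs, yet the reported rate more than doubles depending on the denominator, which is why the authors argue for a single national reporting standard.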

For America’s New Mayors, a Chance to Lead with Data


Article by Zachary Markovits and Molly Daniell: “While the presidential race drew much of the nation’s attention this year, voters also chose leaders in 346 mayoral elections, as well as many more city and county commission and council races, reshaping the character of government leadership from coast to coast.

These newly elected and re-elected leaders will enter office facing an unprecedented set of challenges: a worsening pandemic, weakened local economies, budget shortfalls and a reckoning over how government policies have contributed to racial injustice. To help their communities “build back better”—in the words of the new President-elect—these leaders will need not just more federal support, but also a data-driven strategy to protect their residents and ensure that resources are invested where they are needed most.

For America’s new mayors, it’s a chance to show the public what effective leadership looks like after a chaotic federal response to Covid-19—and no response can be fully effective without putting data at the center of how leaders make decisions.

Throughout 2020, we’ve been documenting the key steps that local leaders can take to advance a culture of data-informed decision-making. Here are five lessons that can help guide these new leaders as they seek to meet this moment of national crisis:

1. Articulate a vision

The voice of the chief executive is galvanizing and unlike any other in city hall. That’s why the vision for data-driven government must be articulated from the top. From the moment they are sworn in, mayors have the opportunity to lean forward and use their authority to communicate to the whole administration, council members and city employees about the shift to using data to drive policymaking.

Consider Los Angeles Mayor Eric Garcetti who, upon coming into office, spearheaded an internal review process culminating in a memo to all general managers stressing the need for a culture of both continuous learning and performance. In that memo, he creates urgency, articulates precisely what will change, and explains how the change will affect the success of the organization and build a data-driven culture….(More)”.

Responsible Data Re-Use for COVID19


“The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, with support from the Henry Luce Foundation, today released guidance to inform decision-making in the responsible re-use of data — re-purposing data for a use other than that for which it was originally intended — to address COVID-19. The findings, recommendations, and a new Responsible Data Re-Use framework stem from The Data Assembly initiative in New York City. An effort to solicit diverse, actionable public input on data re-use for crisis response in the United States, the Data Assembly brought together New York City-based stakeholders from government, the private sector, civil rights and advocacy organizations, and the general public to deliberate on innovative, though potentially risky, uses of data to inform crisis response in New York City. The findings and guidance from the initiative will inform policymaking and practice regarding data re-use in New York City, as well as free data literacy training offerings.

The Data Assembly’s Responsible Data Re-Use Framework provides clarity on a major element of the ongoing crisis. Though leaders throughout the world have relied on data to reduce uncertainty and make better decisions, expectations around the use and sharing of siloed data assets have remained unclear. This summer, along with the New York Public Library and Brooklyn Public Library, The GovLab co-hosted four months of remote deliberations with New York-based civil rights organizations, key data holders, and policymakers. Today’s release is a product of these discussions, to show how New Yorkers and their leaders think about the opportunities and risks involved in the data-driven response to COVID-19….(More)”

See: The Data Assembly Synthesis Report by Andrew Young, Stefaan G. Verhulst, Nadiya Safonova, and Andrew J. Zahuranec

Don’t Fear the Robots, and Other Lessons From a Study of the Digital Economy


Steve Lohr at the New York Times: “L. Rafael Reif, the president of the Massachusetts Institute of Technology, delivered an intellectual call to arms to the university’s faculty in November 2017: Help generate insights into how advancing technology has changed and will change the work force, and what policies would create opportunity for more Americans in the digital economy.

That issue, he wrote, is the “defining challenge of our time.”

Three years later, the task force assembled to address it is publishing its wide-ranging conclusions. The 92-page report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” was released on Tuesday….

Here are four of the key findings in the report:

Most American workers have fared poorly.

It’s well known that those on the top rungs of the job ladder have prospered for decades while wages for average American workers have stagnated. But the M.I.T. analysis goes further. It found, for example, that real wages for men without four-year college degrees have declined 10 to 20 percent since their peak in 1980….

Robots and A.I. are not about to deliver a jobless future.

…The M.I.T. researchers concluded that the change would be more evolutionary than revolutionary. In fact, they wrote, “we anticipate that in the next two decades, industrialized countries will have more job openings than workers to fill them.”…

Worker training in America needs to match the market.

“The key ingredient for success is public-private partnerships,” said Annette Parker, president of South Central College, a community college in Minnesota, and a member of the advisory board to the M.I.T. project.

The schools, nonprofits and corporate-sponsored programs that have succeeded in lifting people into middle-class jobs all echo her point: the need to link skills training to business demand….

Workers need more power, voice and representation.

The report calls for raising the minimum wage, broadening unemployment insurance and modifying labor laws to enable collective bargaining in occupations like domestic and home-care work and freelance work. Such representation, the report notes, could come from traditional unions or worker advocacy groups like the National Domestic Workers Alliance, Jobs With Justice and the Freelancers Union….(More)”

For the Win


Revised and Updated Book by Kevin Werbach and Dan Hunter on “The Power of Gamification and Game Thinking in Business, Education, Government, and Social Impact”: “For thousands of years, we’ve created things called games that tap the tremendous psychic power of fun. In a revised and updated edition of For the Win: The Power of Gamification and Game Thinking in Business, Education, Government, and Social Impact, authors Kevin Werbach and Dan Hunter argue that applying the lessons of gamification could change your business, the way you learn or teach, and even your life.

Werbach and Hunter explain how games can be used as a valuable tool to address serious pursuits like marketing, productivity enhancement, education, innovation, customer engagement, human resources, and sustainability. They reveal how, why, and when gamification works—and what not to do.

Discover the successes—and failures—of organizations that are using gamification:

  • How a South Korean company called Neofect is using gamification to help people recover from strokes;
  • How a tool called SuperBetter has demonstrated significant results treating depression, concussion symptoms, and the mental health harms of the COVID-19 pandemic through game thinking;
  • How the ride-hailing giant Uber once used gamification to influence its drivers to work longer hours than they otherwise wanted to, causing swift backlash.

The story of gamification isn’t fun and games by any means. It’s serious. When used carefully and thoughtfully, gamification produces great outcomes for users, in ways that are hard to replicate through other methods. Other times, companies misuse the “guided missile” of gamification to have people work and do things in ways that are against their self-interest.

This revised and updated edition incorporates the most prominent research findings to provide a comprehensive gamification playbook for the real world….(More)”.

Remaking the Commons: How Digital Tools Facilitate and Subvert the Common Good


Paper by Jessica Feldman: “This scoping paper considers how digital tools, such as ICTs and AI, have failed to contribute to the “common good” in any sustained or scalable way. This is attributed to a problem that is at once political-economic and technical.

Many digital tools’ business models are predicated on advertising: framing the user as an individual consumer-to-be-targeted, not as an organization, movement, or any sort of commons. At the level of infrastructure and hardware, the increased privatization and centralization of transmission and production leads to a dangerous bottlenecking of communication power, and to labor and production practices that are undemocratic and damaging to common resources.

These practices escalate collective action problems, pose a threat to democratic decision making, aggravate issues of economic and labor inequality, and harm the environment and health. At the same time, the growth of both AI and online community formation raises questions around the very definition of human subjectivity and modes of relationality. Based on an operational definition of the common good grounded in ethics of care, sustainability, and redistributive justice, suggestions are made for solutions and further research in the areas of participatory design, digital democracy, digital labor, and environmental sustainability….(More)”

Leveraging Open Data with a National Open Computing Strategy


Policy Brief by Lara Mangravite and John Wilbanks: “Open data mandates and investments in public data resources, such as the Human Genome Project or the U.S. National Oceanic and Atmospheric Administration Data Discovery Portal, have provided essential data sets at a scale not possible without government support. By responsibly sharing data for wide reuse, federal policy can spur innovation inside the academy and in citizen science communities. These approaches are enabled by private-sector advances in cloud computing services and the government has benefited from innovation in this domain. However, the use of commercial products to manage the storage of and access to public data resources poses several challenges.

First, too many cloud computing systems fail to properly secure data against breaches, improperly share copies of data with other vendors, or use data to add to their own secretive and proprietary models. As a result, the public does not trust technology companies to responsibly manage public data—particularly private data of individual citizens. These fears are exacerbated by the market power of the major cloud computing providers, which may limit the ability of individuals or institutions to negotiate appropriate terms. This impacts the willingness of U.S. citizens to have their personal information included within these databases.

Second, open data solutions are springing up across multiple sectors without coordination. The federal government is funding a series of independent programs that are working to solve the same problem, leading to a costly duplication of effort across programs.

Third and most importantly, the high costs of data storage, transfer, and analysis preclude many academics, scientists, and researchers from taking advantage of governmental open data resources. Cloud computing has radically lowered the costs of high-performance computing, but it is still not free. The cost of building the wrong model at the wrong time can quickly run into tens of thousands of dollars.

Scarce resources mean that many academic data scientists are unable or unwilling to spend their limited funds to reuse data in exploratory analyses outside their narrow projects. And citizen scientists must use personal funds, which are especially scarce in communities traditionally underrepresented in research. The vast majority of public data made available through existing open science policy is therefore left unused, either as reference material or as “foreground” for new hypotheses and discoveries….The Solution: Public Cloud Computing…(More)”.