Leveraging Non-Traditional Data For The Covid-19 Socioeconomic Recovery Strategy


Article by Deepali Khanna: “To this end, it is opportune to ask the following questions: Can we harness the power of data routinely collected by companies—including transportation providers, mobile network operators, social media networks and others—for the public good? Can we bridge the data gap to give governments access to data, insights and tools that can inform national and local response and recovery strategies?

There is increasing recognition that traditional and non-traditional data should be seen as complementary resources. Non-traditional data can bring significant benefits in bridging existing data gaps but must still be calibrated against benchmarks based on established traditional data sources. These traditional datasets are widely seen as reliable because they are subject to stringent, well-established international and national standards. However, they are often limited in frequency and granularity, especially in low- and middle-income countries, given the cost and time required to collect such data. For example, official economic indicators such as GDP, household consumption and consumer confidence may be available only up to national or regional level with quarterly updates…

In the Philippines, UNDP, with support from The Rockefeller Foundation and the government of Japan, recently set up the Pintig Lab: a multidisciplinary network of data scientists, economists, epidemiologists, mathematicians and political scientists, tasked with supporting data-driven crisis response and development strategies. In early 2021, the Lab conducted a study which explored how household spending on consumer-packaged goods, or fast-moving consumer goods (FMCGs), can be used to assess the socioeconomic impact of Covid-19 and identify heterogeneities in the pace of recovery across households in the Philippines. The Philippine National Economic Development Agency is now in the process of incorporating this data into its GDP forecasting, as additional input to its predictive models for consumption. Further, this data can be combined with other non-traditional datasets such as credit card or mobile wallet transactions, and machine learning techniques for higher-frequency GDP nowcasting, to allow for more nimble and responsive economic policies that can both absorb and anticipate the shocks of crisis….(More)”.
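The nowcasting idea mentioned above can be sketched in a few lines: a model is trained on the slow-moving official consumption series using higher-frequency non-traditional indicators as predictors, and then produces an estimate for the current quarter before official statistics arrive. The snippet below is a minimal, hedged illustration with synthetic data and hypothetical column names, not the Pintig Lab’s actual pipeline.

```python
# Illustrative sketch of a consumption nowcast from non-traditional indicators.
# All figures are synthetic; column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Monthly non-traditional indicators (e.g., an FMCG spending index, mobile wallet volume)
months = pd.period_range("2019-01", "2021-12", freq="M")
X = pd.DataFrame({
    "fmcg_spend_idx": rng.normal(100, 10, len(months)),
    "mobile_wallet_txn": rng.normal(50, 5, len(months)),
}, index=months)

# Quarterly official household consumption (the traditional, lower-frequency series)
quarters = pd.period_range("2019Q1", "2021Q4", freq="Q")
y_quarterly = pd.Series(rng.normal(1000, 40, len(quarters)), index=quarters)

# Align frequencies: average the monthly indicators within each quarter for training
X_quarterly = X.groupby(X.index.asfreq("Q")).mean()

model = Ridge(alpha=1.0).fit(X_quarterly, y_quarterly)

# Nowcast the current quarter from the months already observed
current_q = X.loc["2021-10":"2021-12"].mean().to_frame().T
print("Nowcast of household consumption:", model.predict(current_q)[0])
```

In practice a nowcasting model would be validated against subsequent official releases and re-estimated as new indicator data arrive; the point here is only the mechanics of mixing frequencies.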

Automation exacts a toll in inequality


Rana Foroohar at The Financial Times: “When humans compete with machines, wages go down and jobs go away. But, ultimately, new categories of better work are created. The mechanisation of agriculture in the first half of the 20th century, or advances in computing and communications technology in the 1950s and 1960s, for example, went hand in hand with strong, broadly shared economic growth in the US and other developed economies.

But, in later decades, something in this relationship began to break down. Since the 1980s, we’ve seen the robotics revolution in manufacturing; the rise of software in everything; the consumer internet and the internet of things; and the growth of artificial intelligence. But during this time trend GDP growth in the US has slowed, inequality has risen and many workers — particularly, men without college degrees — have seen their real earnings fall sharply.

Globalisation and the decline of unions have played a part. But so has technological job disruption. That issue is beginning to get serious attention in Washington. In particular, politicians and policymakers are homing in on the work of MIT professor Daron Acemoglu, whose research shows that mass automation is no longer a win-win for both capital and labour. He testified at a select committee hearing of the US House of Representatives in November that automation — the substitution of machines and algorithms for tasks previously performed by workers — is responsible for 50-70 per cent of the economic disparities experienced between 1980 and 2016.

Why is this happening? Basically, while the automation of the early 20th century and the post-1945 period “increased worker productivity in a diverse set of industries and created myriad opportunities for them”, as Acemoglu said in his testimony, “what we’ve experienced since the mid 1980s is an acceleration in automation and a very sharp deceleration in the introduction of new tasks”. Put simply, he added, “the technological portfolio of the American economy has become much less balanced, and in a way that is highly detrimental to workers and especially low-education workers.”

What’s more, some things we are automating these days aren’t so economically beneficial. Consider those annoying computerised checkout stations in drug stores and groceries that force you to self-scan your purchases. They may save retailers a bit in labour costs, but they are hardly the productivity enhancer of, say, a self-driving combine harvester. Cecilia Rouse, chair of the White House’s Council of Economic Advisers, spoke for many when she told a Council on Foreign Relations event that she’d rather “stand in line [at the pharmacy] so that someone else has a job — it may not be a great job, but it is a job — and where I actually feel like I get better assistance.”

Still, there’s no holding back technology. The question is how to make sure more workers can capture its benefits. In her “Virtual Davos” speech a couple of weeks ago, Treasury secretary Janet Yellen pointed out that recent technologically driven productivity gains might exacerbate rather than mitigate inequality. She pointed to the fact that, while the “pandemic-induced surge in telework” will ultimately raise US productivity by 2.7 per cent, the gains will accrue mostly to upper income, white-collar workers, just as online learning has been better accessed and leveraged by wealthier, white students.

Education is where the rubber meets the road in fixing technology-driven inequality. As Harvard researchers Claudia Goldin and Lawrence Katz have shown, when the relationship between education and technology gains breaks down, tech-driven prosperity is no longer as widely shared. This is why the Biden administration has been pushing investments into community college, apprenticeships and worker training…(More)”.

Suicide hotline shares data with for-profit spinoff, raising ethical questions


Alexandra Levine at Politico: “Crisis Text Line is one of the world’s most prominent mental health support lines, a tech-driven nonprofit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide.

But the data the charity collects from its online text conversations with people in their darkest moments does not end there: The organization’s for-profit spinoff uses a sliced and repackaged version of that information to create and market customer service software.

Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

In turn, Loris has pledged to share some of its revenue with Crisis Text Line. The nonprofit also holds an ownership stake in the company, and the two entities shared the same CEO for at least a year and a half. The two call their relationship a model for how commercial enterprises can help charitable endeavors thrive…(More).”

We Still Can’t See American Slavery for What It Was


Jamelle Bouie at the New York Times: “…It is thanks to decades of painstaking, difficult work that we know a great deal about the scale of human trafficking across the Atlantic Ocean and about the people aboard each ship. Much of that research is available to the public in the form of the SlaveVoyages database. A detailed repository of information on individual ships, individual voyages and even individual people, it is a groundbreaking tool for scholars of slavery, the slave trade and the Atlantic world. And it continues to grow. Last year, the team behind SlaveVoyages introduced a new data set with information on the domestic slave trade within the United States, titled “Oceans of Kinfolk.”

The systematic effort to quantify the slave trade goes back at least as far as the 19th century…

Because of its specificity with regard to individual enslaved people, this new information is as pathbreaking for lay researchers and genealogists as it is for scholars and historians. It is also, for me, an opportunity to think about the difficult ethical questions that surround this work: How exactly do we relate to data that allows someone — anyone — to identify a specific enslaved person? How do we wield these powerful tools for quantitative analysis without abstracting the human reality away from the story? And what does it mean to study something as wicked and monstrous as the slave trade using some of the tools of the trade itself?…

“The data that we have about those ships is also kind of caught in a stranglehold of ship captains who care about some things and don’t care about others,” Jennifer Morgan said. We know what was important to them. It is the task of the historian to bring other resources to bear on this knowledge, to shed light on what the documents, and the data, might obscure.

“By merely reproducing the metrics of slave traders,” Fuentes said, “you’re not actually providing us with information about the people, the humans, who actually bore the brunt of this violence. And that’s important. It is important to humanize this history, to understand that this happened to African human beings.”

It’s here that we must engage with the question of the public. Work like the SlaveVoyages database exists in the “digital humanities,” a frequently public-facing realm of scholarship and inquiry. And within that context, an important part of respecting the humanity of the enslaved is thinking about their descendants.

“If you’re doing a digital humanities project, it exists in the world,” said Jessica Marie Johnson, an assistant professor of history at Johns Hopkins and the author of “Wicked Flesh: Black Women, Intimacy, and Freedom in the Atlantic World.” “It exists among a public that is beyond the academy and beyond Silicon Valley. And that means that there should be certain other questions that we ask, a different kind of ethics of care and a different morality that we bring to things.”…(More)”.

The UN is testing technology that processes data confidentially


The Economist: “Reasons of confidentiality mean that many medical, financial, educational and other personal records, from the analysis of which much public good could be derived, are in practice unavailable. A lot of commercial data are similarly sequestered. For example, firms have more granular and timely information on the economy than governments can obtain from surveys. But such intelligence would be useful to rivals. If companies could be certain it would remain secret, they might be more willing to make it available to officialdom.

A range of novel data-processing techniques might make such sharing possible. These so-called privacy-enhancing technologies (PETs) are still in the early stages of development. But they are about to get a boost from a project launched by the United Nations’ statistics division. The UN PETs Lab, which opened for business officially on January 25th, enables national statistics offices, academic researchers and companies to collaborate to carry out projects which will test various PETs, permitting technical and administrative hiccups to be identified and overcome.

The first such effort, which actually began last summer, before the PETs Lab’s formal inauguration, analysed import and export data from national statistical offices in America, Britain, Canada, Italy and the Netherlands, to look for anomalies. Those could be a result of fraud, of faulty record keeping or of innocuous re-exporting.

For the pilot scheme, the researchers used categories already in the public domain—in this case international trade in things such as wood pulp and clocks. They thus hoped to show that the system would work, before applying it to information where confidentiality matters.
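The kind of consistency check such a pilot might run can be illustrated with mirror statistics: one country’s declared exports of a commodity should roughly match the partner country’s declared imports of the same flow, and large gaps warrant review. The sketch below uses invented figures and an arbitrary 20% threshold; it is not the pilot’s actual methodology, which the excerpt does not describe.

```python
# Hedged sketch of a mirror-statistics anomaly check on bilateral trade data.
# Figures, commodities and the 20% threshold are invented for illustration.
import pandas as pd

trade = pd.DataFrame({
    "commodity":        ["wood pulp", "clocks", "wood pulp", "clocks"],
    "reporter":         ["US", "US", "UK", "UK"],
    "partner":          ["UK", "UK", "US", "US"],
    "exports_reported": [120.0, 45.0, 80.0, 30.0],   # $m, as declared by the reporter
    "imports_reported": [115.0, 70.0, 82.0, 29.0],   # $m, the partner's mirror figure
})

# Relative gap between the two sides of the same flow
trade["gap"] = (trade["exports_reported"] - trade["imports_reported"]).abs() \
               / trade[["exports_reported", "imports_reported"]].max(axis=1)

# Flag flows whose discrepancy exceeds the review threshold
anomalies = trade[trade["gap"] > 0.20]
print(anomalies[["commodity", "reporter", "partner", "gap"]])
```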

They put several kinds of PETs through their paces. In one trial, OpenMined, a charity based in Oxford, tested a technique called secure multiparty computation (SMPC). This approach involves the data to be analysed being encrypted by their keeper and staying on the premises. The organisation running the analysis (in this case OpenMined) sends its algorithm to the keeper, who runs it on the encrypted data. That is mathematically complex, but possible. The findings are then sent back to the original inquirer…(More)”.
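The core idea behind many SMPC protocols, additive secret sharing, can be shown with a toy example: several data keepers jointly compute a sum without any of them revealing its own input. The sketch below is purely conceptual and is not OpenMined’s production protocol (OpenMined maintains dedicated libraries such as PySyft for this).

```python
# Toy illustration of additive secret sharing, the building block of many
# secure multiparty computations. Conceptual only; not a production protocol.
import random

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three statistics offices each hold a confidential figure they will not disclose
secrets = {"office_A": 1200, "office_B": 950, "office_C": 430}

# Each office splits its value and hands one share to every participant
all_shares = {name: share(value, 3) for name, value in secrets.items()}

# Each participant sums only the shares it received (one per office)
partial_sums = [sum(all_shares[name][i] for name in secrets) % PRIME for i in range(3)]

# Combining the partial sums reveals the total, but never any individual input
total = sum(partial_sums) % PRIME
print(total, total == sum(secrets.values()))  # 2580 True
```

Each share on its own is statistically indistinguishable from random noise, which is why no single participant learns anything about another’s raw figure.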

The West already monopolized scientific publishing. Covid made it worse.


Samanth Subramanian at Quartz: “For nearly a decade, Jorge Contreras has been railing against the broken system of scientific publishing. Academic journals are dominated by Western scientists, who not only fill their pages but also work for institutions that can afford the hefty subscription fees to these journals. “These issues have been brewing for decades,” said Contreras, a professor at the University of Utah’s College of Law who specializes in intellectual property in the sciences. “The covid crisis has certainly exacerbated things, though.”

The coronavirus pandemic triggered a torrent of academic papers. By August 2021, at least 210,000 new papers on covid-19 had been published, according to a Royal Society study. Of the 720,000-odd authors of these papers, nearly 270,000 were from the US, the UK, Italy or Spain.

These papers carry research forward, of course—but they also advance their authors’ careers, and earn them grants and patents. But many of these papers are based on data gathered in the global south, by scientists who perhaps don’t have the resources to expand on their research and publish. Such scientists aren’t always credited in the papers their data give rise to; to make things worse, the papers appear in journals that are out of the financial reach of these scientists and their institutes.

These imbalances have, as Contreras said, been a part of the publishing landscape for years. (And it doesn’t occur just in the sciences; economists from the US or the UK, for instance, tend to study countries where English is the most common language.) But the pace and pressures of covid-19 have rendered these iniquities especially stark.

Scientists have paid to publish their covid-19 research—sometimes as much as $5,200 per article. Subscriber-only journals maintain their high fees, running into thousands of dollars a year; in 2020, the Dutch publishing house Elsevier, which puts out journals such as Cell and Gene, reported a profit of nearly $1 billion, at a margin higher than that of Apple or Amazon. And Western scientists are pressing to keep data out of GISAID, a genome database that compels users to acknowledge or collaborate with anyone who deposits the data…(More)”

Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it


Article by Tahira Reid and James Gibert: “People interact with machines in countless ways every day. In some cases, they actively control a device, like driving a car or using an app on a smartphone. Sometimes people passively interact with a device, like being imaged by an MRI machine. And sometimes they interact with machines without consent or even knowing about the interaction, like being scanned by a law enforcement facial recognition system.

Human-Machine Interaction (HMI) is an umbrella term that describes the ways people interact with machines. HMI is a key aspect of researching, designing and building new technologies, and also studying how people use and are affected by technologies.

Researchers, especially those traditionally trained in engineering, are increasingly taking a human-centered approach when developing systems and devices. This means striving to make technology that works as expected for the people who will use it by taking into account what’s known about the people and by testing the technology with them. But even as engineering researchers increasingly prioritize these considerations, some in the field have a blind spot: diversity.

As an interdisciplinary researcher who thinks holistically about engineering and design and an expert in dynamics and smart materials with interests in policy, we have examined the lack of inclusion in technology design, the negative consequences and possible solutions….

It is possible to use a homogenous sample of people in publishing a research paper that adds to a field’s body of knowledge. And some researchers who conduct studies this way acknowledge the limitations of homogenous study populations. However, when it comes to developing systems that rely on algorithms, such oversights can cause real-world problems. Algorithms are only as good as the data used to build them.

Algorithms are often based on mathematical models that capture patterns and then inform a computer about those patterns to perform a given task. Imagine an algorithm designed to detect when colors appear on a clear surface. If the set of images used to train that algorithm consists of mostly shades of red, the algorithm might not detect when a shade of blue or yellow is present…(More)”.
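That failure mode is easy to reproduce in a toy experiment: a “colour detector” fitted almost entirely on reddish examples simply does not recognise yellows it has never seen. The colours, threshold and data below are synthetic placeholders, used only to make the point concrete.

```python
# Toy demonstration of training-data bias: a detector trained mostly on reds
# fails on yellows. All data, colours and thresholds are synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)

def rgb_samples(base, n):
    """Generate n noisy RGB samples around a base colour."""
    return np.clip(np.array(base) + rng.normal(0, 0.05, size=(n, 3)), 0, 1)

# Training data: what the detector has been shown "colour present" looks like
train_colours = np.vstack([
    rgb_samples([0.9, 0.1, 0.1], 480),   # overwhelmingly red examples
    rgb_samples([0.1, 0.1, 0.9], 20),    # only a handful of blue examples
])

# Detector: a new pixel counts as "colour present" if it lies close to a training example
nn = NearestNeighbors(n_neighbors=1).fit(train_colours)

def colour_detected(samples, threshold=0.25):
    dist, _ = nn.kneighbors(samples)
    return dist[:, 0] < threshold

test_reds    = rgb_samples([0.9, 0.1, 0.1], 100)
test_yellows = rgb_samples([0.9, 0.9, 0.1], 100)
print("reds detected:   ", colour_detected(test_reds).mean())     # ~1.0
print("yellows detected:", colour_detected(test_yellows).mean())  # ~0.0
```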

Counting Crimes: An Obsolete Paradigm


Paul Wormeli at The Criminologist: “To the extent that a paradigm is defined as the way we view things, the crime statistics paradigm in the United States is inadequate and requires reinvention….The statement—”not all crime is reported to the police”—lies at the very heart of why our current crime data are inherently incomplete. It is a direct reference to the fact that not all “street crime” is reported and that state and local law enforcement are not the only entities responsible for overseeing violations of societally established norms (“street crime” or otherwise). Two significant gaps exist, in that: 1) official reporting of crime from state and local law enforcement agencies cannot provide insight into unreported incidents, and 2) state and local law enforcement may not have or acknowledge jurisdiction over certain types of matters, such as cybercrime, corruption, environmental crime, or terrorism, and therefore cannot or do not report on those incidents…

All of these gaps in crime reporting mask the portrait of crime in the U.S. If there was a complete accounting of crime that could serve as the basis of policy formulation, including the distribution of federal funds to state and local agencies, there could be a substantial impact across the nation. Such a calculation would move the country toward a more rational basis for determining federal support for communities based on a comprehensive measure of community wellness.

In its deliberations, the NAS Panel recognized that it is essential to consider both the concepts of classification and the rules of counting as we seek a better and more practical path to describing crime in the U.S. and its consequences. The panel postulated that a meaningful classification of incidents found to be crimes would go beyond the traditional emphasis on street crime and include all crime categories.

The NAS study identified the missing elements of a national crime report as including more complete data on crimes involving drug-related offenses, criminal acts where juveniles are involved, so-called white-collar crimes such as fraud and corruption, cybercrime, crime against businesses, environmental crimes, and crimes against animals. Just as one example, it is highly unlikely that we will know the full extent of fraudulent claims against all federal, state, and local governments in the face of the massive influx of funding from recent and forthcoming Congressional action.

In proposing a set of crime classifications, the NAS panel recommended 11 major categories, 5 of which are not addressed in our current crime data collection systems. While there are parallel data systems that collect some of the missing data within these five crime categories, it remains unclear which federal agency, if any, has the authority to gather the information and aggregate it to give us anywhere near a complete estimate of crime in the United States. No federal or national entity has the assignment of estimating the total amount of crime that takes place in the United States. Without such leadership, we are left with an uninformed understanding of the health and wellness of communities throughout the country…(More)”

Artificial intelligence searches for the human touch


Madhumita Murgia at the Financial Times: “For many outside the tech world, “data” means soulless numbers. Perhaps it causes their eyes to glaze over with boredom. Whereas for computer scientists, data means rows upon rows of rich raw matter, there to be manipulated.

Yet the siren call of “big data” has been more muted recently. There is a dawning recognition that, in tech such as artificial intelligence, “data” equals human beings.

AI-driven algorithms are increasingly impinging upon our everyday lives. They assist in making decisions across a spectrum that ranges from advertising products to diagnosing medical conditions. It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.

Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage. By inviting those with lived experiences to participate, both capture the mood among those experiencing the impact of artificial intelligence.

The Ipsos Mori survey found that 60 per cent of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. Latin Americans in particular think AI will trigger changes in social needs such as education and employment, while Chinese respondents were most likely to believe it would change transportation and their homes.

The geographic and demographic differences in both surveys are revealing. Globally, about half said AI technology has more benefits than drawbacks, while two-thirds felt gloomy about its impact on their individual freedom and legal rights. But figures for different countries show a significant split within this. Citizens from the “global south”, a catch-all term for non-western countries, were much more likely to “have a positive outlook on the impact of AI-powered products and services in their lives”. Large majorities in China (76 per cent) and India (68 per cent) said they trusted AI companies. In contrast, only 35 per cent in the UK, France and US expressed similar trust.

In the University of Tokyo study, researchers discovered that women, older people and those with more subject knowledge were most wary of the risks of AI, perhaps an indicator of their own experiences with these systems. The Japanese mathematician Noriko Arai has, for instance, written about sexist and gender stereotypes encoded into “female” carer and receptionist robots in Japan.

The surveys underline the importance of AI designers recognising that we don’t all belong to one homogenous population, with the same understanding of the world. But they’re less insightful about why differences exist….(More)”.

Tech is finally killing long lines


Erica Pandey at Axios: “Startups and big corporations alike are releasing technology to put long lines online.

Why it matters: Standing in lines has always been a hassle, but the pandemic has made lines longer, slower and even dangerous. Now many of those lines are going virtual.

What’s happening: Physical lines are disappearing at theme parks, doctor’s offices, clothing stores and elsewhere, replaced by systems that let you book a slot online and then wait to be notified that it’s your turn.

Whyline, an Argentinian company that was just acquired by the biometric ID company CLEAR, makes an app that lets users do just that — it will keep you up to date on your wait time and let you know when you need to show up.

  • Whyline’s list of clients — mostly in Latin America — includes banks, retail stores, the city of Lincoln, Nebraska, and Los Angeles International Airport.
  • “The same way you make a reservation at a restaurant, Whyline software does the waiting for you in banks, in DMVs, in airports,” CLEAR CEO Caryn Seidman-Becker said on CNBC.

Another app called Safe Queue was born from the pandemic and aims to make in-store shopping safer for customers and workers by spacing out shoppers’ visits.

  • The app uses GPS technology to detect when you’re within 1,000 feet of a participating store and automatically puts you in a virtual line. Then you can wait in your car or somewhere nearby until it’s your turn to shop.
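A minimal sketch of that mechanism might look like the following: a geofence check on each location update, with shoppers appended to a simple first-in, first-out queue once they come within range. The class, names, coordinates and radius conversion below are hypothetical, not Safe Queue’s actual implementation.

```python
# Hedged sketch of a geofenced virtual queue: shoppers join automatically when
# a GPS update places them within ~1,000 feet of a participating store.
import math
from collections import deque

FEET_PER_DEGREE_LAT = 364_000  # rough conversion, adequate near mid-latitudes

def distance_feet(pos1, pos2):
    """Approximate planar distance in feet between two (lat, lon) points."""
    dlat = (pos1[0] - pos2[0]) * FEET_PER_DEGREE_LAT
    dlon = (pos1[1] - pos2[1]) * FEET_PER_DEGREE_LAT * math.cos(math.radians(pos1[0]))
    return math.hypot(dlat, dlon)

class VirtualQueue:
    def __init__(self, store_location, radius_feet=1_000):
        self.store_location = store_location
        self.radius_feet = radius_feet
        self.queue = deque()  # first in, first out

    def on_location_update(self, shopper_id, position):
        """Join the queue automatically when a shopper enters the geofence."""
        if shopper_id not in self.queue and \
           distance_feet(position, self.store_location) <= self.radius_feet:
            self.queue.append(shopper_id)

    def admit_next(self):
        """Return the shopper at the head of the line, to be notified it's their turn."""
        return self.queue.popleft() if self.queue else None

store = VirtualQueue(store_location=(40.7128, -74.0060))
store.on_location_update("shopper-42", (40.7130, -74.0062))  # inside the geofence
print(store.admit_next())  # -> "shopper-42"
```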

Many health clinics around the country are also putting their COVID test lines online…

The rub: While virtual queuing tech may be gaining ground, lines are still more common than not. And in the age of social distancing, expect wait times to remain high and lines to remain long…(More)”.