UK response to pandemic hampered by poor data practices


Report for the Royal Society: “The UK is well behind other countries in making use of data to gain a real-time understanding of the spread and economic impact of the pandemic, according to Data Evaluation and Learning for Viral Epidemics (DELVE), a multi-disciplinary group convened by the Royal Society.

The report, Data Readiness: Lessons from an Emergency, highlights how data such as aggregated and anonymised mobility and payment transaction data, already gathered by companies, could be used to give a more accurate picture of the pandemic at national and local levels.  That could in turn lead to improvements in evaluation and better targeting of interventions.

Maximising the value of big data at a time of crisis requires careful cooperation across the private sector, which is already gathering these data; the public sector, which can provide a base for aggregating and overseeing the correct use of the data; and researchers, who have the skills to analyse it for the public good. This work needs to be developed in accordance with data protection legislation and to respect people’s concerns about data security and privacy.

The report calls on the Government to extend the powers of the Office for National Statistics to enable it to support trustworthy access to ‘happenstance’ data – data that are already gathered but not for a specific public health purpose – and for the Government to fund pathfinder projects that focus on specific policy questions, such as how we nowcast economic metrics and how we better understand population movements.

Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, Senior AI Fellow at The Alan Turing Institute and an author of the report, said: “The UK has talked about making better use of data for the public good, but we have had statements of good intent, rather than action.  We need to plan better for national emergencies. We need to look at the National Risk Register through the lens of what data would help us to respond more effectively. We have to learn our lessons from experiences in this pandemic and be better prepared for future crises.  That means doing the work now to ensure that companies, the public sector and researchers have pathfinder projects up and running to share and analyse data and help the government to make better informed decisions.”  

During the pandemic, counts of the daily flow of people between more than 3,000 districts in Spain have been available at the click of a button, allowing policy makers to understand more effectively how the movement of people contributes to the spread of the virus. This was based on a collaboration between the country’s three main mobile phone operators.  In France, measuring the impact of the pandemic on consumer spending on a daily and weekly scale was possible as a result of coordinated cooperation with the country’s national interbank network.

Professor Lawrence added: “Mobile phone companies might provide a huge amount of anonymised and aggregated data that would allow us a much greater understanding of how people move around, potentially spreading the virus as they go.  And there is a wealth of other data, such as from transport systems. The more we understand about this pandemic, the better we can tackle it. We should be able to work together, the private and the public sectors, to harness big data for massive positive social good and do that safely and responsibly.”…(More)”
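
The “aggregated and anonymised” flows described above can be illustrated with a minimal sketch. The example below is not drawn from the Spanish or French systems; it assumes a hypothetical list of de-identified trips already reduced to origin and destination districts, counts daily origin–destination flows, and withholds small counts before release – a common privacy safeguard for this kind of aggregate.

```python
from collections import Counter

# Hypothetical, already de-identified trip records: (day, origin_district, destination_district).
trips = [
    ("2020-11-02", "district_A", "district_B"),
    ("2020-11-02", "district_A", "district_B"),
    ("2020-11-02", "district_B", "district_A"),
    ("2020-11-02", "district_A", "district_C"),
]

SUPPRESSION_THRESHOLD = 2  # flows smaller than this are withheld from the published aggregate

# Aggregate individual trips into daily origin-destination counts.
flows = Counter((day, origin, destination) for day, origin, destination in trips)

# Keep only flows large enough to publish without revealing individuals.
released = {key: count for key, count in flows.items() if count >= SUPPRESSION_THRESHOLD}

for (day, origin, destination), count in sorted(released.items()):
    print(f"{day}: {origin} -> {destination}: {count}")
```

Only the district-to-district counts leave the operator; individual trips never do, which is what makes this kind of data shareable for public health analysis.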

Climate TRACE


About: “We exist to make meaningful climate action faster and easier by mobilizing the global tech community—harnessing satellites, artificial intelligence, and collective expertise—to track human-caused emissions to specific sources in real time—independently and publicly.

Climate TRACE aims to drive stronger decision-making on environmental policy, investment, corporate sustainability strategy, and more.

WHAT WE DO

01 Monitor human-caused GHG emissions using cutting-edge technologies such as artificial intelligence, machine learning, and satellite image processing.

02 Collaborate with data scientists and emission experts from an array of industries to bring unprecedented transparency to global pollution monitoring.

03 Partner with leaders from the private and public sectors to share valuable insights in order to drive stronger climate policy and strategy.

04 Provide the necessary tools for anyone anywhere to make better decisions to mitigate and adapt to the impacts from climate change… (More)”

Digital Democracy, Social Media and Disinformation


Book by Petros Iosifidis and Nicholas Nicoli: “Digital Democracy, Social Media and Disinformation discusses some of the political, regulatory and technological issues which arise from the increased power of internet intermediaries (such as Facebook, Twitter and YouTube) and the impact of the spread of digital disinformation, especially in the midst of a health pandemic.

The volume provides a detailed account of the main areas surrounding digital democracy, disinformation and fake news, freedom of expression and post-truth politics. It addresses the major theoretical and regulatory concepts of digital democracy and the ‘network society’ before offering potential socio-political and technological solutions for fighting disinformation and fake news. These solutions include self-regulation, rebuttals and myth-busting, news literacy, policy recommendations, awareness and communication strategies, and the potential of recent technologies such as the blockchain and public interest algorithms to counter disinformation.

After addressing what has currently been done to combat disinformation and fake news, the volume argues that digital disinformation needs to be identified as a multifaceted problem, one that requires multiple approaches to resolve. Governments, regulators, think tanks, the academy and technology providers need to take more steps to shape the next internet with as little digital disinformation as possible. By means of a regional analysis, two cases concerning Russia and Ukraine are presented, examining disinformation and the ways it was handled….(More)”

Civic Technologies: Research, Practice and Open Challenges


Paper by Pablo Aragon, Adriana Alvarado Garcia, Christopher A. Le Dantec, Claudia Flores-Saviaga, and Jorge Saldivar: “Over the last years, civic technology projects have emerged around the world to advance open government and community action. Although the Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI) communities have shown a growing interest in researching issues around civic technologies, most research still focuses on projects from the Global North. The goal of this workshop is, therefore, to advance CSCW research by raising awareness of the ongoing challenges and open questions around civic technology and by bridging the gap between researchers and practitioners from different regions.

The workshop will be organized around three central topics: (1) how the local context and infrastructure affect the design, implementation, adoption, and maintenance of civic technology; (2) the key elements of the configuration of trust among government, citizenry, and local organizations, and how these elements change depending on the sociopolitical context where community engagement takes place; and (3) what methods and strategies are best suited for conducting research on civic technologies in different contexts. These core topics will be covered across sessions that will initiate in-depth discussions and thereby stimulate collaboration between the CSCW research community and practitioners of civic technologies from both the Global North and the Global South….(More)”.

Open government data, uncertainty and coronavirus: An infodemiological case study


Paper by Nikolaos Yiannakoulias, Catherine E. Slavik, Shelby L. Sturrock, and J. Connor Darlington: “Governments around the world have made data on COVID-19 testing, case numbers, hospitalizations and deaths openly available, and a breadth of researchers, media sources and data scientists have curated and used these data to inform the public about the state of the coronavirus pandemic. However, it is unclear if all data being released convey anything useful beyond the reputational benefits of governments wishing to appear open and transparent. In this analysis we use Ontario, Canada as a case study to assess the value of publicly available SARS-CoV-2 positive case numbers. Using a combination of real data and simulations, we find that daily publicly available test results probably contain considerable error about individual risk (measured as the proportion of tests that are positive, population-based incidence and prevalence of active cases) and that short-term variations are very unlikely to provide useful information for any plausible decision making on the part of individual citizens. Open government data can increase the transparency and accountability of government; however, it is essential that all publication, use and re-use of these data highlight their weaknesses to ensure that the public is properly informed about the uncertainty associated with SARS-CoV-2 information….(More)”
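
The paper’s central point about short-term variation can be illustrated with a simple simulation. The sketch below is not the authors’ code; it assumes a constant underlying prevalence observed through a fixed number of daily tests, so that all day-to-day movement in the positivity rate comes from sampling noise alone.

```python
import numpy as np

rng = np.random.default_rng(42)

true_prevalence = 0.02   # assumed constant share of positives among those tested
daily_tests = 20_000     # assumed number of test results reported per day
days = 30

# Each day's positive count is a binomial draw: the underlying risk never changes,
# so any variation in the published numbers is sampling noise.
positives = rng.binomial(daily_tests, true_prevalence, size=days)
positivity = positives / daily_tests

relative_daily_change = np.abs(np.diff(positivity)) / positivity[:-1]

print(f"Observed positivity ranges from {positivity.min():.4f} to {positivity.max():.4f}")
print(f"Mean absolute day-to-day change: {relative_daily_change.mean():.1%}")
```

Even with nothing changing in the population, the published positivity rate moves by several percent in relative terms from one day to the next – exactly the kind of short-term variation the authors argue individual citizens cannot usefully act on.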

Homo informaticus


Essay by Luc de Brabandere: “The history of computer science did not begin eighty years ago with the creation of the first electronic computer. To program a computer to process information – or in other words, to simulate thought – we need to be able to understand, dismantle and disassemble thoughts. In IT-speak, in order to encrypt a thought, we must first be able to decrypt it! And this willingness to analyse thought already existed in ancient times. So the principles, laws, and concepts that underlie computer science today originated in an era when the principles of mathematics and logic each started on their own paths, around their respective iconic thinkers, such as Plato and Aristotle. Indeed, the history of computer science could be described as fulfilling the dream of bringing mathematics and logic together. This dream was highlighted for the first time during the thirteenth century by Raymond Lulle, a theologian and missionary from Majorca, but it became the dream of Gottfried Leibniz in particular. This German philosopher wondered why these two fields had evolved side by side separately since ancient times, when both seemed to strive for the same goal. Mathematicians and logicians both wish to establish undeniable truths by fighting against errors of reasoning and implementing precise laws of correct thinking. The Hungarian-born journalist and essayist Arthur Koestler called this shock (because it always is a shock) of an original pairing of two apparently very separate things a bisociation.

We know today that the true and the demonstrable will always remain distinct, so to that extent, logic and mathematics will always remain fundamentally irreconcilable. In this sense Leibniz’s dream will never come true. But three other bisociations, admittedly less ambitious, have proved to be very fruitful, and they structure this short history. The famous Frenchman René Descartes reconciled algebra and geometry; the British logician George Boole brought algebra and logic together; and an American engineer from MIT, Claude Shannon, bisociated binary calculation with electronic relays.

Presented as such, the history of computer science resembles an unexpected remake of Four Weddings and a Funeral! Let’s take a closer look….(More)”.

Tackling Societal Challenges with Open Innovation


Introduction to Special Issue of California Management Review by Anita M. McGahan, Marcel L. A. M. Bogers, Henry Chesbrough, and Marcus Holgersson: “Open innovation includes external knowledge sources and paths to market as complements to internal innovation processes. Open innovation has to date been driven largely by business objectives, but the imperative of social challenges has turned attention to the broader set of goals to which open innovation is relevant. This introduction discusses how open innovation can be deployed to address societal challenges—as well as the trade-offs and tensions that arise as a result. Against this background we introduce the articles published in this Special Section, which were originally presented at the sixth Annual World Open Innovation Conference….(More)”.

Enslaved.org


About: “As of December 2020, we have built a robust, open-source architecture to discover and explore nearly half a million records of people and 5 million data points. From archival fragments and spreadsheet entries, we see the lives of the enslaved in richer detail. Yet there’s much more work to do, and with the help of scholars, educators, and family historians, Enslaved.org will be rapidly expanding in 2021. We are just getting started….

In recent years, a growing number of archives, databases, and collections that organize and make sense of records of enslavement have become freely and readily accessible for scholarly and public consumption. This proliferation of projects and databases presents a number of challenges:

  • Disambiguating and merging individuals across multiple datasets is nearly impossible given their current, siloed nature;
  • Searching, browsing, and quantitative analysis across projects is extremely difficult;
  • It is often difficult to find projects and databases;
  • There are no best practices for digital data creation;
  • Many projects and datasets are in danger of going offline and disappearing.

In response to these challenges, Matrix: The Center for Digital Humanities & Social Sciences at Michigan State University (MSU), in partnership with the MSU Department of History, University of Maryland, and scholars at multiple institutions, developed Enslaved: Peoples of the Historical Slave Trade. Enslaved.org’s primary focus is people—individuals who were enslaved, owned slaves, or participated in slave trading….(More)”.

Review into bias in algorithmic decision-making


Report by the Centre for Data Ethics and Innovation (CDEI) (UK): “Unfair biases, whether conscious or unconscious, can be a problem in many decision-making processes. This review considers the impact that an increasing use of algorithmic tools is having on bias in decision-making, the steps that are required to manage risks, and the opportunities that better use of data offers to enhance fairness. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making…(More)”.
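
A first-pass check for the kind of bias the review examines is simply to compare outcome rates across groups. The sketch below is illustrative only and not taken from the CDEI report; it assumes a hypothetical recruitment dataset with a protected attribute and a binary shortlisting decision, and computes per-group selection rates together with the disparity ratio often used as an initial screening heuristic.

```python
from collections import Counter

# Hypothetical recruitment decisions: (protected_group, shortlisted).
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

applicants = Counter(group for group, _ in decisions)
shortlisted = Counter(group for group, selected in decisions if selected)

selection_rates = {group: shortlisted[group] / applicants[group] for group in applicants}
print("Selection rates:", selection_rates)

# Disparity ratio: lowest selection rate divided by the highest.
# A value well below 1 (0.8 is a common rule of thumb) flags the system for closer review,
# although it is only a starting point, not proof of unfairness.
rates = sorted(selection_rates.values())
print(f"Disparity ratio: {rates[0] / rates[-1]:.2f}")
```

A check like this says nothing about why the rates differ; investigating and acting on such signals is the kind of system-building the review’s recommendations are aimed at.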

Data Disappeared


Essay by Samanth Subramanian: “Whenever President Donald Trump is questioned about why the United States has nearly three times more coronavirus cases than the entire European Union, or why hundreds of Americans are still dying every day, he whips out one standard comment. We find so many cases, he contends, because we test so many people. The remark typifies Trump’s deep distrust of data: his wariness of what it will reveal, and his eagerness to distort it. In March, when he refused to allow coronavirus-stricken passengers off the Grand Princess cruise liner and onto American soil for medical treatment, he explained: “I like the numbers where they are. I don’t need to have the numbers double because of one ship.” Unable—or unwilling—to fix the problem, Trump’s instinct is to fix the numbers instead.

The administration has failed on so many different fronts in its handling of the coronavirus, creating the overall impression of sheer mayhem. But there is a common thread that runs through these government malfunctions. Precise, transparent data is crucial in the fight against a pandemic—yet through a combination of ineptness and active manipulation, the government has depleted and corrupted the key statistics that public health officials rely on to protect us.

In mid-July, just when the U.S. was breaking and rebreaking its own records for daily counts of new coronavirus cases, the Centers for Disease Control and Prevention found itself abruptly relieved of its customary duty of collating national numbers on COVID-19 patients. Instead, the Department of Health and Human Services instructed hospitals to funnel their information to the government via TeleTracking, a small Pittsburgh firm started by a real estate entrepreneur who has frequently donated to the Republican Party. For a while, past data disappeared from the CDC’s website entirely, and although it reappeared after an outcry, it was never updated thereafter. The TeleTracking system was riddled with errors, and the newest statistics sometimes appeared after delays. This has severely limited the ability of public health officials to determine where new clusters of COVID-19 are blooming, to notice demographic patterns in the spread of the disease, or to allocate ICU beds to those who need them most.

To make matters more confusing still, Jared Kushner moved to start a separate coronavirus surveillance system run out of the White House and built by health technology giants—burdening already-overwhelmed officials and health care experts with a needless stream of queries. Kushner’s assessments often contradicted those of agencies working on the ground. When Andrew Cuomo, New York’s governor, asked for 30,000 ventilators, Kushner claimed the state didn’t need them: “I’m doing my own projections, and I’ve gotten a lot smarter about this.”…(More)”.