Open Access Book by Mohammad Amir Anwar and Mark Graham: “As recently as the early 2010s, there were more internet users in countries like France or Germany than in all of Africa put together. But much changed in that decade, and 2018 marked the first year in human history in which a majority of the world’s population was connected to the internet. This mass connectivity means that we have an internet that no longer connects only the world’s wealthy. Workers from Lagos to Johannesburg to Nairobi, and everywhere in between, can now apply for and carry out jobs coming from clients who themselves can be located anywhere in the world. Digital outsourcing firms can now also set up operations in the most unlikely of places in order to tap into hitherto disconnected labour forces. With CEOs in the Global North proclaiming that location is a concern of the past, and governments and civil society in Africa promising to create millions of jobs on the continent, The Digital Continent investigates what this new world of digital work means for the lives of African workers. Anwar and Graham draw on a five-year-long field study in South Africa, Kenya, Nigeria, Ghana, and Uganda, and over 200 interviews conducted with participants including gig workers, call and contact centre workers, self-employed freelancers, business owners, government officials, labour union officials, and industry experts. Focusing on both platform-based remote work and call and contact centre work, the book examines the job-quality implications of digital work for the lives and livelihoods of African workers…(More)”.
Octagon Measurement: Public Attitudes toward AI Ethics
Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI were related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.
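The measurement described above aggregates per-theme survey ratings into a per-scenario “octagon” profile, one value for each of the eight themes. As a rough illustration only (the theme list comes from the abstract, but the response scale, function names, and sample figures here are hypothetical, not the authors’ actual instrument):

```python
from statistics import mean

# The eight AI-ethics themes named in the abstract.
THEMES = [
    "privacy", "accountability", "safety and security",
    "transparency and explainability", "fairness and non-discrimination",
    "human control of technology", "professional responsibility",
    "promotion of human values",
]

def octagon_profile(responses):
    """Average per-theme ratings across respondents for one scenario.

    `responses` is a list of dicts mapping each theme to a rating
    (here, hypothetically, 1 = strong unease about the AI use,
    7 = strong acceptance).
    """
    return {theme: mean(r[theme] for r in responses) for theme in THEMES}

# Two hypothetical respondents rating a single scenario.
sample = [
    {t: 2 for t in THEMES},
    {t: 4 for t in THEMES},
]
profile = octagon_profile(sample)
```

Plotting each theme’s mean on one of eight radial axes would yield the octagon-shaped chart the title alludes to.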
Data Re-Use and Collaboration for Development
Stefaan G. Verhulst at Data & Policy: “It is often pointed out that we live in an era of unprecedented data, and that data holds great promise for development. Yet equally often overlooked is the fact that, as in so many domains, there exist tremendous inequalities and asymmetries in where this data is generated, and how it is accessed. The gap that separates high-income from low-income countries is among the most important (or at least most persistent) of these asymmetries…
Data collaboratives are an emerging form of public-private partnership that, when designed responsibly, can offer a potentially innovative solution to this problem. Data collaboratives offer at least three key benefits for developing countries:
1. Cost Efficiencies: Data and data analytic capacity are often hugely expensive and beyond the limited capacities of many low-income countries. Data reuse, facilitated by data collaboratives, can bring down the cost of data initiatives for development projects.
2. Fresh insights for better policy: Combining data from various sources by breaking down silos has the potential to lead to new and innovative insights that can help policy makers make better decisions. Digital data can also be triangulated with existing, more traditional sources of information (e.g., census data) to generate new insights and help verify the accuracy of information.
3. Overcoming inequalities and asymmetries: Social and economic inequalities, both within and among countries, are often mapped onto data inequalities. Data collaboratives can help ease some of these inequalities and asymmetries, for example by allowing costs and analytical tools and techniques to be pooled. Cloud computing, which allows information and technical tools to be easily shared and accessed, is an important example. It can play a vital role in enabling the transfer of skills and technologies between low-income and high-income countries…(More)”. See also: Reusing data responsibly to achieve development goals (OECD Report).
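To make the triangulation point in (2) concrete: a novel digital data stream becomes far more useful once joined with a traditional source such as census data. A minimal sketch, with entirely hypothetical region names and figures:

```python
# Hypothetical figures for illustration only: a digital data stream
# (mobile-money transaction counts) triangulated with traditional
# census population estimates to yield a comparable per-capita rate.
transactions = {"Region A": 120_000, "Region B": 45_000}
census_population = {"Region A": 300_000, "Region B": 90_000}

def per_capita(stream, census):
    """Join the two sources on region and derive a per-capita indicator."""
    return {
        region: stream[region] / census[region]
        for region in stream.keys() & census.keys()  # regions in both sources
    }

rates = per_capita(transactions, census_population)
print(rates["Region B"])  # 0.5
```

Joining on shared regions also surfaces gaps: a region present in one source but absent from the other simply drops out, flagging where coverage differs between the two datasets.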
How digital transformation is driving economic change
Blog (and book) by Zia Qureshi: “We are living in a time of exciting technological innovations. Digital technologies are driving transformative change. Economic paradigms are shifting. The new technologies are reshaping product and factor markets and profoundly altering business and work. The latest advances in artificial intelligence and related innovations are expanding the frontiers of the digital revolution. Digital transformation is accelerating in the wake of the COVID-19 pandemic. The future is arriving faster than expected.
A recently published book, “Shifting Paradigms: Growth, Finance, Jobs, and Inequality in the Digital Economy,” examines the implications of the unfolding digital metamorphosis for economies and public policy agendas….
Firms at the technological frontier have broken away from the rest, acquiring dominance in increasingly concentrated markets and capturing the lion’s share of the returns from the new technologies. While productivity growth in these firms has been strong, it has stagnated or slowed in other firms, depressing aggregate productivity growth. Increasing automation of low- to middle-skill tasks has shifted labor demand toward higher-level skills, hurting wages and jobs at the lower end of the skill spectrum. With the new technologies favoring capital, winner-take-all business outcomes, and higher-level skills, the distribution of both capital and labor income has tended to become more unequal, and income has been shifting from labor to capital.
One important reason for these outcomes is that policies and institutions have been slow to adjust to the unfolding transformations. To realize the promise of today’s smart machines, policies need to be smarter too. They must be more responsive to change to fully capture potential gains in productivity and economic growth and address rising inequality as technological disruptions create winners and losers.
As technology reshapes markets and alters growth and distributional dynamics, policies must ensure that markets remain inclusive and support wide access to the new opportunities for firms and workers. The digital economy must be broadened to disseminate new technologies and opportunities to smaller firms and wider segments of the labor force…(More)”.
Tech is finally killing long lines
Erica Pandey at Axios: “Startups and big corporations alike are releasing technology to put long lines online.
Why it matters: Standing in lines has always been a hassle, but the pandemic has made lines longer, slower and even dangerous. Now many of those lines are going virtual.
What’s happening: Physical lines are disappearing at theme parks, doctor’s offices, clothing stores and elsewhere, replaced by systems that let you book a slot online and then wait to be notified that it’s your turn.
Whyline, an Argentinian company that was just acquired by the biometric ID company CLEAR, offers an app that lets users do just that: it will keep you up to date on your wait time and let you know when you need to show up.
- Whyline’s list of clients — mostly in Latin America — includes banks, retail stores, the city of Lincoln, Nebraska, and Los Angeles International Airport.
- “The same way you make a reservation at a restaurant, Whyline software does the waiting for you in banks, in DMVs, in airports,” CLEAR CEO Caryn Seidman-Becker said on CNBC.
Another app called Safe Queue was born from the pandemic and aims to make in-store shopping safer for customers and workers by spacing out shoppers’ visits.
- The app uses GPS technology to detect when you’re within 1,000 feet of a participating store and automatically puts you in a virtual line. Then you can wait in your car or somewhere nearby until it’s your turn to shop.
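Under the hood, a geofenced virtual queue of this kind needs little more than a distance check on each location update plus a first-in, first-out line. A minimal sketch, assuming the 1,000-foot radius described above (this is not Safe Queue’s or Whyline’s actual implementation; all names are illustrative):

```python
import math
from collections import deque

EARTH_RADIUS_FT = 20_902_231  # mean Earth radius, in feet

def distance_ft(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_FT * math.asin(math.sqrt(a))

class VirtualQueue:
    def __init__(self, store_lat, store_lon, radius_ft=1000):
        self.store = (store_lat, store_lon)
        self.radius_ft = radius_ft
        self.line = deque()  # FIFO line of user ids

    def on_location_update(self, user_id, lat, lon):
        """Auto-enqueue a shopper once they come within the geofence."""
        if user_id not in self.line and \
                distance_ft(lat, lon, *self.store) <= self.radius_ft:
            self.line.append(user_id)

    def admit_next(self):
        """Return the shopper at the head of the line, or None if empty."""
        return self.line.popleft() if self.line else None
```

A production system would of course add capacity limits, notifications, and expiry for shoppers who wander off, but the geofence-plus-queue core is this simple.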
Many health clinics around the country are also putting their COVID test lines online.
The rub: While virtual queuing tech may be gaining ground, lines are still more common than not. And in the age of social distancing, expect wait times to remain high and lines to remain long…(More)”.
Why people believe misinformation and resist correction
TechPolicyPress: “…In Nature, a team of nine researchers from the fields of psychology, mass media & communication have published a review of available research on the factors that lead people to “form or endorse misinformed views, and the psychological barriers” to changing their minds….
The authors summarize what is known about a variety of drivers of false beliefs, noting that they “generally arise through the same mechanisms that establish accurate beliefs” and the human weakness for trusting the “gut”. For a variety of reasons, people develop shortcuts when processing information, often defaulting to conclusions rather than evaluating new information critically. A complex set of variables related to information sources, emotional factors and a variety of other cues can lead to the formation of false beliefs. And people often share information with little focus on its veracity, but rather to accomplish other goals: from self-promotion to signaling group membership to simply sating a desire to ‘watch the world burn’.
Barriers to belief revision are also complex, since “the original information is not simply erased or replaced” once corrective information is introduced. There is evidence that misinformation can be “reactivated and retrieved” even after an individual receives accurate information that contradicts it. A variety of factors affect whether correct information can win out. One theory looks at how information is integrated in a person’s “memory network”. Another complementary theory looks at “selective retrieval” and is backed up by neuro-imaging evidence…(More)”.
Privacy Is Power: How Tech Policy Can Bolster Democracy
Essay by Andrew Imbrie, Daniel Baer, Andrew Trask, Anna Puglisi, Erik Brattberg, and Helen Toner: “…History is rarely forgiving, but as we adopt the next phase of digital tools, policymakers can avoid the errors of the past. Privacy-enhancing technologies, or PETs, are a collection of technologies with applications ranging from improved medical diagnostics to secure voting systems and messaging platforms. PETs allow researchers to harness big data to solve problems affecting billions of people while also protecting privacy. …
PETs are ripe for coordination among democratic allies and partners, offering a way for them to jointly develop standards and practical applications that benefit the public good. At an AI summit last July, U.S. Secretary of State Antony Blinken noted the United States’ interest in “increasing access to shared public data sets for AI training and testing, while still preserving privacy,” and National Security Adviser Jake Sullivan pointed to PETs as a promising area “to overcome data privacy challenges while still delivering the value of big data.” Given China’s advantages in scale, the United States and like-minded partners should foster emerging technologies that play to their strengths in medical research and discovery, energy innovation, trade facilitation, and reform around money laundering. Driving innovation and collaboration within and across democracies is important not only because it will help ensure those societies’ success but also because there will be a first-mover advantage in the adoption of PETs for governing the world’s private data-sharing networks.
Accelerating the development of PETs for the public good will require an international approach. Democratic governments will not be the trendsetters on PETs; instead, policymakers for these governments should focus on nurturing the ecosystems these technologies need to flourish. The role for policymakers is not to decide the fate of specific protocols or techniques but rather to foster a conducive environment for researchers to experiment widely and innovate responsibly.
First, democracies should identify shared priorities and promote basic research to mature the technological foundations of PETs. The underlying technologies require greater investment in algorithmic development and hardware to optimize the chips and mitigate the costs of network overhead. To support the computational requirements for PETs, for example, the National Science Foundation could create an interface through CloudBank and provide cloud compute credits to researchers without access to these resources. The United States could also help incubate an international network of research universities collaborating on these technologies.
Second, science-funding agencies in democracies should host competitions to incentivize new PETs protocols and standards—the collaboration between the United States and the United Kingdom announced in early December is a good example. The goal should be to create free, open-source protocols and avoid the fragmentation of the market and the proliferation of proprietary standards. The National Institute of Standards and Technology and other similar bodies should develop standards and measurement tools for PETs; governments and companies should form public-private partnerships to fund open-source protocols over the long term. Open-source protocols are especially important in the early days of PET development, because closed-source PET implementations by profit-seeking actors can be leveraged to build data monopolies. For example, imagine a scenario in which all U.S. cancer data is controlled by a single company because every hospital runs its proprietary software, and joining the network means becoming its customer…(More)”.
The Attack of Zombie Science
Article by Natalia Pasternak, Carlos Orsi, Aaron F. Mertz, & Stuart Firestein: “When we think about how science is distorted, we usually think about concepts that have ample currency in public discourse, such as pseudoscience and junk science. Practices like astrology and homeopathy come wrapped in scientific concepts and jargon that can’t meet the methodological requirements of actual sciences. During the COVID-19 pandemic, pseudoscience has had a field day. Bleach, anyone? Bear bile? Yet the pandemic has brought a newer, more subtle form of distortion to light. To the philosophy of science, we humbly submit a new concept: “zombie science.”
We think of zombie science as mindless science. It goes through the motions of scientific research without a real research question to answer; it follows all the correct methodology, but it doesn’t aspire to advance knowledge in the field. Practically all the information about hydroxychloroquine during the pandemic falls into that category, including not just the living dead found in preprint repositories but also papers published in journals that ought to have been caught by a more discerning eye. Journals, after all, invest their reputation in every piece they choose to publish. And every investment in useless science is a net loss.
From a social and historical stance, it seems almost inevitable that the penchant for productivism in the academic and scientific world would end up encouraging zombie science. If those who do not publish perish, then publishing—even nonsense or irrelevancies—is a matter of life or death. The peer-review process and the criteria for editorial importance are filters, for sure, but they are limited. Not only do they get clogged and overwhelmed by excess submissions, but they must also contend with the weaknesses of the human condition, including feelings of personal loyalty, prejudice, and vanity. Additionally, these filters fail, as the proliferation of predatory journals shows us all too well…(More)”.
Making data for good better
Article by Caroline Buckee, Satchit Balsari, and Andrew Schroeder: “…Despite the long-standing excitement about the potential for digital tools, Big Data and AI to transform our lives, these innovations, with some exceptions, have so far had little impact on the greatest public health emergency of our time.
Attempts to use digital data streams to rapidly produce public health insights that were not only relevant for local contexts in cities and countries around the world, but also available to the decision makers who needed them, exposed enormous gaps across the translational pipeline. Insights from novel data streams that could help drive precise, impactful health programs and bring effective aid to communities found limited use among public health and emergency response systems. We share here our experience from the COVID-19 Mobility Data Network (CMDN), now Crisis Ready (crisisready.io), a global collaboration of researchers, mostly infectious disease epidemiologists and data scientists, who served as trusted intermediaries between technology companies willing to share vast amounts of digital data and policy makers struggling to incorporate insights from these novel data streams into their decision making. Through our experience with the Network, and using human mobility data as an illustrative example, we recognize three sets of barriers to the successful application of large digital datasets for the public good.
First, in the absence of pre-established working relationships with technology companies and data brokers, the data remain primarily confined within private circuits of ownership and control. During the pandemic, data sharing agreements between large technology companies and researchers were hastily cobbled together, often without the right kind of domain expertise in the mix. Second, the lack of standardization, interoperability, and information on the uncertainty and biases associated with these data necessitated complex analytical processing by highly specialized domain experts. And finally, local public health departments, understandably unfamiliar with these novel data streams, had neither the bandwidth nor the expertise to sift noise from signal. Ultimately, most efforts did not yield consistently useful information for decision making, particularly in low-resource settings, where capacity limitations in the public sector are most acute…(More)”.
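One concrete instance of the “complex analytical processing” mobility data required: raw daily trip counts are not comparable across regions or over time until they are expressed as change from a pre-crisis baseline for the same weekday. A simplified sketch with hypothetical figures (not CMDN’s actual pipeline):

```python
def change_from_baseline(baseline_by_weekday, observed):
    """Express observed daily trip counts as % change from a pre-crisis
    baseline for the same weekday, a normalization step analysts applied
    before mobility figures could be compared across regions.

    `baseline_by_weekday` maps weekday (0=Mon .. 6=Sun) to typical trips;
    `observed` is a list of (weekday, trips) pairs.
    """
    return [
        round(100 * (trips - baseline_by_weekday[wd]) / baseline_by_weekday[wd], 1)
        for wd, trips in observed
    ]

# Hypothetical figures: weekday baselines and three days of a lockdown week.
baseline = {0: 1000, 1: 1000, 5: 600}
week = [(0, 450), (1, 500), (5, 480)]
print(change_from_baseline(baseline, week))  # [-55.0, -50.0, -20.0]
```

Matching on weekday matters because weekend mobility differs structurally from weekday mobility; comparing a locked-down Saturday against a normal Monday would overstate or understate the change.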
What Works? Developing a global evidence base for public engagement
Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.
The key themes that emerged from stakeholder interviews and workshops were that, in order for evidence about public engagement to help inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences, broaden out how ‘evidence’ is understood and generated, think strategically about how evidence affects and informs practice, and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates…(More)”.