The West already monopolized scientific publishing. Covid made it worse.


Samanth Subramanian at Quartz: “For nearly a decade, Jorge Contreras has been railing against the broken system of scientific publishing. Academic journals are dominated by Western scientists, who not only fill their pages but also work for institutions that can afford the hefty subscription fees to these journals. “These issues have been brewing for decades,” said Contreras, a professor at the University of Utah’s College of Law who specializes in intellectual property in the sciences. “The covid crisis has certainly exacerbated things, though.”

The coronavirus pandemic triggered a torrent of academic papers. By August 2021, at least 210,000 new papers on covid-19 had been published, according to a Royal Society study. Of the 720,000-odd authors of these papers, nearly 270,000 were from the US, the UK, Italy or Spain.

These papers carry research forward, of course—but they also advance their authors’ careers, and earn them grants and patents. Yet many of these papers are based on data gathered in the global south, by scientists who perhaps don’t have the resources to expand on their research and publish. Such scientists aren’t always credited in the papers their data give rise to; to make things worse, the papers appear in journals that are out of the financial reach of these scientists and their institutes.

These imbalances have, as Contreras said, been a part of the publishing landscape for years. (And they don’t occur just in the sciences; economists from the US or the UK, for instance, tend to study countries where English is the most common language.) But the pace and pressures of covid-19 have rendered these inequities especially stark.

Scientists have paid to publish their covid-19 research—sometimes as much as $5,200 per article. Subscriber-only journals maintain their high fees, running into thousands of dollars a year; in 2020, the Dutch publishing house Elsevier, which puts out journals such as Cell and Gene, reported a profit of nearly $1 billion, at a margin higher than that of Apple or Amazon. And Western scientists are pressing to keep data out of GISAID, a genome database that compels users to acknowledge or collaborate with anyone who deposits the data…(More)”

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence


Paper by Erik Brynjolfsson: “In 1950, Alan Turing proposed an “imitation game” as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions are indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like—in fact, many of the most powerful systems are very different from humans—and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers…(More)”

UN chief calls for action to put out ‘5-alarm global fire’


UN Affairs: “At a time when “the only certainty is more uncertainty”, countries must unite to forge a new, more hopeful and equal path, UN Secretary-General António Guterres told the General Assembly on Friday, laying out his priorities for 2022. 

“We face a five-alarm global fire that requires the full mobilization of all countries,” he said, referring to the raging COVID-19 pandemic, a morally bankrupt global financial system, the climate crisis, lawlessness in cyberspace, and diminished peace and security. 

He stressed that countries “must go into emergency mode”, and now is the time to act as the response will determine global outcomes for decades ahead…. 

Alarm four: Technology and cyberspace 

While technology offers extraordinary possibilities for humanity, Mr. Guterres warned that “growing digital chaos is benefiting the most destructive forces and denying opportunities to ordinary people.” 

He spoke of the need to both expand internet access to the nearly three billion people still offline, and to address risks such as data misuse, misinformation and cyber-crime. 

“Our personal information is being exploited to control or manipulate us, change our behaviours, violate our human rights, and undermine democratic institutions. Our choices are taken away from us without us even knowing it”, he said. 

The UN chief called for strong regulatory frameworks to change the business models of social media companies which “profit from algorithms that prioritize addiction, outrage and anxiety at the cost of public safety”. 

He has proposed the establishment of a Global Digital Compact, bringing together governments, the private sector and civil society, to agree on key principles underpinning global digital cooperation. 

Another proposal is for a Global Code of Conduct to end the infodemic and the war on science, and promote integrity in public information, including online.  

Countries are also encouraged to step up work on banning lethal autonomous weapons, or “killer robots” as headline writers may prefer, and to begin considering new governance frameworks for biotechnology and neurotechnology…(More)”.

Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it


Article by Tahira Reid and James Gibert: “People interact with machines in countless ways every day. In some cases, they actively control a device, like driving a car or using an app on a smartphone. Sometimes people passively interact with a device, like being imaged by an MRI machine. And sometimes they interact with machines without consent or even knowing about the interaction, like being scanned by a law enforcement facial recognition system.

Human-Machine Interaction (HMI) is an umbrella term that describes the ways people interact with machines. HMI is a key aspect of researching, designing and building new technologies, and also studying how people use and are affected by technologies.

Researchers, especially those traditionally trained in engineering, are increasingly taking a human-centered approach when developing systems and devices. This means striving to make technology that works as expected for the people who will use it by taking into account what’s known about the people and by testing the technology with them. But even as engineering researchers increasingly prioritize these considerations, some in the field have a blind spot: diversity.

As an interdisciplinary researcher who thinks holistically about engineering and design and an expert in dynamics and smart materials with interests in policy, we have examined the lack of inclusion in technology design, the negative consequences and possible solutions….

It is possible to use a homogenous sample of people in publishing a research paper that adds to a field’s body of knowledge. And some researchers who conduct studies this way acknowledge the limitations of homogenous study populations. However, when it comes to developing systems that rely on algorithms, such oversights can cause real-world problems. Algorithms are only as good as the data used to build them.

Algorithms are often based on mathematical models that capture patterns and then inform a computer about those patterns to perform a given task. Imagine an algorithm designed to detect when colors appear on a clear surface. If the set of images used to train that algorithm consists of mostly shades of red, the algorithm might not detect when a shade of blue or yellow is present…(More)”.
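To make that failure mode concrete, here is a minimal, hypothetical sketch (not from the article): a toy colour detector whose training data consists almost entirely of reddish pixels. The prototype-plus-threshold model and the synthetic data are illustrative assumptions, but the behaviour mirrors the point above — the detector recognizes unseen shades of red and misses blue and yellow entirely.

```python
# Illustrative sketch only: a "colour present" detector trained almost
# exclusively on shades of red. The "model" is a simple prototype (the mean
# of the training pixels) plus a distance threshold -- a stand-in for the
# pattern a real algorithm would capture from its training data.
import numpy as np

rng = np.random.default_rng(0)

# Training pixels labelled "colour present": nearly all reddish (high R, low G/B).
train = np.column_stack([
    rng.uniform(0.6, 1.0, 1000),  # R
    rng.uniform(0.0, 0.3, 1000),  # G
    rng.uniform(0.0, 0.3, 1000),  # B
])

prototype = train.mean(axis=0)  # roughly [0.8, 0.15, 0.15], i.e. "red"
threshold = np.percentile(np.linalg.norm(train - prototype, axis=1), 99)

def colour_detected(pixel):
    """Flag a colour only if the pixel resembles what was seen in training."""
    return np.linalg.norm(np.asarray(pixel) - prototype) <= threshold

print(colour_detected([0.9, 0.1, 0.1]))  # red    -> True  (well represented)
print(colour_detected([0.1, 0.1, 0.9]))  # blue   -> False (never represented)
print(colour_detected([0.9, 0.9, 0.1]))  # yellow -> False (never represented)
```

The same dynamic plays out when the under-represented dimension is not a colour but a skin tone, an accent, or a body type in the training population.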

Why Privacy Matters


Book by Neil Richards: “Many people tell us that privacy is dead, or that it is dying, but such talk is a dangerous fallacy. This book explains what privacy is, what privacy isn’t, and why privacy matters. Privacy is the extent to which human information is known or used, and it is fundamentally about the social power that human information provides over other people. The best way to ensure that power is checked and channeled in ways that benefit humans and their society is through rules—rules about human information. And because human information rules of some sort are inevitable, we should craft our privacy rules to promote human values. The book suggests three such values that our human information rules should promote: identity, freedom, and protection. Identity allows us to be thinking, self-defining humans; freedom lets us be citizens; while protection safeguards our roles as situated consumers and workers, allowing us, as members of society, to trust and rely on other people so that we can live our lives and hopefully build a better future together…(More)”.

Breakthrough: The Promise of Frontier Technologies for Sustainable Development


Book edited by Homi Kharas, John McArthur, and Izumi Ohno: “Looking into the future is always difficult and often problematic—but sometimes it’s useful to imagine what innovations might resolve today’s problems and make tomorrow better. In this book, 15 distinguished international experts examine how technology will affect the human condition and natural world within the next ten years. Their stories reflect major ambitions for what the future could bring and offer a glimpse into the possibilities for achieving the UN’s ambitious Sustainable Development Goals.

The authors were asked to envision future success in their respective fields, given the current state of technology and potential progress over the next decade. The central question driving their research: What are the likely technological advances that could contribute to the Sustainable Development Goals at major scale, affecting the lives of hundreds of millions of people or substantial geographies around the globe?

One overall takeaway is that gradualist approaches will not achieve those goals by 2030. Breakthroughs will be necessary in science, in the development of new products and services, and in institutional systems. Each of the experts responded with stories that reflect big ambitions for what the future may bring. Their stories are not projections or forecasts as to what will happen; they are reasoned and reasonable conjectures about what could happen. The editors’ intent is to provide a glimpse into the possibilities for the future of sustainable development.

At a time when many people worry about stalled progress on the economic, social, and environmental challenges of sustainable development, Breakthrough is a reminder that the promise of a better future is within our grasp, across a range of domains. It will interest anyone who wonders about the world’s economic, social, and environmental future…(More)”

Artificial intelligence searches for the human touch


Madhumita Murgia at the Financial Times: “For many outside the tech world, “data” means soulless numbers. Perhaps it causes their eyes to glaze over with boredom. Whereas for computer scientists, data means rows upon rows of rich raw matter, there to be manipulated.

Yet the siren call of “big data” has been more muted recently. There is a dawning recognition that, in tech such as artificial intelligence, “data” equals human beings.

AI-driven algorithms are increasingly impinging upon our everyday lives. They assist in making decisions across a spectrum that ranges from advertising products to diagnosing medical conditions. It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.

Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage. By inviting those with lived experiences to participate, both capture the mood among those researching the impact of artificial intelligence.

The Ipsos Mori survey found that 60 per cent of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. Latin Americans in particular think AI will trigger changes in social needs such as education and employment, while Chinese respondents were most likely to believe it would change transportation and their homes.

The geographic and demographic differences in both surveys are revealing. Globally, about half said AI technology has more benefits than drawbacks, while two-thirds felt gloomy about its impact on their individual freedom and legal rights. But figures for different countries show a significant split within this. Citizens from the “global south”, a catch-all term for non-western countries, were much more likely to “have a positive outlook on the impact of AI-powered products and services in their lives”. Large majorities in China (76 per cent) and India (68 per cent) said they trusted AI companies. In contrast, only 35 per cent in the UK, France and US expressed similar trust.

In the University of Tokyo study, researchers discovered that women, older people and those with more subject knowledge were most wary of the risks of AI, perhaps an indicator of their own experiences with these systems. The Japanese mathematician Noriko Arai has, for instance, written about sexist and gender stereotypes encoded into “female” carer and receptionist robots in Japan.

The surveys underline the importance of AI designers recognising that we don’t all belong to one homogenous population, with the same understanding of the world. But they’re less insightful about why differences exist….(More)”.

New and updated building footprints


Bing Blogs: “…The Microsoft Maps Team has been leveraging that investment to identify map features at scale and produce high-quality building footprint data sets with the overall goal to add to the OpenStreetMap and MissingMaps humanitarian efforts.

As of this post, the following locations are available and Microsoft offers access to this data under the Open Data Commons Open Database License (ODbL).

Country/Region              Million buildings
United States of America    129.6
Nigeria and Kenya           50.5
South America               44.5
Uganda and Tanzania         17.9
Canada                      11.8
Australia                   11.3

As you might expect, the vintage of the footprints depends on the collection date of the underlying imagery. Bing Maps Imagery is a composite of multiple sources with different capture dates (ranging from 2012 to 2021). To set the right expectation for each building, every footprint has an associated capture-date tag where we could deduce the vintage of the imagery used…(More)”
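As a rough illustration of how such a release might be consumed, here is a hypothetical sketch that groups downloaded footprints by imagery vintage. The file name ("buildings.geojson") and the per-feature property name ("capture_date") are assumptions — the excerpt does not specify them and they vary by release — but the idea is simply that each footprint geometry travels with its metadata.

```python
# Hypothetical sketch: summarise a downloaded footprints file by imagery vintage.
# "buildings.geojson" and the "capture_date" attribute are assumed names; the
# actual files and per-feature properties differ between releases.
import json
from collections import Counter

with open("buildings.geojson") as f:
    features = json.load(f)["features"]

by_year = Counter()
for feat in features:
    date = feat.get("properties", {}).get("capture_date")  # assumed attribute name
    by_year[date[:4] if date else "unknown"] += 1

print(f"{len(features)} footprints")
for year, count in sorted(by_year.items()):
    print(f"  {year}: {count}")
```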

Octagon Measurement: Public Attitudes toward AI Ethics


Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI were related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.

Data Re-Use and Collaboration for Development


Stefaan G. Verhulst at Data & Policy: “It is often pointed out that we live in an era of unprecedented data, and that data holds great promise for development. Yet equally often overlooked is the fact that, as in so many domains, there exist tremendous inequalities and asymmetries in where this data is generated, and how it is accessed. The gap that separates high-income from low-income countries is among the most important (or at least most persistent) of these asymmetries…

Data collaboratives are an emerging form of public-private partnership that, when designed responsibly, can offer a potentially innovative solution to this problem. Data collaboratives offer at least three key benefits for developing countries:

1. Cost Efficiencies: Data and data analytic capacity are often hugely expensive and beyond the limited capacities of many low-income countries. Data reuse, facilitated by data collaboratives, can bring down the cost of data initiatives for development projects.

2. Fresh insights for better policy: Combining data from various sources by breaking down silos has the potential to lead to new and innovative insights that can help policy makers make better decisions. Digital data can also be triangulated with existing, more traditional sources of information (e.g., census data) to generate new insights and help verify the accuracy of information.

3. Overcoming inequalities and asymmetries: Social and economic inequalities, both within and among countries, are often mapped onto data inequalities. Data collaboratives can help ease some of these inequalities and asymmetries, for example by allowing costs and analytical tools and techniques to be pooled. Cloud computing, which allows information and technical tools to be easily shared and accessed, is an important example; it can play a vital role in enabling the transfer of skills and technologies between low-income and high-income countries…(More)”. See also: Reusing data responsibly to achieve development goals (OECD Report).