Facial Expressions Do Not Reveal Emotions


Lisa Feldman Barrett at Scientific American: “Do your facial movements broadcast your emotions to other people? If you think the answer is yes, think again. This question is under contentious debate. Some experts maintain that people around the world make specific, recognizable faces that express certain emotions, such as smiling in happiness, scowling in anger and gasping with widened eyes in fear. They point to hundreds of studies that appear to demonstrate that smiles, frowns, and so on are universal facial expressions of emotion. They also often cite Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals to support the claim that universal expressions evolved by natural selection.

Other scientists point to a mountain of counterevidence showing that facial movements during emotions vary too widely to be universal beacons of emotional meaning. People may smile in hatred when plotting their enemy’s downfall and scowl in delight when they hear a bad pun. In Melanesian culture, a wide-eyed gasping face is a symbol of aggression, not fear. These experts say the alleged universal expressions just represent cultural stereotypes. To be clear, both sides in the debate acknowledge that facial movements vary for a given emotion; the disagreement is about whether there is enough uniformity to detect what someone is feeling.

This debate is not just academic; the outcome has serious consequences. Today you can be turned down for a job because a so-called emotion-reading system watching you on camera applied artificial intelligence to evaluate your facial movements unfavorably during an interview. In a U.S. court of law, a judge or jury may sometimes hand down a harsher sentence, even death, if they think a defendant’s face showed a lack of remorse. Children in preschools across the country are taught to recognize smiles as happiness, scowls as anger and other expressive stereotypes from books, games and posters of disembodied faces. And for children on the autism spectrum, some of whom have difficulty perceiving emotion in others, these teachings do not translate to better communication….Emotion AI systems, therefore, do not detect emotions. They detect physical signals, such as facial muscle movements, not the psychological meaning of those signals. The conflation of movement and meaning is deeply embedded in Western culture and in science. An example is a recent high-profile study that applied machine learning to more than six million internet videos of faces. The human raters, who trained the AI system, were asked to label facial movements in the videos, but the only labels they were given to use were emotion words, such as “angry,” rather than physical descriptions, such as “scowling.” Moreover, there was no objective way to confirm what, if anything, the anonymous people in the videos were feeling in those moments…(More)”.
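The labeling problem Barrett describes can be made concrete with a small, purely hypothetical illustration (ours, not the study's actual data or pipeline): the same clip can be annotated with a physical description of the movement or with an emotion word, and a model trained on the emotion words inherits the raters' inferences rather than anything observable in the face.

```python
# Hypothetical illustration only; not the cited study's data or pipeline.
# The same facial movement can be labeled by what the muscles did (observable)
# or by what a rater inferred the person felt (not observable).
from dataclasses import dataclass

@dataclass
class FrameAnnotation:
    clip_id: str
    physical_label: str  # e.g., "scowling" (a movement)
    emotion_label: str   # e.g., "angry" (an inference)

annotations = [
    FrameAnnotation("clip_001", "scowling", "angry"),
    FrameAnnotation("clip_002", "smiling", "happy"),
    FrameAnnotation("clip_003", "smiling", "angry"),  # a smile while plotting revenge
]

# Training targets built from emotion words encode the raters' stereotypes;
# targets built from physical descriptions record only the movement itself.
movement_targets = [a.physical_label for a in annotations]
meaning_targets = [a.emotion_label for a in annotations]
print(movement_targets)
print(meaning_targets)
```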

Citizen power mobilized to fight against mosquito borne diseases


GigaBlog: “Just out in GigaByte is the latest data release from Mosquito Alert, a citizen science system for investigating and managing disease-carrying mosquitoes, published as part of our WHO-sponsored series on vector-borne human diseases. The release presents 13,700 new database records in the Global Biodiversity Information Facility (GBIF) repository, all linked to photographs submitted by citizen volunteers and validated by entomological experts to determine whether they provide evidence of the presence of any of the mosquito vectors of top concern in Europe. This is the latest paper in a new special issue presenting biodiversity data for research on human disease and health, incentivising data sharing to fill important species and geographic gaps. As big fans of citizen science (and Mosquito Alert), it’s great to see this new data showcased in the series.

Vector-borne diseases account for more than 17% of all infectious diseases in humans. There are large gaps in knowledge related to these vectors, and data mobilization campaigns are required to improve data coverage and support research on vector-borne diseases and human health. As part of these efforts, GigaScience Press has partnered with GBIF, supported by TDR, the Special Programme for Research and Training in Tropical Diseases hosted at the World Health Organization, to launch the “Vectors of human disease” thematic series. To incentivise the sharing of this extremely important data, Article Processing Charges have been waived to assist with the global call for novel data. This effort has already led to the release of newly digitised location data for over 600,000 vector specimens observed across the Americas and Europe.

Beyond crediting such a large number of volunteers, building such a large public collection of validated mosquito images allows the dataset to be used to train machine-learning models for vector detection and classification. Sharing the data in this novel manner meant the authors of these papers had to set up a new credit system to evaluate contributions from multiple and diverse collaborators, including university researchers, entomologists, and non-academics such as independent researchers and citizen scientists. In the GigaByte paper these are acknowledged through collaborative authorship for the Mosquito Alert Digital Entomology Network and the Mosquito Alert Community…(More)”.
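Because the records are published through GBIF, they can also be retrieved programmatically. The snippet below is a minimal sketch of such a query against GBIF’s public occurrence-search API; the species filter and the images-only restriction are our own illustrative choices, and the Mosquito Alert dataset key is not reproduced here.

```python
# Minimal sketch: pull image-backed occurrence records for a mosquito vector
# from GBIF's public occurrence-search API. The species name and the
# "images only" filter are illustrative assumptions, not the paper's recipe.
import requests

GBIF_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def fetch_image_records(species: str = "Aedes albopictus", limit: int = 20) -> list:
    """Return occurrence records for `species` that carry at least one photograph."""
    params = {
        "scientificName": species,
        "mediaType": "StillImage",  # keep only records linked to images
        "limit": limit,
    }
    resp = requests.get(GBIF_SEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]

if __name__ == "__main__":
    for rec in fetch_image_records():
        print(rec.get("scientificName"), rec.get("country"), rec.get("eventDate"))
```

Records retrieved this way, together with their validated photographs, are the kind of raw material a vector detection and classification model would be trained on.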

What makes administrative data research-ready?


Paper by Louise Mc Grath-Lone et al: “Administrative data are a valuable research resource, but are under-utilised in the UK due to governance, technical and other barriers (e.g., the time and effort taken to gain secure data access). In recent years, there has been considerable government investment in making administrative data “research-ready”, but there is no definition of what this term means. A common understanding of what constitutes research-ready administrative data is needed to establish clear principles and frameworks for their development and the realisation of their full research potential…Overall, we screened 2,375 records and identified 38 relevant studies published between 2012 and 2021. Most related to administrative data from the UK and US and particularly to health data. The term research-ready was used inconsistently in the literature and there was some conflation with the concept of data being ready for statistical analysis. From the thematic analysis, we identified five defining characteristics of research-ready administrative data: (a) accessible, (b) broad, (c) curated, (d) documented and (e) enhanced for research purposes…
Our proposed characteristics of research-ready administrative data could act as a starting point to help data owners and researchers develop common principles and standards. In the more immediate term, the proposed characteristics are a useful framework for cataloguing existing research-ready administrative databases and relevant resources that can support their development…(More)”.
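As a purely illustrative aid (our sketch, not an instrument proposed by the authors), the five characteristics could be captured as a simple catalogue record so that data owners can flag which criteria a given administrative database already meets; all field names below are assumptions.

```python
# Illustrative sketch: a catalogue entry built around the five proposed
# characteristics of research-ready administrative data. Field names and the
# all-or-nothing readiness check are our own simplifying assumptions.
from dataclasses import dataclass

@dataclass
class ResearchReadyRecord:
    name: str
    accessible: bool   # a clear, secure route exists for researchers to obtain the data
    broad: bool        # wide population and time coverage rather than a narrow extract
    curated: bool      # cleaned, linked, and quality-assured beyond raw operational records
    documented: bool   # metadata, data dictionaries, and provenance are published
    enhanced: bool     # enriched for research purposes (e.g., derived variables, linkage keys)
    notes: str = ""

    def is_research_ready(self) -> bool:
        # Real assessments would likely be graded rather than binary.
        return all([self.accessible, self.broad, self.curated,
                    self.documented, self.enhanced])

example = ResearchReadyRecord(
    name="Hypothetical national hospital admissions extract",
    accessible=True, broad=True, curated=True, documented=True, enhanced=False,
    notes="No research-specific enhancements yet",
)
print(example.name, "research-ready?", example.is_research_ready())
```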

Tech Inclusion for Excluded Communities


Essay by Linda Jakob Sadeh & Smadar Nehab: “Companies often offer practical trainings to address the problem of diversity in high tech, acknowledging the disadvantages that members of excluded communities face and trying to level the playing field in terms of expertise and skills. But such trainings often fail to generate mass participation in tech professions among excluded communities. Beyond the professional knowledge and hands-on technical experience that these trainings provide, the fundamental social, ethnic, and economic barriers often remain unaddressed.

Thus, a paradoxical situation arises: On the one hand, certain communities are excluded from high tech and from the social mobility it affords. On the other hand, even when well-meaning companies wish to hire from these communities and implement diversity and inclusion measures that should make doing so possible, the pool of qualified and interested candidates often remains small. Members of the excluded communities remain discouraged from studying or training for these professions and from joining economic growth sectors, particularly high tech.

Tech Inclusion, the model we advance in this article, seeks to untangle this paradox. It takes a sincere look at the social and economic barriers that prevent excluded communities from participating in the tech industry. It suggests that the technology industry can be a driving force for inclusion if we turn the inclusion paradigm on its head, by bringing the industry to the excluded community, instead of trying to bring the excluded community to the industry, while cultivating a supportive environment for both potential candidates and firms…(More)”.

A Theory of Visionary Disruption


Paper by Joshua S. Gans: “Exploitation of disruptive technologies often requires resource deployment that creates conflict if there are divergent beliefs regarding the efficacy of a new technology. This arises when a visionary agent has more optimistic beliefs about a technological opportunity. Exploration in the form of experiments can be persuasive when beliefs differ by mitigating disagreement and its costs. This paper examines experimental choice when experiments need to persuade as well as inform. It is shown that, due to resource constraints, persuasion factors more highly for entrepreneurial than incumbent firms. However, incumbent firms, despite being able to redeploy resources using authority, are constrained in adoption as exploration cannot mitigate the costs of disagreement…(More)”.
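The core mechanism, that a shared experiment can shrink the gap between divergent beliefs and so reduce the cost of disagreement, can be illustrated with a toy Bayesian sketch. This is our own simplified illustration, not the paper's formal model; the Beta-Bernoulli priors and the experiment's outcome are assumptions chosen for the example.

```python
# Toy illustration (not the paper's model): a public experiment as a shared
# signal that pulls an optimistic "visionary" and a pessimistic incumbent
# toward each other, shrinking the disagreement that makes deployment costly.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    a: float  # Beta-prior pseudo-successes
    b: float  # Beta-prior pseudo-failures

    def belief(self) -> float:
        """Expected probability that the new technology succeeds."""
        return self.a / (self.a + self.b)

    def observe(self, successes: int, trials: int) -> None:
        """Standard Beta-Bernoulli update on a jointly observed experiment."""
        self.a += successes
        self.b += trials - successes

visionary = Agent("visionary", a=8, b=2)   # optimistic prior, mean 0.8
incumbent = Agent("incumbent", a=2, b=8)   # pessimistic prior, mean 0.2

print("disagreement before:", abs(visionary.belief() - incumbent.belief()))
for agent in (visionary, incumbent):       # both see the same 12/20 outcome
    agent.observe(successes=12, trials=20)
print("disagreement after: ", abs(visionary.belief() - incumbent.belief()))
```

Because both agents condition on the same evidence, their posteriors move toward the observed success rate and the gap narrows, which is the sense in which exploration can also persuade.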

Smart Cities and Smart Communities: Empowering Citizens through Intelligent Technologies


Book edited by Srikanta Patnaik, Siddhartha Sen, Sudeshna Ghosh: “Smart City” programs and strategies have become one of the most dominant urban agendas for local governments worldwide in the past two decades. The rapid urbanization rate and unprecedented growth of megacities in the 21st century triggered drastic changes in traditional approaches to urban policy and planning, leading to an influx of digital technology applications for fast and efficient urban management. With the rising popularity of making our cities “smart”, several domains of urban management, urban infrastructure, and urban quality-of-life have seen increasing dependence on advanced information and communication technologies (ICTs) that optimize and control the day-to-day functioning of urban systems. Smart Cities, essentially, act as digital networks that obtain large-scale real-time data on urban systems, process them, and make decisions on how to manage them efficiently. The book presents 26 chapters, which are organized around five topics: (1) Conceptual framework for smart cities and communities; (2) Technical concepts and models for smart city and communities; (3) Civic engagement and citizen participation; (4) Case studies from the Global North; and (5) Case studies from the Global South…(More)”.

Collective Intelligence for Smart Cities


Book by Chun HO WU, George To Sum Ho, Fatos Xhafa, Andrew W. H. IP, Reinout Van Hille: “Collective Intelligence for Smart Cities begins with an overview of the fundamental issues and concepts of smart cities. Surveying the current state-of-the-art research in the field, the book delves deeply into key smart city developments such as health and well-being, transportation, safety, energy, environment and sustainability. In addition, the book focuses on the role of IoT, cloud computing and big data, specifically in smart city development. Users will find a unique, overarching perspective that ties together these concepts based on collective intelligence, a concept for quantifying mass activity familiar to many social science and life science researchers. Sections explore how group decision-making emerges from the consensus of the collective, collaborative and competitive activities of many individuals, along with future perspectives…(More)”.

The Sky’s Not The Limit: How Lower-Income Cities Can Leverage Drones


Report by UNDP: “Unmanned aerial vehicles (UAVs) are playing an important role in last-mile service delivery around the world. However, COVID-19 has highlighted a potentially broader role that UAVs could play – in cities. Higher-income cities are exploring the technology, but there is little documentation of use cases or potential initiatives in a development context. This report provides practical and applied guidance to lower-income cities looking to explore how drones can support key urban objectives…(More)”.

The Need for New Methods to Establish the Social License for Data Reuse


Stefaan G. Verhulst & Sampriti Saxena at Data & Policy: “Data has rapidly emerged as an invaluable asset in societies and economies, leading to growing demands for innovative and transformative data practices. One such practice that has received considerable attention is data reuse. Data reuse is at the forefront of an emerging “third wave of open data” (Verhulst et al., 2020). Data reuse takes place when data collected for one purpose is used subsequently for an alternative purpose, typically with the justification that such secondary use has potential positive social impact (Choo et al., 2021). Since data is considered a non-rivalrous good, it can be used an infinite number of times, each use potentially bringing new insights and solutions to public problems (OECD, 2021). Data reuse can also lead to lower project costs and more sustainable outcomes for a variety of data-enabled initiatives across sectors.

A social license, or social license to operate, captures multiple stakeholders’ acceptance of standard practices and procedures (Kenton, 2021). Stakeholders, in this context, could refer to both the public and private sector, civil society, and perhaps most importantly, the public at large. Although the term originated in the context of extractive industries, it is now applied to a much broader range of businesses including technologies like artificial intelligence (Candelon et al., 2022). As data becomes more commonly compared to exploitative practices like mining, it is only apt that we apply the concept of social licenses to the data ecosystem as well (Aitken et al., 2020).

Before exploring how to achieve social licenses for data reuse, it is important to understand the many factors that affect social licenses….(More)”.

Open data: The building block of 21st century (open) science


Paper by Corina Pascu and Jean-Claude Burgelman: “Given the irreversibility of data-driven and reproducible science, and the role machines will play in it, it is foreseeable that the production of scientific knowledge will be more like a constant flow of updated data-driven outputs than a unique publication or article of some sort. Indeed, the future of scholarly publishing will be based more on the publication of data and insights, with the article as a narrative.

For open data to be valuable, reproducibility is a sine qua non (King, 2011; Piwowar, Vision and Whitlock, 2011) and, equally important since most societal grand challenges require several sciences to work together, essential for interdisciplinarity.

This trend correlates with an epistemic shift already under way in the rationale of science: from demonstrating absolute truth via a unique narrative (article or publication), to reaching the best possible understanding of what, at that moment, is needed to move forward in the production of knowledge to address problem “X” (de Regt, 2017).

Science in the 21st century will thus be more “liquid”: enabled by open science and data practices, supported or even co-produced by artificial intelligence (AI) tools and services, and organized as a continuous flow of knowledge produced and used by (mainly) machines and people. In this paradigm, an article will be the “atomic” entity and often the least important output of the knowledge stream and of scholarship production. Publishing will primarily offer a platform where all parts of the knowledge stream are made available as such via peer review.

The new frontier in open science, and where most future revenue will be made, will be value-added data services (such as mining, intelligence, and networking) for people and machines. The use of AI is on the rise not only in society but also across all aspects of research and science: what can be put into an algorithm will be put there; the machines and deep learning add factor “X.”

AI services for science are already being developed along the research process: data discovery and analysis, and knowledge extraction from research artefacts, are accelerated with the use of AI. AI technologies also help to maximize the efficiency of the publishing process and make peer review more objective (Table 1).

Table 1. Examples of AI services for science already being developed (Abbreviation: AI, artificial intelligence; Source: Authors’ research based on public sources, 2021).

Ultimately, actionable knowledge and translation of its benefits to society will be handled by humans in the “machine era” for decades to come. But as computers are indispensable research assistants, we need to make what we publish understandable to them.

The availability of data that are “FAIR by design”, together with shared Application Programming Interfaces (APIs), will allow new ways of collaboration between scientists and machines to make the best use of digital research objects of any kind. The more findable, accessible, interoperable, and reusable (FAIR) data resources become available, the more it will be possible to use AI to extract and analyze valuable new information. The main challenge is to master the interoperability and quality of research data…(More)”.
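To make the idea of machine-actionable research objects concrete, here is a minimal sketch (our own illustration, under the assumption that the object has a registered DOI) of how a program can retrieve structured metadata by content-negotiating the DOI instead of scraping a landing page. The DOI in the usage example is a placeholder.

```python
# Minimal sketch: resolve a DOI with content negotiation to get structured,
# machine-readable metadata (CSL-JSON) rather than an HTML landing page.
import requests

def fetch_doi_metadata(doi: str) -> dict:
    """Return CSL-JSON metadata for a DOI via the DOI resolver."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    meta = fetch_doi_metadata("10.1234/placeholder")  # replace with a real DOI
    print(meta.get("title"), "|", meta.get("publisher"))
```

Programmatic access of this kind is what allows AI services to discover, mine, and recombine FAIR resources at scale.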