Facial Expressions Do Not Reveal Emotions


Lisa Feldman Barrett at Scientific American: “Do your facial movements broadcast your emotions to other people? If you think the answer is yes, think again. This question is under contentious debate. Some experts maintain that people around the world make specific, recognizable faces that express certain emotions, such as smiling in happiness, scowling in anger and gasping with widened eyes in fear. They point to hundreds of studies that appear to demonstrate that smiles, frowns, and so on are universal facial expressions of emotion. They also often cite Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals to support the claim that universal expressions evolved by natural selection.

Other scientists point to a mountain of counterevidence showing that facial movements during emotions vary too widely to be universal beacons of emotional meaning. People may smile in hatred when plotting their enemy’s downfall and scowl in delight when they hear a bad pun. In Melanesian culture, a wide-eyed gasping face is a symbol of aggression, not fear. These experts say the alleged universal expressions just represent cultural stereotypes. To be clear, both sides in the debate acknowledge that facial movements vary for a given emotion; the disagreement is about whether there is enough uniformity to detect what someone is feeling.

This debate is not just academic; the outcome has serious consequences. Today you can be turned down for a job because a so-called emotion-reading system watching you on camera applied artificial intelligence to evaluate your facial movements unfavorably during an interview. In a U.S. court of law, a judge or jury may sometimes hand down a harsher sentence, even death, if they think a defendant’s face showed a lack of remorse. Children in preschools across the country are taught to recognize smiles as happiness, scowls as anger and other expressive stereotypes from books, games and posters of disembodied faces. And for children on the autism spectrum, some of whom have difficulty perceiving emotion in others, these teachings do not translate to better communication…. Emotion AI systems, therefore, do not detect emotions. They detect physical signals, such as facial muscle movements, not the psychological meaning of those signals. The conflation of movement and meaning is deeply embedded in Western culture and in science. An example is a recent high-profile study that applied machine learning to more than six million internet videos of faces. The human raters, who trained the AI system, were asked to label facial movements in the videos, but the only labels they were given to use were emotion words, such as “angry,” rather than physical descriptions, such as “scowling.” Moreover, there was no objective way to confirm what, if anything, the anonymous people in the videos were feeling in those moments…(More)”.

Barcelona bets on ‘digital twin’ as future of city planning


Article by Aitor Hernández-Morales: “In five years’ time, the structure of Europe’s cities won’t be decided in local town halls but inside a quiet 19th-century chapel in a leafy neighborhood of Barcelona.

Housed in the deconsecrated Torre Girona chapel, the MareNostrum supercomputer — one of the world’s most powerful data processors — is already busily analyzing how to improve city planning in Barcelona.

(Image caption: Barcelona is using data to track access to primary health care centers throughout the city. Credit: BSC)

“We’re using the supercomputer to make sure the urban planning process isn’t just based on clever ideas and good intentions, but on data that allows us to anticipate its impacts and avoid the negative ones,” said Barcelona Deputy Mayor Laia Bonet, who is in charge of the city’s digital transition, climate goals and international partnerships.

As part of a pilot project launched with the Italian city of Bologna earlier this year, Barcelona has created a data-based replica of itself — a digital twin — where it can trial run potential city planning projects.

“Instead of implementing flawed policies and then having to go back and correct them, we’re saving time by making sure those decisions are right before we execute them,” said Bonet.
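The trial-run idea Bonet describes can be caricatured in a few lines of code: score every candidate plan against a model of the city, and only execute plans the model predicts will beat the status quo. Everything below is a hypothetical sketch; the model, its coefficients and the intervention names are invented for illustration and are not taken from Barcelona's actual system.

```python
def simulate(model, intervention):
    """Toy 'digital twin' step: predict an outcome metric (say, average
    pedestrian travel time in minutes) for a candidate intervention.
    The linear model here is purely illustrative."""
    return model["baseline"] + sum(
        model["effects"].get(name, 0.0) * amount
        for name, amount in intervention.items()
    )

# Hypothetical model fitted from city data (all numbers invented).
model = {"baseline": 12.0,
         "effects": {"closed_streets": 0.8, "new_bus_lines": -2.0}}

candidates = [
    {"closed_streets": 4, "new_bus_lines": 0},
    {"closed_streets": 4, "new_bus_lines": 2},
]

# Trial-run every plan on the twin; keep only plans predicted to
# improve on the baseline, discarding the rest before implementation.
viable = [c for c in candidates if simulate(model, c) < model["baseline"]]
print(viable)
```

The point of the sketch is the workflow, not the arithmetic: flawed plans are rejected inside the model rather than corrected on the street afterwards.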

Although the scheme is still in its test phase, Bonet said she expects the city’s high-tech approach to urban development will soon be the norm in cities across the EU.

“Within a five-year horizon I expect to see this as a basic urban planning tool,” she said.

Looking for blind spots

Barcelona’s popular superilles, or “superblocks,” are a prime example of an urban scheme that could have benefited from data modelling in the planning stages, according to Bonet.

Since 2014 the city has been creating mini-neighborhoods where through-traffic and on-street parking are all but banned, with the goal of establishing a “network of green hubs and squares where pedestrians have priority.” The superblocks were also touted as a way to help tackle air pollution, which is directly responsible for over 1,000 deaths in Barcelona each year…(More)”.

Seeking data sovereignty, a First Nation introduces its own licence


Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers. 

The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.

LKFN says the current NWT-wide licensing system will still stand, but that a separate system addressing its specific concerns was urgently required.

In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory. 

LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form. 

“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….

Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.
The ARI application form asks researchers to:

  • share how they plan to release data, to ensure confidentiality;
  • describe their methodology; and
  • indicate which communities they expect to be affected by their work.

The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:

  • explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
  • disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
  • explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.

Cazon says the new approach will work in tandem with ARI’s system…(More)”.

Tech Inclusion for Excluded Communities


Essay by Linda Jakob Sadeh & Smadar Nehab: “Companies often offer practical trainings to address the problem of diversity in high tech, acknowledging the disadvantages that members of excluded communities face and trying to level the playing field in terms of expertise and skills. But such trainings often fail to generate mass participation among excluded communities in tech professions. Beyond the professional knowledge and hands-on technical experience that these trainings provide, the fundamental social, ethnic, and economic barriers often remain unaddressed.

Thus, a paradoxical situation arises: On the one hand, certain communities are excluded from high tech and from the social mobility it affords. On the other hand, even when well-meaning companies wish to hire from these communities and implement diversity and inclusion measures that should make doing so possible, the pool of qualified and interested candidates often remains small. Members of the excluded communities remain discouraged from studying or training for these professions and from joining economic growth sectors, particularly high tech.

Tech Inclusion, the model we advance in this article, seeks to untangle this paradox. It takes a sincere look at the social and economic barriers that prevent excluded communities from participating in the tech industry. It suggests that the technology industry can be a driving force for inclusion if we turn the inclusion paradigm on its head: bringing the industry to the excluded community, rather than trying to bring the excluded community to the industry, while cultivating a supportive environment for both potential candidates and firms…(More)”.

Magic Numbers


Essay by Alana Mohamed: “…The willingness to believe in the “algorithm” as though it were a kind of god is not entirely surprising. New technologies have long been incorporated into spiritual practices, especially during times of mass crisis. In the mid-to-late 19th century, emergent technologies from the lightbulb to the telephone called the limitations of the physical world into question. New spiritual leaders, beliefs, and full-blown religions cropped up, inspired by the invisible electric currents powering scientific developments. If we could summon light and sound by unseen forces, what other invisible specters lurked beneath the surface of everyday life?

The casualties of the U.S. Civil War gave birth to new spiritual practices, including contacting the dead through spirit photography and the telegraph dial. Practices like table rapping used fairly low-tech objects — walls, tables — as conduits to the spirit realm, where ghosts would tap out responses. The rapping noise was reminiscent of Morse code, leading to comparisons with the telegraph. In fact, in 1854, a U.S. senator campaigned for a scientific commission that would establish a “spiritual telegraph” between our world and the spiritual world. (He was unsuccessful.)

William Mumler’s practice of spirit photography is perhaps better known. Mumler claimed that he could photograph a dead relative or loved one when photographing a living subject. His most famous photograph depicts the widowed Mary Todd Lincoln with the shadowy image of her deceased husband holding her shoulder. Though widely debunked as a fraud, the practice itself continued on, even earning a book written in its defense by Sir Arthur Conan Doyle. 

Similar investigations into otherworldly communication and esoteric knowledge would be mainstreamed after World War I, bolstered by the creation of the radio and wireless telegraphy. Amid a boom in table rapping, spirit photography, and the host of usual suspects, Thomas Edison spoke openly about his hopes to create a machine, based on early gramophones, to communicate with the dead, specifically referencing the work of mediums and spiritualists. Radio, in particular, provided a new way to think about the physical and spiritual worlds, with its language of tuning in, channels, frequencies, and wavelengths still employed today…(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page…. The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.

The Era of Borderless Data Is Ending


David McCabe and Adam Satariano at the New York Times: “Every time we send an email, tap an Instagram ad or swipe our credit cards, we create a piece of digital data.

The information pings around the world at the speed of a click, becoming a kind of borderless currency that underpins the digital economy. Largely unregulated, the flow of bits and bytes helped fuel the rise of transnational megacompanies like Google and Amazon and reshaped global communications, commerce, entertainment and media.

Now the era of open borders for data is ending.

France, Austria, South Africa and more than 50 other countries are accelerating efforts to control the digital information produced by their citizens, government agencies and corporations. Driven by security and privacy concerns, as well as economic interests and authoritarian and nationalistic urges, governments are increasingly setting rules and standards about how data can and cannot move around the globe. The goal is to gain “digital sovereignty.”

Consider that:

  • In Washington, the Biden administration is circulating an early draft of an executive order meant to stop rivals like China from gaining access to American data.
  • In the European Union, judges and policymakers are pushing efforts to guard information generated within the 27-nation bloc, including tougher online privacy requirements and rules for artificial intelligence.
  • In India, lawmakers are moving to pass a law that would limit what data could leave the nation of almost 1.4 billion people.
  • The number of laws, regulations and government policies that require digital information to be stored in a specific country more than doubled to 144 from 2017 to 2021, according to the Information Technology and Innovation Foundation.

While countries like China have long cordoned off their digital ecosystems, the imposition of more national rules on information flows is a fundamental shift in the democratic world and alters how the internet has operated since it became widely commercialized in the 1990s.

The repercussions for business operations, privacy and how law enforcement and intelligence agencies investigate crimes and run surveillance programs are far-reaching. Microsoft, Amazon and Google are offering new services to let companies store records and information within a certain territory. And the movement of data has become part of geopolitical negotiations, including a new pact for sharing information across the Atlantic that was agreed to in principle in March…(More)”.
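As a concrete illustration of what “storing records within a certain territory” looks like at the API level, here is a hedged sketch built around the parameter shape of AWS S3’s create_bucket call, whose LocationConstraint field pins a bucket to one region (and thus one legal jurisdiction). The bucket name and region are invented, and the actual boto3 call is left as a comment so the snippet runs without cloud credentials.

```python
def residency_bucket_params(name: str, region: str) -> dict:
    """Build the parameters for an S3 create_bucket request that keeps
    the bucket's data resident in a single, named region."""
    return {
        "Bucket": name,
        "CreateBucketConfiguration": {"LocationConstraint": region},
    }

# Hypothetical bucket for records that must stay inside the EU.
params = residency_bucket_params("health-records-eu", "eu-central-1")

# A real deployment would hand these parameters to the cloud SDK, e.g.:
#   boto3.client("s3", region_name="eu-central-1").create_bucket(**params)
print(params["CreateBucketConfiguration"]["LocationConstraint"])
```

Data-localization laws of the kind the article counts are, in practice, enforced one configuration parameter like this at a time.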

Digital Technology Demands A New Political Philosophy


Essay by Steven Hill: “…It’s not just that digital systems are growing more ubiquitous. They are becoming more capable. Allowing for skepticism of the hype around AI, it is unarguable that computers are increasingly able to do things that we would previously have seen as the sole province of human beings — and in some cases do them better than us. That trend is unlikely to reverse and appears to be speeding up.

The result is that increasingly capable technologies are going to be a fundamental part of 21st-century life. They mediate a growing number of our deeds, utterances and exchanges. Our access to basic social goods — credit, housing, welfare, educational opportunity, jobs — is increasingly determined by algorithms of hidden design and obscure provenance. Computer code has joined market forces, communal tradition and state coercion in the first rank of social forces. We’re in the early stages of the digital lifeworld: a delicate social system that links human beings, powerful machines and abundant data in a swirling web of great complexity.

The political implications are clear to anyone who wants to see them: those who own and control the most powerful digital technologies will increasingly write the rules of society itself. Software engineers are becoming social engineers. The digital is political….

For the last few decades, digital technology has not only been developed, but also regulated, within the same intellectual paradigm: that of market individualism. Within this paradigm, the market is seen not only as a productive source of innovation, but as a reliable regulator of market participants too: a self-correcting ecosystem which can be trusted to contain the worst excesses of its participants.

“The question is not whether Musk or Zuckerberg will make the ‘right’ decision with the power at their disposal — it’s why they are allowed that power at all.”

This way of thinking about technology emphasizes consumer choice (even when that choice is illusory), hostility to government power (but ambivalence about corporate power), and individual responsibility (even at the expense of collective wellbeing). In short, it treats digital technology as a chiefly economic phenomenon to be governed by the rules and norms of the marketplace, and not as a political phenomenon to be governed by the rules and norms of the forum.

The first step in becoming a digital republican is recognizing that this tension — between economics and politics, between capitalism and democracy — is likely to be among the foremost political battlegrounds of the digital age. The second step is to argue that the balance has swung too far to one side, and it is overdue for a correction….(More)”.

Artificial intelligence is breaking patent law


Article by Alexandra George & Toby Walsh: “In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature https://doi.org/ggm2p4; 2020). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced.

Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine. Courts around the world are wrestling with this problem now as patent applications naming an AI system as the inventor have been lodged in more than 100 countries. Several groups are conducting public consultations on AI and intellectual property (IP) law, including in the United States, United Kingdom and Europe.

If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge. Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.

Rather than forcing old patent laws to accommodate new technology, we propose that national governments design bespoke IP law — AI-IP — that protects AI-generated inventions. Nations should also create an international treaty to ensure that these laws follow standardized principles, and that any disputes can be resolved efficiently. Researchers need to inform both steps….(More)”.

We Need to Take Back Our Privacy


Zeynep Tufekci in The New York Times: “…Congress, and states, should restrict or ban the collection of many types of data, especially those used solely for tracking, and limit how long data can be retained for necessary functions — like getting directions on a phone.

Selling, trading and merging personal data should be restricted or outlawed. Law enforcement could obtain it subject to specific judicial oversight.

Researchers have been inventing privacy-preserving methods for analyzing data sets when merging them is in the public interest but the underlying data is sensitive — as when health officials are tracking a disease outbreak and want to merge data from multiple hospitals. These techniques allow computation but make it hard, if not impossible, to identify individual records. Companies are unlikely to invest in such methods, or use end-to-end encryption as appropriate to protect user data, if they can continue doing whatever they want. Regulation could make these advancements good business opportunities and spur innovation.
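One simple member of the family of techniques Tufekci alludes to can be sketched in a few lines: before merging, each hospital replaces raw patient identifiers with a keyed hash (HMAC), so the datasets can be joined on matching tokens without either party handling the other’s raw IDs. This is a minimal toy under invented data, not any specific deployed system; real schemes add key custody, auditing and stronger protections such as differential privacy or secure multiparty computation.

```python
import hmac
import hashlib


def pseudonymize(patient_id: str, key: bytes) -> str:
    """Replace a raw identifier with a keyed hash (HMAC-SHA256).

    With the key held by a trusted party (e.g. a health authority),
    datasets can be joined on the hash without exposing raw IDs."""
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()


def private_merge(records_a, records_b, key: bytes):
    """Join two record sets on pseudonymized IDs, dropping raw IDs."""
    index_b = {pseudonymize(r["id"], key): r for r in records_b}
    merged = []
    for r in records_a:
        token = pseudonymize(r["id"], key)
        if token in index_b:
            merged.append({
                "token": token,
                **{k: v for k, v in r.items() if k != "id"},
                **{k: v for k, v in index_b[token].items() if k != "id"},
            })
    return merged


# Hypothetical records from two hospitals (all data invented).
key = b"secret-held-by-health-authority"
hospital_a = [{"id": "P001", "diagnosis": "flu"},
              {"id": "P002", "diagnosis": "covid"}]
hospital_b = [{"id": "P002", "vaccinated": True},
              {"id": "P003", "vaccinated": False}]

out = private_merge(hospital_a, hospital_b, key)
print(out)
```

Only the overlapping patient appears in the merged output, identified by an opaque token rather than a raw ID: the computation Tufekci describes happens without the merged dataset itself becoming a re-identification tool.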

I don’t think people like things the way they are. When Apple changed a default option from “track me” to “do not track me” on its phones, few people chose to be tracked. And many who accept tracking probably don’t realize how much privacy they’re giving up, and what this kind of data can reveal. Many location collectors get their data from ordinary apps — weather, games, or anything else — that often bury, in vague terms deep in their fine print, the fact that they will share the data with others.

Under these conditions, requiring people to click “I accept” to lengthy legalese for access to functions that have become integral to modern life is a masquerade, not informed consent.

Many politicians have been reluctant to act. The tech industry is generous, cozy with power, and politicians themselves use data analysis for their campaigns. This is all the more reason to press them to move forward…(More)”.