AI-tocracy


Article by Peter Dizikes: “It’s often believed that authoritarian governments resist technical innovation in a way that ultimately weakens them both politically and economically. But a more complicated story emerges from a new study on how China has embraced AI-driven facial recognition as a tool of repression. 

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI,” says coauthor Martin Beraja, an MIT economist. Not only has use of the technology apparently worked to suppress dissent, but it has spurred software development. The scholars call this mutually reinforcing situation an “AI-tocracy.” 

In fact, they found, firms granted a government contract for facial-recognition technologies produced about 49% more software products in the two years after winning the contract than before. “We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Adding it all up, the case of China indicates how autocratic governments can potentially find their political power enhanced, rather than upended, when they harness technological advances—and even generate more economic growth than they would have otherwise…(More)”.

India’s persistent, gendered digital divide


Article by Caiwei Chen: “In a society where women, especially unmarried girls, still have to fight to own a smartphone, would men — and institutional patriarchy — really be willing to share political power?

In September, the Indian government passed a landmark law, under which a third of the seats in the lower house and state assemblies would be reserved for women. Amid the euphoria of celebrating this development, a somewhat cynical question I’ve been thinking about is: Why do only 31% of women own a mobile phone in India compared to over 60% of men? This in a country that is poised to have 1 billion smartphone users by 2026.

It’s not that the euphoria is without merit. Twenty-seven years after the idea was first birthed, the Narendra Modi government was able to excavate the issue out of the deep freeze and breathe it back into life. The execution of the quota will still take a few years as it has been linked to the redrawing of constituency boundaries.

But in the meantime, as women, we should brace ourselves for the pushbacks — small and big — that will come our way.

In an increasingly wired world, this digital divide has real-life consequences.  

The gender gap — between men and women, boys and girls — isn’t only about cellular phones and internet access. This inequity perfectly encapsulates all the other biases that India’s women have had to contend with — from a disparity in education opportunities to overzealous moral policing. It is about denying women power — and even bodily autonomy…(More)”.

Wastewater monitoring: ‘the James Webb Telescope for population health’


Article by Exemplars News: “When the COVID-19 pandemic triggered a lockdown across Bangladesh and her research on environmental exposure to heavy metals became impossible to continue, Dr. Rehnuma Haque began a search for some way she could contribute to the pandemic response.

“I knew I had to do something during COVID,” said Dr. Haque, a research scientist at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b). “I couldn’t just sit at home.”

Then she stumbled upon articles on early wastewater monitoring efforts for COVID in Australia, the Netherlands, Italy, and the United States. “When I read those papers, I was so excited,” said Dr. Haque. “I emailed my supervisor, Dr. Mahbubur Rahman, and said, ‘Can we do this?’”

Two months later, in June 2020, Dr. Haque and her colleagues launched one of the earliest and most robust national wastewater surveillance programs for COVID in a low- or middle-income country (LMIC).

The initiative, which now also monitors for cholera, salmonella, and rotavirus and may soon expand to norovirus and antibiotic resistance, demonstrates the power and potential of wastewater surveillance as a low-cost tool for obtaining meaningful, real-time health data at scale, identifying emerging risks, and guiding public health responses.

“It is improving public health outcomes,” said Dr. Haque. “We can see everything going on in the community through wastewater surveillance. You can find everything you are looking for and then prepare a response.”

A single wastewater sample can yield representative data about an entire ward, town, or county and allow LMICs to monitor for emerging pathogens. Compared with clinical monitoring, wastewater data are easier and cheaper to collect, can capture asymptomatic and presymptomatic infections, raise fewer ethical concerns, are more inclusive and less prone to sampling bias, can yield a broader range of data, and are unrivaled at quickly generating population-level data…(More)” – See also: The #Data4Covid19 Review

What can harnessing ‘positive deviance’ methods do for food security?


Article by Katrina J. Lane: “What the researchers identified in Niger is known as “positive deviance”. It’s a concept that originated in 1991 during a nutrition program in Vietnam run by Save the Children. Instead of focusing on the population level, project managers studied outliers in the system: children who were healthier than their peers despite sharing similar circumstances. They then looked at what those children’s parents did differently.

Once the beneficial practices were identified — in this case, that included collecting wild foods, such as crab, shrimp, and sweet potato tops for their children — they encouraged mothers to tell other parents. Through this outlier-centric approach, the project was able to reduce malnourishment by 74%.

“The positive deviance approach assumes that in every community there are individuals or groups that develop uncommon behaviors or practices which help them cope better with the challenges they face than their peers,” said Boy.

It’s important to be respectful and acknowledge success stories already present in systems, added Duncan Green, a strategic adviser for Oxfam and a professor in practice in international development at the London School of Economics.

Positive deviance emphasizes the benefit of identifying and amplifying these “deviant behaviors”, as they hold the potential to generate scalable solutions that can benefit the entire community.

It can be broken down into three steps: first, identifying high-performing individuals or groups within a challenging context; next, investigating within the community, through in-person interviews, group discussions, and questionnaires, to find out what those individuals do differently; and finally, encouraging the solutions identified to spread throughout the community.
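
To make the first step concrete, here is a minimal sketch of how positive deviants might be flagged in survey data, assuming a hypothetical household dataset with a nutrition outcome and a few shared-circumstance covariates (the file and column names are illustrative, not drawn from the program described above):

```python
# Minimal sketch: flag "positive deviants" whose outcomes beat what their
# circumstances predict. Dataset and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("household_survey.csv")  # hypothetical survey extract

# Model the expected nutrition outcome given circumstances households share.
model = smf.ols(
    "weight_for_age_z ~ household_income + mother_education_years + C(village)",
    data=df,
).fit()

# Positive deviants: children doing markedly better than predicted for their
# circumstances (the +1 residual standard deviation cutoff is arbitrary).
df["residual"] = df["weight_for_age_z"] - model.predict(df)
deviants = df[df["residual"] > df["residual"].std()]
print(f"{len(deviants)} candidate positive-deviant households to interview")
```

The second and third steps remain qualitative; a screen like this only narrows down whom to interview about their practices.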

In the final stage, the approach relies on community-generated solutions — which Green explains are more likely to propagate and be engaged with…(More)”.

AI By the People, For the People


Article by Billy Perrigo, reporting from Karnataka: “…To create an effective English-speaking AI, it is enough to simply collect data from where it has already accumulated. But for languages like Kannada, you need to go out and find more.

This has created huge demand for datasets—collections of text or voice data—in languages spoken by some of the poorest people in the world. Part of that demand comes from tech companies seeking to build out their AI tools. Another big chunk comes from academia and governments, especially in India, where English and Hindi have long held outsize precedence in a nation of some 1.4 billion people with 22 official languages and at least 780 more indigenous ones. This rising demand means that hundreds of millions of Indians are suddenly in control of a scarce and newly valuable asset: their mother tongue.

Data work—creating or refining the raw material at the heart of AI— is not new in India. The economy that did so much to turn call centers and garment factories into engines of productivity at the end of the 20th century has quietly been doing the same with data work in the 21st. And, like its predecessors, the industry is once again dominated by labor arbitrage companies, which pay wages close to the legal minimum even as they sell data to foreign clients for a hefty mark-up. The AI data sector, worth over $2 billion globally in 2022, is projected to rise in value to $17 billion by 2030. Little of that money has flowed down to data workers in India, Kenya, and the Philippines.

These conditions may cause harms far beyond the lives of individual workers. “We’re talking about systems that are impacting our whole society, and workers who make those systems more reliable and less biased,” says Jonas Valente, an expert in digital work platforms at Oxford University’s Internet Institute. “If you have workers with basic rights who are more empowered, I believe that the outcome—the technological system—will have a better quality as well.”

In the neighboring villages of Alahalli and Chilukavadi, one Indian startup is testing a new model. Chandrika works for Karya, a nonprofit launched in 2021 in Bengaluru (formerly Bangalore) that bills itself as “the world’s first ethical data company.” Like its competitors, it sells data to big tech companies and other clients at the market rate. But instead of keeping much of that cash as profit, it covers its costs and funnels the rest toward the rural poor in India. (Karya partners with local NGOs to ensure access to its jobs goes first to the poorest of the poor, as well as historically marginalized communities.) In addition to its $5 hourly minimum, Karya gives workers de-facto ownership of the data they create on the job, so whenever it is resold, the workers receive the proceeds on top of their past wages. It’s a model that doesn’t exist anywhere else in the industry…(More)”.

China’s new AI rules protect people — and the Communist Party’s power


Article by Johanna M. Costigan: “In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is handled to how users interact with generative AI such as chatbots. 

Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data. 

These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers enjoy. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.

Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them…(More)”.

If good data is key to decarbonization, more than half of Asia’s economies are being locked out of progress, this report says


Blog by Ewan Thomson: “If measuring something is the first step towards understanding it, and understanding something is necessary to be able to improve it, then good data is the key to unlocking positive change. This is particularly true in the energy sector as it seeks to decarbonize.

But some countries have a data problem, according to energy think tank Ember and climate solutions enabler Subak’s Asia Data Transparency Report 2023, and this lack of open and reliable power-generation data is holding back the speed of the clean power transition in the region.

Asia is responsible for around 80% of global coal consumption, making it a big contributor to carbon emissions. Progress is being made on reducing these emissions, but without reliable data on power generation, measuring the rate of this progress will be challenging.

These charts show how different Asian economies are faring on data transparency on power generation and what can be done to improve both the quality and quantity of the data.

[Infographic: number of economies by overall transparency score. Over half of Asian countries lack reliable data in their power sectors, Ember says. Image: Ember]

There are major data gaps in 24 out of the 39 Asian economies covered in the Ember research. This means it is unclear whether the energy needs of the nearly 700 million people in these 24 economies are being met with renewables or fossil fuels…(More)”.

Air-Pollution Knowledge Is Power


Article by Chana R. Schoenberger: “What happens when people in countries where the government offers little pollution monitoring learn that the air quality is dangerous? A new study details how the US Embassy in Beijing began to monitor the Chinese capital’s air-pollution levels and tweet about them in 2008. The program later extended to other US embassies in cities around the world. The practice led to a measurable decline in air pollution in those cities, few of which had local pollution monitoring before, the researchers found.

The paper’s authors, Akshaya Jha, an assistant professor of economics and public policy at Carnegie Mellon University, and Andrea La Nauze, a lecturer at the School of Economics at the University of Queensland, used satellite data to compare pollution levels, measured annually. The researchers found that the level of air pollution went down after the local US embassy began tweeting pollution numbers from monitoring equipment that diplomatic personnel had installed.

The embassy program yielded a drop in fine-particulate concentration levels of 2 to 4 micrograms per cubic meter, leading to a decline in premature mortality worth $127 million for the median city in 2019. “Our findings point to the substantial benefits of improving the availability and salience of air-quality information in low- and middle-income countries,” Jha and La Nauze write.
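
The paper’s own specification is not reproduced here, but a rough sketch of this kind of before-and-after comparison across cities, assuming a hypothetical city-year panel of satellite-derived PM2.5 estimates (file and column names are invented for illustration), might look like this:

```python
# Rough sketch of a before/after comparison across cities; not the authors'
# code. Hypothetical panel columns: city, year, pm25, monitor_start_year.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("city_pm25_panel.csv")  # hypothetical satellite-derived panel

# 1 once the local US embassy monitor is tweeting in that city, 0 before,
# and always 0 for cities that never received a monitor.
panel["post_monitoring"] = (
    panel["monitor_start_year"].notna()
    & (panel["year"] >= panel["monitor_start_year"])
).astype(int)

# City and year fixed effects absorb stable differences between cities and
# common yearly trends; the coefficient estimates the post-monitoring change.
model = smf.ols("pm25 ~ post_monitoring + C(city) + C(year)", data=panel).fit()
print(model.params["post_monitoring"])  # change in annual PM2.5, in µg/m³
```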

News coverage of the US government’s Beijing pollution monitoring sparked the researchers’ interest, La Nauze says. At the time, American diplomats were quoted saying that the embassy’s tweets led to marked changes in pollution levels in Beijing. When the researchers learned that the US State Department had extended the program to embassies around the world, they thought there might be a way to evaluate the diplomats’ claims empirically.

A problem the researchers confronted was how to quantify the impact of measuring something that had never been measured before…(More)” – See also: US Embassy Air-Quality Tweets Led to Global Health Benefits

Gaming Public Opinion


Article by Albert Zhang , Tilla Hoja & Jasmine Latimore: “The Chinese Communist Party’s (CCP’s) embrace of large-scale online influence operations and spreading of disinformation on Western social-media platforms has escalated since the first major attribution from Silicon Valley companies in 2019. While Chinese public diplomacy may have shifted to a softer tone in 2023 after many years of wolf-warrior online rhetoric, the Chinese Government continues to conduct global covert cyber-enabled influence operations. Those operations are now more frequent, increasingly sophisticated and increasingly effective in supporting the CCP’s strategic goals. They focus on disrupting the domestic, foreign, security and defence policies of foreign countries, and most of all they target democracies.

Currently—in targeted democracies—most political leaders, policymakers, businesses, civil society groups and publics have little understanding of how the CCP currently engages in clandestine activities online in their countries, even though this activity is escalating and evolving quickly. The stakes are high for democracies, given the indispensability of the internet and their reliance on open online spaces, free from interference. Despite years of monitoring covert CCP cyber-enabled influence operations by social-media platforms, governments, and research institutes such as ASPI, definitive public attribution of the actors driving these activities is rare. Covert online operations, by design, are difficult to detect and attribute to state actors. 

Social-media platforms and governments struggle to devote adequate resources to identifying, preventing and deterring increasing levels of malicious activity, and sometimes they don’t want to name and shame the Chinese Government for political, economic and/or commercial reasons…(More)”.

AI translation is jeopardizing Afghan asylum claims


Article by Andrew Deck: “In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees…(More)”.