China just announced a new social credit law. Here’s what it means.


Article by Zeyi Yang: “It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either. 

Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

Yet the draft law still left observers with more questions than answers. 

“This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says. 

Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? …(More)”.

How Food Delivery Workers Shaped Chinese Algorithm Regulations


Article by Matt Sheehan and Sharon Du: “In 2021, China issued a series of policy documents aimed at governing the algorithms that underpin much of the internet today. The policies included a regulation on recommendation algorithms and a draft regulation on synthetically generated media, commonly known as deepfakes. Domestically, Chinese media touted the recommendation engine regulations for the options they gave Chinese internet users, such as the choice to “turn off the algorithm” on major platforms. Outside China, these regulations have largely been seen through the prism of global geopolitics, framed as questions over whether China is “ahead” in algorithm regulations or whether it will export a “Chinese model” of artificial intelligence (AI) governance to the rest of the world.

These are valid questions with complex answers, but they overlook the core driver of China’s algorithm regulations: they are designed primarily to address China’s domestic social, economic, and political problems. The Chinese Communist Party (CCP) is the ultimate arbiter here, deciding both what counts as a problem and how it should be solved. But the CCP doesn’t operate in a vacuum. Like any governing party, it is constantly creating new policies to try to put out fires, head off problems, and respond to public desires.

Through a short case study, we can see how Chinese food delivery drivers, investigative journalists, and academics helped shape one part of the world’s first regulations on recommendation algorithms. From that process, we can learn how international actors might better predict and indirectly influence Chinese algorithm policy…(More)”.

Avert Bangladesh’s looming water crisis through open science and better data


Article by Augusto Getirana et al: “Access to data is a huge problem. Bangladesh collects a large amount of hydrological data, such as for stream flow, surface and groundwater levels, precipitation, water quality and water consumption. But these data are not readily available: researchers must seek out officials individually to gain access. India’s hydrological data can be similarly hard to obtain, preventing downstream Bangladesh from accurately predicting flows into its rivers.

Bilateral scientific collaboration between Bangladesh and water-sharing nations, including India, Nepal, Bhutan and China, would be mutually beneficial. The decades-long Mekong River Commission between Cambodia, Laos, Thailand and Vietnam is one successful transboundary agreement that could serve as a model.

Publishing hydrological data in an open-access database would be an exciting step. For now, however, the logistics, funding and politics to make on-the-ground data publicly available are likely to remain out of reach.

Fortunately, satellite data can help to fill the gaps. Current Earth-observing satellite missions, such as the Gravity Recovery and Climate Experiment (GRACE) Follow-On, the Global Precipitation Measurement (GPM) network, multiple radar altimeters and the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors make data freely available and can provide an overall picture of water availability across the country (this is what we used in many of our analyses). The picture is soon to improve. In December, NASA and CNES, France’s space agency, plan to launch the Surface Water and Ocean Topography (SWOT) satellite mission. SWOT will provide unprecedented information on global ocean and inland surface waters at fine spatial resolution, allowing for much more detailed monitoring of water levels than is possible today. The international scientific community has been working hard over the past 15 years to get ready to store, process and use SWOT data.

New open-science initiatives, particularly NASA’s Earth Information System, launched in 2021, can help by supporting the development of customized data-analysis and modelling tools (see go.nature.com/3cffbh9). The data we present here were acquired in this framework. We are currently working on an advanced hydrological model that will be capable of representing climate-change effects and human impacts on Bangladesh’s water availability. We expect that the co-development of such a modelling system with local partners will support decision-making.

SERVIR, a joint programme of NASA and the US Agency for International Development that focuses on capacity-building, could also help improve severe-weather forecasting for Bangladesh. This could improve the flood monitoring and forecast system operated by the Bangladesh Water Development Board, which is limited in geographical scope — flooding is monitored only at specific locations, not across the country. Such efforts will help with short-term adaptation and emergency responses to flood conditions, and with long-term planning for infrastructure…(More)”.

Can Social Media Rhetoric Incite Hate Incidents? Evidence from Trump’s “Chinese Virus” Tweets


Paper by Andy Cao, Jason M. Lindo & Jiee Zhong: “We investigate whether Donald Trump’s “Chinese Virus” tweets contributed to the rise of anti-Asian incidents. We find that the number of incidents spiked following Trump’s initial “Chinese Virus” tweets and the subsequent dramatic rise in internet search activity for the phrase. Difference-in-differences and event-study analyses leveraging spatial variation indicate that this spike in anti-Asian incidents was significantly more pronounced in counties that supported Donald Trump in the 2016 presidential election relative to those that supported Hillary Clinton. We estimate that anti-Asian incidents spiked by 4000 percent in Trump-supporting counties, over and above the spike observed in Clinton-supporting counties…(More)”.
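The difference-in-differences logic the paper relies on can be illustrated with a toy sketch (all numbers invented for illustration, not taken from the paper's data): the estimate is the post-minus-pre change in the "treated" counties net of the change in the comparison counties, which differences out shocks common to both.

```python
# Toy difference-in-differences sketch (synthetic numbers, NOT the paper's data).
# DiD = (treated change over time) - (control change over time), netting out
# shocks that hit both groups of counties at the same time.

incidents = {
    # (county group, period): mean incidents per county per week (invented)
    ("trump", "pre"): 0.2,
    ("trump", "post"): 1.4,
    ("clinton", "pre"): 0.5,
    ("clinton", "post"): 0.9,
}

def did_estimate(data):
    treated_change = data[("trump", "post")] - data[("trump", "pre")]
    control_change = data[("clinton", "post")] - data[("clinton", "pre")]
    return treated_change - control_change

# Excess post-tweet increase in Trump-supporting counties, over and above
# the increase observed in Clinton-supporting counties:
print(did_estimate(incidents))
```

In practice this comparison is run as a regression with county and time fixed effects, which is what allows the authors' event-study extension; the subtraction above is the simplest two-group, two-period case.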

Society 5.0, Digital Transformation and Disasters


Book edited by Sakiko Kanbara, Rajib Shaw, Naonori Kato, Hiroyuki Miyazaki, and Akira Morita: “This book presents the evolution of the science technology paradigm in Japan and analyzes the critical community and local governance issues from the perspectives of the changing risk landscape, Society 5.0, and digital transformation. It also provides suggestions for the future development of a resilient society and community, by drawing lessons from other countries. Advancements in science and technology in recent decades in Japan and the world might have increased our capacity to tackle the adverse human consequences of various kinds of disasters and environmental issues. However, the accompanying and interlinked phenomena of urbanization, climate change, rural-to-urban migration, population decline, and an aging population have posed new challenges, especially in small and medium-sized cities and in rural areas of Japan. These challenges are compounded by cascading, complex, and systemic risks, which are defining a new normal of “living with uncertainties”.
Society 5.0 is defined as “A human-centered society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space.” Society 5.0 was proposed in the 5th Science and Technology Basic Plan as a future society that Japan should aspire to. Society 5.0 achieves a high degree of convergence between cyberspace (virtual space) and physical space (real space), compared with the past information society (Society 4.0), in which people would access a cloud service (databases) in cyberspace via the Internet to search for, retrieve, and analyze information or data…(More)”.

Minben 民本 as an alternative to liberal democracy


Essay by Rongxin Li: “Although theorists have proposed non-Western types of democracy, such as Asian Democracy, these types have nevertheless been actively marginalised. This is partly due to Asian Democracy’s inextricable link with Confucian traditions – many of which people commonly assume to be anti-democratic. This worry over Confucian values does not, however, detract from the fact that scholars are deliberately ignoring non-Western types of democracy because they do not follow Western narratives…

Minben is a paternalistic model of democracy. It does not involve multi-party elections and, unlike in liberal democracy, disorderly public participation is not one of its priorities. Minben relies on a theory of governance that believes carefully selected elites, usually a qualified minority, can use their knowledge and the constant pursuit of virtuous conduct to deliver the common good.

Liberal democracy maintains its legitimacy through periodic and competitive elections. Minben retains its legitimacy through its ‘output’. It is results, or policy implementation, oriented. Some argue that this performance-driven democracy cannot endure because it depends on people buying into it and consistently supporting it. But we could say the same of any democratic regime. Liberal democracy’s legitimacy is not unassailable – nor is it guaranteed.

Indeed, liberal democracy and Minben have more in common than many Western theorists concede. As Yu Keping underlined, stability is paramount in Chinese Communist Party ideology. John Keane, for example, once likened government and its legitimacy to a slippery egg. The greater the social instability, which may be driven by displeasure over the performance of ruling elites, the slipperier the egg becomes for the elites in question. Both liberal democratic regimes and Minben regimes face the same problem of dealing with social turmoil. Both look to serving the people as a means to staying atop the egg…

Minben – and this may surprise certain Western theorists – does not exclude public participation and deliberation. These instruments convey public voices and concerns to the selected technocrats tasked with deciding for the people. There is representation based on consultation here. Technocrats seek to make good decisions based on full consultation and analysis of public preferences…(More)”.

A ‘Feminist’ Server to Help People Own Their Own Data


Article by Padmini Ray Murray: “All of our digital lives reside on servers – mostly in corporate server farms owned by the likes of Google, Amazon, Apple, and Microsoft. These farms contain machines that store massive volumes of data generated by every single user of the internet. These vast infrastructures allow people to store, connect, and exchange information on the internet. 

Consequently, there is a massive distance between users and where and how their data is stored, which means individuals have very little control over how their data is stored and used. And because of this heavy reliance on massive corporate infrastructures, individuals are left with little choice but to accept the terms these businesses dictate. The conceptual alternative of the feminist server was created by groups of feminist and queer activists who were concerned about how little power they have over owning and managing their data on the internet. The idea of the feminist server was described as a project that is interested in “creating a more autonomous infrastructure to ensure that data, projects and memory of feminist groups are properly accessible, preserved and managed” – a safe digital library to store and manage content generated by feminist groups. This was also a direct challenge to the traditionally male-dominated spaces of computer hardware management, spaces which could be very exclusionary and hostile to women or queer individuals who might be interested in learning how to use these technologies. 

There are two related ways by which a server can be considered as feminist. The first is based on who runs the server, and the second is based on who owns the server. Feminist critics have pointed out how the running of servers is often in the hands of male experts who are not keen to share and explain the knowledge required to maintain a server – a role known as a systems admin or, colloquially, a “sysadmin” person. Thus the concept of feminist servers emerged out of a need to challenge patriarchal dominance in hardware and infrastructure spaces, to create alternatives that were nurturing, anti-capitalist, and worked on the basis of community and solidarity…(More)”.

Breakthroughs in Smart City Implementation


Book edited by Leo P. Ligthart and Ramjee Prasad: “Breakthroughs in Smart City Implementation should give answers to a wide variety of present social, political and technological problems. Green and long-lasting solutions are needed in the coming 10 years and beyond in areas such as improving air quality, the quality of life of city residents, traffic congestion and many more. Two Conasense branches, established in China and in India, report in six book chapters on initiatives needed to overcome the obvious shortcomings at present. Three more chapters complete this fifth Conasense book: an introductory chapter concerning Smart City from the Conasense perspective, a chapter showing that it is not technology but the people in the cities who are most important, and a chapter on recent results and prospects of “Human in the Loop” in smart vehicular systems…(More)”.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, seeking to identify people involved in civil unrest in northern India over the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
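The decision rule the documents describe can be sketched in a few lines (the function name and score scale here are hypothetical illustrations, not Delhi Police's actual system): a similarity score at or above the 80 percent threshold counts as "positive", while a sub-threshold score is labeled a "false positive" yet still kept for "due verification" rather than discarded.

```python
THRESHOLD = 0.80  # the 80 percent cutoff reported in the documents

def classify_match(score: float) -> str:
    """Hypothetical sketch of the reported decision rule.

    score: similarity between a probe face and a gallery face, in [0, 1].
    """
    if score >= THRESHOLD:
        return "positive"  # treated as a match
    # Notably, a below-threshold result is labeled a "false positive" and is
    # still "subject to due verification with other corroborative evidence"
    # -- the lead is not dropped.
    return "false positive (subject to due verification)"

print(classify_match(0.85))
print(classify_match(0.62))
```

The sketch makes the critics' point concrete: no score, however low, ever produces a plain negative, so the threshold limits only how a lead is labeled, not whether someone is investigated.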

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”

China May Be Chasing Impossible Dream by Trying to Harness Internet Algorithms


Article by Karen Hao: “China’s powerful cyberspace regulator has taken the first step in a pioneering—and uncertain—government effort to rein in the automated systems that shape the internet.

Earlier this month, the Cyberspace Administration of China published summaries of 30 core algorithms belonging to two dozen of the country’s most influential internet companies, including TikTok owner ByteDance Ltd., e-commerce behemoth Alibaba Group Holding Ltd. and Tencent Holdings Ltd., owner of China’s ubiquitous WeChat super app.

The milestone marks the first systematic effort by a regulator to compel internet companies to reveal information about the technologies powering their platforms, which have shown the capacity to radically alter everything from pop culture to politics. It also puts Beijing on a path that some technology experts say few governments, if any, are equipped to handle….

One important question the effort raises, algorithm experts say, is whether direct government regulation of algorithms is practically possible.

The majority of today’s internet platform algorithms are based on a technology called machine learning, which automates decisions such as ad-targeting by learning to predict user behaviors from vast repositories of data. Unlike traditional algorithms that contain explicit rules coded by engineers, most machine-learning systems are black boxes, making it hard to decipher their logic or anticipate the consequences of their use.
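That contrast can be shown with a toy example (all names, weights, and fields invented; no real platform works this simply): an explicit rule can be audited line by line, while a machine-learned "rule" lives in fitted weights whose rationale sits in the training data, not the code.

```python
import math

# Explicit rule: the targeting logic is written down and auditable.
def rule_based_target(user):
    return user["age"] < 30 and "sports" in user["interests"]

# Machine-learned "rule": the logic lives in fitted weights (made up here);
# reading the code tells you the arithmetic, not the reasons.
WEIGHTS = {"age": -0.08, "clicks_on_sports": 1.2}
BIAS = 0.5

def learned_target(user):
    z = (BIAS
         + WEIGHTS["age"] * user["age"]
         + WEIGHTS["clicks_on_sports"] * user["clicks_on_sports"])
    # Why these weights, and why a 0.5 cutoff? The answers are emergent
    # properties of the training data, not decisions visible in the source.
    return 1 / (1 + math.exp(-z)) > 0.5

user = {"age": 25, "interests": ["sports"], "clicks_on_sports": 3}
print(rule_based_target(user), learned_target(user))
```

A regulator can inspect the first function directly; auditing the second requires probing the model's behavior, which is the practical difficulty the experts cited here are pointing at.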

Beijing’s interest in regulating algorithms started in 2020, after TikTok sought an American buyer to avoid being banned in the U.S., according to people familiar with the government’s thinking. When several bidders for the short-video platform lost interest after Chinese regulators announced new export controls on information-recommendation technology, it tipped off Beijing to the importance of algorithms, the people said…(More)”.