Eat, Click, Judge: The Rise of Cyber Jurors on China’s Food Apps


Article by Ye Zhanhang: “From unwanted ingredients in takeaway meals and negative restaurant reviews to late deliveries and poor product quality, digital marketplaces teem with minor frustrations. 

But because they affect customer satisfaction and business reputations, several Chinese online shopping platforms have come up with a unique solution: Ordinary users can become “cyber jurors” to deliberate and cast decisive votes in resolving disputes between buyers and sellers.

Though introduced in 2020, the concept has surged in popularity among young Chinese in recent months, primarily fueled by viral cases that users eagerly follow, scrutinizing every detail and deliberation online…

To be eligible for the role, a user must meet certain criteria, including having a verified account, maintaining consumption records within the past three months, and successfully navigating five mock cases as part of an entry test. Cyber jurors don’t receive any money for completing cases but may be rewarded with coupons.

Xianyu, an online secondhand shopping platform, has also introduced a “court” system that assembles a jury of 17 volunteer users to adjudicate disputes between buyers and sellers. 

Miao Mingyu, a law professor at the University of Chinese Academy of Social Sciences, told China Youth Daily that this public jury function, with its impartial third-party perspective, has the potential to enhance transaction transparency and the fairness of the platform’s evaluation system.

Despite Chinese law prohibiting platforms from removing user reviews of products, Miao noted that this feature has enabled the platform to effectively address unfair negative reviews without violating legal constraints…(More)”.

How can Mixed Reality and AI improve emergency medical care?


Springwise: “Mixed reality (MR) refers to technologies that create immersive computer-generated environments in which parts of the physical and virtual environment are combined. With potential applications that range from education and engineering to entertainment, the market for MR is forecast to record revenues of just under $25 billion by 2032. Now, in a ground-breaking partnership, Singapore-based company Mediwave is teaming up with Sri Lanka’s 1990 Suwa Seriya to deploy MR and artificial intelligence (AI) to create a fully connected ambulance.

1990 Suwa Seriya is Sri Lanka’s national pre-hospital emergency ambulance service, which boasts response times that surpass even some services in developed countries. The innovative ambulance it has deployed uses Mediwave’s integrated Emergency Response Suite, which combines the latest communications equipment with internet-of-things (IoT) and AR capabilities to enhance the efficiency of the emergency response ecosystem.

The connected ambulance ensures swift response times and digitises critical processes, while specialised care can be provided remotely through a Microsoft HoloLens. The technology enables Emergency Medical Technicians (EMTs) – staff who man ambulances in Sri Lanka – to connect with physicians at the Emergency Command and Control Centre. These physicians help the EMTs provide care during the so-called ‘golden hour’ of medical emergencies – the concept that rapid clinical investigation and care within 60 minutes of a traumatic injury is essential for a positive patient outcome…

Other applications of extended reality in the Springwise library include holograms that are used to train doctors, virtual environments for treating phobias, and an augmented reality contact lens…(More)”.

‘Turning conflicts into co-creation’: Taiwan government harnesses digital policy for democracy


Article by Si Ying Thian: “Assistive intelligence and language models can help facilitate nuanced conversations because the human brain simply cannot process 1,000 different positions, said Audrey Tang, Taiwan’s Digital Minister in charge of the Ministry of Digital Affairs (MODA).  

Tang was speaking at a webinar about policymaking in the digital age, hosted by LSE IDEAS, the think tank of the London School of Economics, on 1 December 2023.  

She cited Talk to the City, a large language model that transforms transcripts from a variety of datasets into clusters of similar opinions, as an example of a technology that has helped increase collaboration and diversity without losing the ability to scale…
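Talk to the City’s actual pipeline relies on large language models and embeddings; as a rough illustration of the underlying idea — grouping many free-text opinions into clusters of similar positions — here is a minimal, self-contained sketch using bag-of-words vectors and cosine similarity (the opinions, threshold, and function names are all illustrative, not the tool’s real API):

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words vector represented as a word -> count mapping
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_opinions(opinions, threshold=0.4):
    """Greedily assign each opinion to the first cluster whose
    representative (first member) is similar enough, else start
    a new cluster."""
    clusters = []  # each cluster is a list of (text, vector) pairs
    for text in opinions:
        vec = vectorize(text)
        for cluster in clusters:
            if cosine(vec, cluster[0][1]) >= threshold:
                cluster.append((text, vec))
                break
        else:
            clusters.append([(text, vec)])
    return [[text for text, _ in cluster] for cluster in clusters]

opinions = [
    "we need more public parks in the city",
    "the city should build more public parks",
    "bus fares in this city are too expensive",
    "public transport fares are too expensive",
]
# The two park opinions group together, as do the two fare opinions
print(cluster_opinions(opinions))
```

A production system would swap the bag-of-words vectors for LLM embeddings and then summarize each cluster, but the clustering step — which is what lets the approach scale past the thousand positions a human cannot hold in mind — works the same way.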

“The idea is to establish value-based, long-term collaborations based on the idea of public code. This is evident in many of our government websites, which very much look like the UK’s,” said Tang. 

Public code is defined by the Foundation for Public Code as open-source software developed by public organisations, together with the policy and guidance needed for collaboration and reuse…

The government’s commitment to open source is also evident in its rollout of the Taiwan Employment Gold Card, which integrates a flexible work permit, a residence visa for up to three years, and eligibility for national health insurance and income tax reduction.  

According to Tang, the Taiwan government invites anyone with experience of eight years or more in contributing to open source or a Web3 publicly available ledger to enrol in the residency program…(More)”.

Indigenous Peoples and Local Communities Are Using Satellite Data to Fight Deforestation


Article by Katie Reytar, Jessica Webb and Peter Veit: “Indigenous Peoples and local communities hold some of the most pristine and resource-rich lands in the world — areas highly coveted by mining and logging companies and other profiteers.  Land grabs and other threats are especially severe in places where the government does not recognize communities’ land rights, or where anti-deforestation and other laws are weak or poorly enforced. It’s the reason many Indigenous Peoples and local communities often take land monitoring into their own hands — and some are now using digital tools to do it. 

Freely available satellite imagery and data from sites like Global Forest Watch and LandMark provide near-real-time information that tracks deforestation and land degradation. Indigenous and local communities are increasingly using tools like this to gather evidence that deforestation and degradation are happening on their lands, build their case against illegal activities and take legal action to prevent it from continuing.  

Three examples from Suriname, Indonesia and Peru illustrate a growing trend in fighting land rights violations with data…(More)”.

AI-tocracy


Article by Peter Dizikes: “It’s often believed that authoritarian governments resist technical innovation in a way that ultimately weakens them both politically and economically. But a more complicated story emerges from a new study on how China has embraced AI-driven facial recognition as a tool of repression. 

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI,” says coauthor Martin Beraja, an MIT economist. Not only has use of the technology apparently worked to suppress dissent, but it has spurred software development. The scholars call this mutually reinforcing situation an “AI-tocracy.” 

In fact, they found, firms that were granted a government contract for facial-recognition technologies produce about 49% more software products in the two years after gaining the contract than before. “We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Adding it all up, the case of China indicates how autocratic governments can potentially find their political power enhanced, rather than upended, when they harness technological advances—and even generate more economic growth than they would have otherwise…(More)”.

India’s persistent, gendered digital divide


Article by Caiwei Chen: “In a society where women, especially unmarried girls, still have to fight to own a smartphone, would men — and institutional patriarchy — really be willing to share political power?

In September, the Indian government passed a landmark law, under which a third of the seats in the lower house and state assemblies would be reserved for women. Amid the euphoria of celebrating this development, a somewhat cynical question I’ve been thinking about is: Why do only 31% of women own a mobile phone in India compared to over 60% of men? This in a country that is poised to have 1 billion smartphone users by 2026.

It’s not that the euphoria is without merit. Twenty-seven years after the idea was first birthed, the Narendra Modi government was able to excavate the issue out of the deep freeze and breathe it back into life. The execution of the quota will still take a few years as it has been linked to the redrawing of constituency boundaries.

But in the meantime, as women, we should brace ourselves for the pushbacks — small and big — that will come our way.

In an increasingly wired world, this digital divide has real-life consequences.  

The gender gap — between men and women, boys and girls — isn’t only about cellular phones and internet access. This inequity perfectly encapsulates all the other biases that India’s women have had to contend with — from a disparity in education opportunities to overzealous moral policing. It is about denying women power — and even bodily autonomy…(More)”.

Wastewater monitoring: ‘the James Webb Telescope for population health’


Article by Exemplars News: “When the COVID-19 pandemic triggered a lockdown across Bangladesh and her research on environmental exposure to heavy metals became impossible to continue, Dr. Rehnuma Haque began a search for some way she could contribute to the pandemic response.

“I knew I had to do something during COVID,” said Dr. Haque, a research scientist at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b). “I couldn’t just sit at home.”

Then she stumbled upon articles on early wastewater monitoring efforts for COVID in Australia, the Netherlands, Italy, and the United States. “When I read those papers, I was so excited,” said Dr. Haque. “I emailed my supervisor, Dr. Mahbubur Rahman, and said, ‘Can we do this?’”

Two months later, in June 2020, Dr. Haque and her colleagues had launched one of the most robust and earliest national wastewater surveillance programs for COVID in a low- or middle-income country (LMIC).

The initiative, which has now been expanded to monitor for cholera, salmonella, and rotavirus and may soon be expanded further to monitor for norovirus and antibiotic resistance, demonstrates the power and potential of wastewater surveillance to serve as a low-cost tool for obtaining real-time meaningful health data at scale to identify emerging risks and guide public health responses.

“It is improving public health outcomes,” said Dr. Haque. “We can see everything going on in the community through wastewater surveillance. You can find everything you are looking for and then prepare a response.”

A single wastewater sample can yield representative data about an entire ward, town, or county and allow LMICs to monitor for emerging pathogens. Compared with clinical monitoring, wastewater monitoring is easier and cheaper to conduct, can capture asymptomatic infections and detect infections before symptoms arise, raises fewer ethical concerns, can be more inclusive and less prone to sampling biases, can generate a broader range of data, and is unrivaled at quickly generating population-level data…(More)” – See also: The #Data4Covid19 Review

What can harnessing ‘positive deviance’ methods do for food security?


Article by Katrina J. Lane: “What the researchers identified in Niger, in this case, is known as “positive deviance”. It’s a concept that originated in 1991 during a nutrition program in Vietnam run by Save the Children. Instead of focusing on the population level, project managers studied outliers in the system — children who were healthier than their peers despite sharing similar circumstances, and then looked at what the parents of these children did differently.

Once the beneficial practices were identified — in this case, that included collecting wild foods, such as crab, shrimp, and sweet potato tops for their children — they encouraged mothers to tell other parents. Through this outlier-centric approach, the project was able to reduce malnourishment by 74%.

“The positive deviance approach assumes that in every community there are individuals or groups that develop uncommon behaviors or practices which help them cope better with the challenges they face than their peers,” said Boy.

It’s important to be respectful and acknowledge success stories already present in systems, added Duncan Green, a strategic adviser for Oxfam and a professor in practice in international development at the London School of Economics.

Positive deviance emphasizes the benefit of identifying and amplifying these “deviant behaviors”, as they hold the potential to generate scalable solutions that can benefit the entire community.

It can be broken down into three steps: first, identifying high-performing individuals or groups within a challenging context; next, investigating in the community, through in-person interviews, group discussions, and questionnaires, to find out what those outliers’ behaviors and practices are; and finally, encouraging the solutions to be spread throughout the community.
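The first of those steps — spotting individuals who do unusually well relative to peers in the same circumstances — is essentially outlier detection within peer groups. A minimal sketch (the field names, groups, scores, and z-score cutoff are all illustrative, not drawn from the Niger or Vietnam programs):

```python
import statistics

def find_positive_deviants(records, z_cutoff=1.0):
    """Flag individuals whose outcome is unusually good relative to
    peers sharing the same circumstances ('group'). Each record is a
    dict with 'name', 'group', and 'outcome' (higher is better)."""
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r)

    deviants = []
    for group, members in by_group.items():
        scores = [m["outcome"] for m in members]
        mean = statistics.mean(scores)
        stdev = statistics.pstdev(scores)
        if stdev == 0:
            continue  # no variation in this group, so no outliers
        for m in members:
            z = (m["outcome"] - mean) / stdev
            if z >= z_cutoff:
                deviants.append(m["name"])
    return deviants

records = [
    {"name": "child_a", "group": "village_1", "outcome": 52},
    {"name": "child_b", "group": "village_1", "outcome": 48},
    {"name": "child_c", "group": "village_1", "outcome": 50},
    {"name": "child_d", "group": "village_1", "outcome": 70},  # thrives despite shared circumstances
]
print(find_positive_deviants(records))  # → ['child_d']
```

In practice the hard work is the second step — the interviews and group discussions that uncover *why* the flagged households do better — but a simple within-group screen like this is how the outliers get found in the first place.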

In the final stage, the approach relies on community-generated solutions — which Green explains are more likely to propagate and be engaged with…(More)”.

AI By the People, For the People


Article by Billy Perrigo/Karnataka: “…To create an effective English-speaking AI, it is enough to simply collect data from where it has already accumulated. But for languages like Kannada, you need to go out and find more.

This has created huge demand for datasets—collections of text or voice data—in languages spoken by some of the poorest people in the world. Part of that demand comes from tech companies seeking to build out their AI tools. Another big chunk comes from academia and governments, especially in India, where English and Hindi have long held outsize precedence in a nation of some 1.4 billion people with 22 official languages and at least 780 more indigenous ones. This rising demand means that hundreds of millions of Indians are suddenly in control of a scarce and newly valuable asset: their mother tongue.

Data work—creating or refining the raw material at the heart of AI— is not new in India. The economy that did so much to turn call centers and garment factories into engines of productivity at the end of the 20th century has quietly been doing the same with data work in the 21st. And, like its predecessors, the industry is once again dominated by labor arbitrage companies, which pay wages close to the legal minimum even as they sell data to foreign clients for a hefty mark-up. The AI data sector, worth over $2 billion globally in 2022, is projected to rise in value to $17 billion by 2030. Little of that money has flowed down to data workers in India, Kenya, and the Philippines.

These conditions may cause harms far beyond the lives of individual workers. “We’re talking about systems that are impacting our whole society, and workers who make those systems more reliable and less biased,” says Jonas Valente, an expert in digital work platforms at Oxford University’s Internet Institute. “If you have workers with basic rights who are more empowered, I believe that the outcome—the technological system—will have a better quality as well.”

In the neighboring villages of Alahalli and Chilukavadi, one Indian startup is testing a new model. Chandrika works for Karya, a nonprofit launched in 2021 in Bengaluru (formerly Bangalore) that bills itself as “the world’s first ethical data company.” Like its competitors, it sells data to big tech companies and other clients at the market rate. But instead of keeping much of that cash as profit, it covers its costs and funnels the rest toward the rural poor in India. (Karya partners with local NGOs to ensure access to its jobs go first to the poorest of the poor, as well as historically marginalized communities.) In addition to its $5 hourly minimum, Karya gives workers de-facto ownership of the data they create on the job, so whenever it is resold, the workers receive the proceeds on top of their past wages. It’s a model that doesn’t exist anywhere else in the industry…(More)”.

China’s new AI rules protect people — and the Communist Party’s power


Article by Johanna M. Costigan: “In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is sourced to how users interact with generative AI such as chatbots. 

Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data. 

These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.  

Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them…(More)”.