Recentring the demos in the measurement of democracy


Article by Seema Shah: “Rethinking how we measure and evaluate democratic performance is vital to reversing a longstanding negative trend in global democracy. We must confront the past, including democracy’s counter-intuitively intrinsic inequality. This is key to revitalising institutions in a way that allows democratic practice to live up to its potential…

In the global democracy assessment space, teams like the one I lead at International IDEA compete to provide the most rigorous, far-reaching and understandable set of democracy measurements in the world. Alexander Hudson explains how critical these indicators are, providing important benchmarks for democratic growth and decline to policymakers, governments, international organisations, and journalists.

Yet in so many ways, the core of what these datasets measure and help assess is largely the same. This redundancy is no doubt at least partially a product of wealthy donors’ prioritisation of liberal democracy as an ideal. It is compounded by how the measures are calculated. As Adam Przeworski recently stated, reliance on expert coders runs the risk of measuring little other than those experts’ biases.

But if that is the case, and quantitative measurements continue to be necessary for democracy assessment, shouldn’t we rethink exactly what we are measuring and how we are measuring it?..

Democracy assessment indices do not typically measure ordinary people’s evaluations of the state of democracy. Instead, other specialised ‘barometers’ often take on this task. See, for example, Afrobarometer, Eurobarometer, Asian Barometer, and Latinobarómetro. Surveys of public perceptions on a range of issues also exist, including, but not limited to, democracy. The problem is, however, that these do not systematically make it into overall democracy assessments or onto policymakers’ desks. This means that policymakers and others do not consistently prioritise or consider lived experiences as they make decisions about democracy and human rights-related funding and interventions…(More)”.

How games can make behavioural science better


Article by Bria Long et al: “When US cognitive scientist Joshua Hartshorne was investigating how people around the world learn English, he needed to get tens of thousands of people to take a language test. He designed ‘Which English?’, a grammar game that presented a series of tough word problems and then guessed where in the world the player learnt the language. Participants shared their results — whether accurate or not — on social media, creating a snowball effect for recruitment. The findings, based on data from almost 670,000 people, revealed that there is a ‘critical period’ for second-language learning that extends into adolescence.

This sort of ‘gamification’ is becoming a powerful research tool across fields that study humans, including psychology, neuroscience, economics and behavioural economics. By making research fun, the approach can help experiments to reach thousands or millions of participants. For instance, experiments embedded in a video game demonstrated that the layout of the city where a child lives shapes their future navigational ability. Data from a digital word search showed that people who are skilled at the game do not necessarily give better advice to those trying to learn it. And a dilemma game involving millions of people revealed that most individuals have reliable moral intuition.

Gamification can help to avoid the pitfalls of conventional laboratory-based experiments by allowing researchers to study diverse populations, to conduct more-sophisticated experiments and to observe human behaviour in naturalistic environments. It can improve statistical power and reproducibility, making research more robust. Technical advances are making gamification cheaper and more straightforward, and the COVID-19 pandemic has forced many labs to move their human experiments online. But despite these changes, most have not yet embraced the opportunities gamification affords.

To reach the full potential of this approach, researchers must dispel misconceptions, develop new gamification technologies, improve access to existing ones and apply the methods to productive research questions. We are researchers in psychology, linguistics, developmental science, data science and music who have run our own gamified experiments. We think it’s time for science to get serious about games…(More)”.

Building Trust with the Algorithms in Our Lives


Essay by Dylan Walsh: “Algorithms are omnipresent in our increasingly digital lives. They offer us new music and friends. They recommend books and clothing. They deliver information about the world. They help us find romantic partners one day, efficient commutes the next, cancer diagnoses the third.

And yet most people display an aversion to algorithms. They don’t fully trust the recommendations made by computer programs. When asked, they prefer human predictions to those put forward by algorithms.

“But given the growing prevalence of algorithms, it seems important we learn to trust and appreciate them,” says Taly Reich, associate professor at Yale SOM. “Is there an intervention that would help reduce this aversion?”

New research conducted by Reich and two colleagues, Alex Kaju of HEC Montreal and Sam Maglio of the University of Toronto, finds that clearly demonstrating an algorithm’s ability to learn from past mistakes increases the trust that people place in the algorithm. It also inclines people to prefer the predictions made by algorithms over those made by humans.

In arriving at this result, Reich drew on her foundational work on the value of mistakes. In a series of prior papers, Reich has established how mistakes, in the right context, can create benefits; people who make mistakes can come across as more knowledgeable and credible than people who don’t. Applying this insight to predictive models, Reich and her colleagues investigated whether framing algorithms as capable of learning from their mistakes enhanced trust in the recommendations that algorithms make.

In one of several experiments, for instance, participants were asked whether a trained psychologist or an algorithm would be better at evaluating somebody’s personality. Under one condition, no further information was provided. In another condition, identical performance data for both the psychologist and the algorithm explicitly demonstrated improvement over time. In the first three months, each one was correct 60% of the time, incorrect 40% of the time; by six months, they were correct 70% of the time; and over the course of the first year the rate moved up to 80% correct.

Absent information about the capacity to learn, participants chose a psychologist over an algorithm 75% of the time. But when shown how the algorithm improved over time, they chose it 66% of the time—more often than the human. Participants overcame any potential algorithm aversion and instead expressed what Reich and her colleagues term “algorithm appreciation,” or even “algorithm investment,” by choosing it at a higher rate than the human. These results held across several different cases, from selecting the best artwork to finding a well-matched romantic partner. In every instance, when the algorithm exhibited learning over time, it was trusted more often than its human counterpart…(More)”

Here’s how the agricultural sector can solve its data problem


Article by Satyanarayana Jeedigunta and Arushi Goel: “Food and nutrition security, skewed distribution of farmer incomes, natural disasters and climate change are severely impacting the sustainability of agricultural systems across the globe. Policy reforms are needed to correct these distortions, but innovative emerging technologies like artificial intelligence, machine learning, distributed ledger technologies, sensors and drones can make a significant difference.

Emerging technologies need data, and it must be the right data, for the right purpose, at the right time, if they are to deliver maximum impact. Agricultural value chains comprise a complex system of stakeholders and activities. The sheer size and complexity of agricultural data, coupled with its fragmented nature, pose significant challenges to unlocking its potential economic value, estimated at $65 billion in India alone….

As such, there is a need to promote standards-based interoperability, which enables multiple digital systems to exchange agricultural data in an automated manner with limited human intervention. The ease and speed of such an exchange of data, across domains and technologies, would spur the development of innovative solutions and lead to evidence-driven, prediction-based decision-making on the farm and in the market.

Most agricultural data is dynamic

Most current efforts to develop standards for agricultural data are isolated and localized. The AGROVOC initiative of the United Nations’ Food and Agriculture Organization addresses a part of the data problem by creating an exhaustive vocabulary of agricultural terms. There is also a need to develop an open data format for the automated interchange of agricultural data. A coordinated industry initiative is an attractive approach to developing such a format…(More)”.
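To make the idea concrete, here is a minimal, purely illustrative sketch of what a standards-based interchange record might look like. The field names and the AGROVOC concept URI are hypothetical placeholders for illustration, not an actual published format:

```python
import json
from datetime import datetime, timezone

# Illustrative only: the field names and the AGROVOC concept URI below are
# hypothetical placeholders, not part of any published agricultural data standard.
observation = {
    "crop": {
        "label": "wheat",
        "concept_uri": "http://aims.fao.org/aos/agrovoc/c_XXXX",  # placeholder URI
    },
    "measurement": {"property": "yield", "value": 3.2, "unit": "t/ha"},
    "location": {"lat": 17.385, "lon": 78.487},
    "observed_at": datetime.now(timezone.utc).isoformat(),
}

# Serialising records in one well-defined, machine-readable format is what lets
# different farm, market and government systems exchange data automatically.
print(json.dumps(observation, indent=2))
```

The design point is simply that a shared vocabulary (such as AGROVOC) plus a common serialisation is what allows two systems that have never coordinated directly to exchange records with limited human intervention.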

Database States


Essay by Sanjana Varghese: “In early 2007, a package sent from the north of England to the National Audit Office (NAO) in London went missing. In it were two discs containing the personal records of twenty-five million people—including their addresses, birthdays, and national insurance numbers, which are required to work in the UK—that the NAO intended to use for an “independent survey” of the child benefits database to check for supposed fraud. Instead, that information was never recovered, a national scandal ensued, and the junior official who mailed the package was fired.

The UK, as it turns out, is not particularly adept at securing its data. In 2009, a group of British academics released a report calling the UK a “database state,” citing the existence of forty-six leaky databases that were poorly constructed and badly maintained. Databases that they examined ranged from one on childhood obesity rates (which recorded the height and weight measurements of every school pupil in the UK between the ages of five and eleven) to IDENT1, a police database containing the fingerprints of all known offenders. “In too many cases,” the researchers wrote, “the public are neither served nor protected by the increasingly complex and intrusive holdings of personal information, invading every aspect of our lives.”

In the years since, databases in the UK—and elsewhere—have only proliferated; increasingly manufactured and maintained by a nexus of private actors and state agencies, they are generated by and produce more and more information streams that inevitably have a material effect on the populations they’re used by and against. More than just a neutral method of storing information, databases shape and reshape the world around us; they aid and abet the state and private industry in matters of surveillance, police violence, environmental destruction, border enforcement, and more…(More)”.

Government must earn public trust that AI is being used safely and responsibly


Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.
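As a purely illustrative sketch of what recording an algorithmic tool could involve (the field names below are assumptions for illustration, not the actual schema of the Standard):

```python
# Hypothetical sketch of a transparency record for an algorithmic tool.
# The field names are illustrative assumptions, not the Standard's schema.
record = {
    "tool_name": "Example casework triage assistant",
    "organisation": "Example public body",
    "purpose": "Prioritise incoming casework for human review",
    "how_used": "Suggests a priority score; staff make the final decision",
    "data_used": ["case metadata", "historical outcomes"],
    "human_oversight": "Every score is reviewed by a caseworker",
    "contact": "transparency@example.gov.uk",
}

for field, value in record.items():
    print(f"{field}: {value}")
```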

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations or even just doing both of those things better or more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

Kid-edited journal pushes scientists for clear writing on complex topics


Article by Mark Johnson: “The reviewer was not impressed with the paper written by Israeli brain researcher Idan Segev and a colleague from Switzerland.

“Professor Idan,” she wrote to Segev. “I didn’t understand anything that you said.”

Segev and co-author Felix Schürmann revised their paper on the Human Brain Project, a massive effort seeking to channel all that we know about the mind into a vast computer model. But once again the reviewer sent it back. Still not clear enough. It took a third version to satisfy the reviewer.

“Okay,” said the reviewer, an 11-year-old girl from New York named Abby. “Now I understand.”

Such is the stringent editing process at the online science journal Frontiers for Young Minds, where top scientists, some of them Nobel Prize winners, submit papers on gene editing, gravitational waves and other topics — to demanding reviewers ages 8 through 15.

Launched in 2013, the Lausanne, Switzerland-based publication is coming of age at a moment when skeptical members of the public look to scientists for clear guidance on the coronavirus and on potentially catastrophic climate change, among other issues. At Frontiers for Young Minds, the goal is not just to publish science papers but also to make them accessible to young readers like the reviewers. In doing so, it takes direct aim at a long-standing problem in science — poor communication between professionals and the public.

“Scientists tend to default to their own jargon and don’t think carefully about whether this is a word that the public actually knows,” said Jon Lorsch, director of the National Institute of General Medical Sciences. “Sometimes to actually explain something you need a sentence as opposed to the one word scientists are using.”

Dense language sends a message “that science is for scientists; that you have to be an ‘intellectual’ to read and understand scientific literature; and that science is not relevant or important for everyday life,” according to a paper published last year in Advances in Physiology Education.

Frontiers for Young Minds, which has drawn nearly 30 million online page views in its nine years, offers a different message on its homepage: “Science for kids, edited by kids.”…(More)”.

AI in the Common Interest


Article by Gabriela Ramos & Mariana Mazzucato: “In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations are improving the world, yet many technologies are currently being deployed in a vacuum. We need inclusive mission-oriented governance structures that are centered around a true common good. Capable governments can shape this technological revolution to serve the public interest.

Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management, by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimize renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.

These applications would make our lives better in many ways. But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.

Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us…(More)”.

The World Needs Many More Public Servants


Article by Ngaire Woods: “The private sector alone cannot make the massive investments required to achieve a carbon-neutral future and hold societies together through the twenty-first century’s economic, political, and climate-related crises. To enable governments to do so, policymakers must avoid austerity measures and recruit high-quality personnel… Policymakers around the world will need to address a confluence of economic, political, and climate-related shocks in 2023. While governments cannot solve these crises alone, deft political leadership will be crucial to holding societies together and enabling communities and businesses to step up and do their part. What the world desperately needs is public servants and politicians who are willing and able to innovate…

In an ideal world, the magnitude of humanity’s current challenges would attract some of the most creative and highly motivated citizens to public service. In many countries, however, public-sector pay has sunk to levels that make it increasingly difficult to attract top talent. In the United Kingdom, as the Financial Times’ Martin Wolf notes, while overall real private-sector pay has increased by 5.5% since 2010, public-sector wages have fallen by 5.9%, with much of that decline happening in the last two years. The result is a personnel deficit at every level. Recent data from the National Health Service in England show a huge nurse shortfall. Other data show that teacher recruitments are well below target.

Too often, the public sector falls into a vicious cycle of cost-cutting and resignations. Nurses in the UK are overworked and many will likely succumb to exhaustion soon, leaving their remaining colleagues even more overburdened and demoralized. Another austerity wave will make it even harder to retain quality workers…(More)”.

Why we need to unlock health data to beat disease worldwide


Article by Takanori Fujita, Masayasu Okajima and Hiroyuki Miuchi: “The digital revolution in healthcare offers the promise of better health and longer lives for people around the world. New digital tools can help doctors and patients to predict, prevent and treat disease, opening the door to personalised medical care that is cost-efficient and highly effective.

Digitization across the entire healthcare sector — from hospital operations to the production of medical devices, vaccines and other pharmaceuticals — stands to benefit everyone, through improved efficiency at medical institutions, better care at home and stronger support for everyday health and wellbeing.

The essential ingredient in digital healthcare is data. Developers and service providers need health data to build and deploy effective solutions. So far, unfortunately, the potential benefits of digital healthcare have been under-realized, in large part because of data chokepoints…

It should go without saying that the ‘reward’ for sharing data is better health. Lifestyle-related diseases, which are more prevalent in ageing populations, often do not become symptomatic until they have progressed to a dangerous level. That makes timely monitoring and assessment crucial. In a world where people are living longer and longer — ‘100-year societies,’ as we say in Japan — data-enabled early detection is perhaps the best tool we have to stave off age-related health crises.

Abstract arguments, however, rarely convince people to consent to sharing personal data. Special efforts are needed to show specific, individual benefits and make people feel a tangible sense of control.

In Japan, the city of Arao is conducting an experiment to enable patients and their families to check information on electronic health records (EHRs) using their smartphones when they visit affiliated hospitals. Test results, prescribed medications and other information can be monitored. The system is expected to reduce costs for municipalities that are struggling to fund medical and nursing care for growing elderly populations. The money saved can be diverted to programs that help people live healthier lives, creating a virtuous cycle… Digital healthcare isn’t just a matter for patients and medical professionals. Lifestyle data with implications for health is broadly distributed, so the non-medical field needs to be involved as well. Takamatsu, another Japanese city, is attempting to address this difficult issue by building a common data collaboration infrastructure for the public and private sectors.

SOMPO Light Vortex, a subsidiary of SOMPO Holdings, a Japanese insurance and nursing care company, has created an app for Covid-19 vaccination certification and personal health records (PHRs) that is connected to Takamatsu’s municipal data infrastructure. Combining a range of data on health and lifestyle patterns in a trusted platform overseen by local government is expected to offer benefits in areas ranging from disaster prevention to wellbeing…(More)”.