Closing the gap between user experience and policy design 


Article by Cecilia Muñoz & Nikki Zeichner: “Ask the average American to use a government system, whether it’s for a simple task like replacing a Social Security card or a complicated process like filing taxes, and you’re likely to be met with groans of dismay. We all know that government processes are cumbersome and frustrating; we have grown used to the government struggling to deliver even basic services. 

Unacceptable as the situation is, fixing government processes is a difficult task. Behind every exhausting government application form or eligibility screener lurks a complex policy that ultimately leads to what Atlantic staff writer Annie Lowrey calls the time tax, “a levy of paperwork, aggravation, and mental effort imposed on citizens in exchange for benefits that putatively exist to help them.” 

Policies are complex, in part because they each represent many voices. The people we call policymakers are key actors in government and elected officials at every level, from city councils to the U.S. Congress. As they seek to solve public problems like reducing child poverty or improving economic mobility, they consult with experts at government agencies, researchers in academia, and advocates working directly with affected communities. They also hear from lobbyists for affected industries. They consider current events and public sentiment. All of these voices and variables, representing different and sometimes conflicting interests, contribute to the policies that become law. As a result, laws reflect a complex mix of objectives. After a new law is in place, the relevant government agencies are responsible for implementing it, creating new programs and services to carry it out. Complex policies then get translated into complex processes and experiences for members of the public. They become long application forms, unclear directions, and, too often, barriers that keep people from accessing a benefit. 

Policymakers and advocates typically declare victory when a new policy is signed into law; if they think about the implementation details at all, that work mostly happens after the ink is dry. While these policy actors may have deep expertise in a given issue area, or deep understanding of affected communities, they often lack experience designing services in a way that will be easy for the public to navigate…(More)”.

China just announced a new social credit law. Here’s what it means.


Article by Zeyi Yang: “It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either. 

Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

Yet the draft law still left observers with more questions than answers. 

“This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says. 

Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? …(More)”.

A CERN Model for Studying the Information Environment


Article by Alicia Wanless: “After the Second World War, European science was suffering. Scientists were leaving Europe in pursuit of safety and work opportunities, among other reasons. To stem the exodus and unite the community around a vision of science for peace, in 1949, a transatlantic group of scholars proposed the creation of a world-class physics research facility in Europe. The grand vision was for this center to unlock the mysteries of the universe. Their white paper laid the foundation for the European Organization for Nuclear Research (CERN), which today supports fundamental research in physics across an international community of more than 10,000 scientists from twenty-three member states and more than seventy other nations. Together, researchers at CERN built cutting-edge instruments to observe dozens of subatomic particles for the first time. And along the way they invented the World Wide Web, which was originally conceived as a tool to empower CERN’s distributed teams.

Such large-scale collaboration is once again needed to connect scholars, policymakers, and practitioners internationally and to accelerate research, this time to unlock the mysteries of the information environment. Democracies around the world are grappling with how to safeguard democratic values against online abuse, the proliferation of illiberal and xenophobic narratives, malign interference, and a host of other challenges related to a rapidly evolving information environment. What are the conditions within the information environment that can foster democratic societies and encourage active citizen participation? Sadly, the evidence needed to guide policymaking and social action in this domain is sorely lacking.

Researchers, governments, and civil society must come together to help. This paper explores how CERN can serve as a model for developing the Institute for Research on the Information Environment (IRIE). By connecting disciplines and providing shared engineering resources and capacity-building across the world’s democracies, IRIE will scale up applied research to enable evidence-based policymaking and implementation…(More)”.

We could run out of data to train AI language programs 


Article by Tammy Xu: “Large language models are one of the hottest areas of AI research right now, with companies racing to release programs like GPT-3 that can write impressively coherent articles and even computer code. But there’s a problem looming on the horizon, according to a team of AI forecasters: we might run out of data to train them on.

Language models are trained using texts from sources like Wikipedia, news articles, scientific papers, and books. In recent years, the trend has been to train these models on more and more data in the hope that it’ll make them more accurate and versatile.

The trouble is, the types of data typically used for training language models may be used up in the near future—as early as 2026, according to a not-yet-peer-reviewed paper by researchers from Epoch, an AI research and forecasting organization. The issue stems from the fact that, as researchers build more powerful models with greater capabilities, they have to find ever more texts to train them on. Large language model researchers are increasingly concerned that they are going to run out of this sort of data, says Teven Le Scao, a researcher at AI company Hugging Face, who was not involved in Epoch’s work.
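
To make the forecasting logic concrete, here is a minimal sketch of the projection involved: training-data demand growing exponentially against a fixed stock of usable text. Every number in it is an illustrative placeholder rather than one of Epoch’s estimates, and the function name is invented.

```python
# Minimal sketch: project exponential growth in training-data demand
# against a fixed stock of high-quality text and report the year the
# curves cross. All figures are illustrative placeholders, not Epoch's.

def exhaustion_year(stock_tokens: float,
                    tokens_used_now: float,
                    annual_growth: float,
                    start_year: int = 2022) -> int:
    """First year in which cumulative demand exceeds the available stock."""
    year, demand = start_year, tokens_used_now
    while demand < stock_tokens:
        demand *= 1 + annual_growth  # training sets grow roughly exponentially
        year += 1
    return year

# E.g. a 10-trillion-token stock, 1 trillion tokens used today,
# and training sets doubling every year:
print(exhaustion_year(1e13, 1e12, annual_growth=1.0))  # -> 2026
```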

The issue stems partly from the fact that language AI researchers filter the data they use to train models into two categories: high quality and low quality. The line between the two categories can be fuzzy, says Pablo Villalobos, a staff researcher at Epoch and the lead author of the paper, but text from the former is viewed as better-written and is often produced by professional writers…(More)”.
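
As a rough illustration of that split, the sketch below applies two invented surface heuristics to decide whether a text lands in the high-quality pool. Real filtering pipelines combine many more signals, often including a learned classifier; this is a toy, not any lab’s actual method.

```python
# Toy quality filter: crude surface heuristics of the kind used to split
# web text into high- and low-quality pools. Both thresholds are invented.

def looks_high_quality(text: str) -> bool:
    words = text.split()
    if len(words) < 5:  # too short to judge
        return False
    alpha_ratio = sum(c.isalpha() for c in text) / len(text)
    mean_word_len = sum(map(len, words)) / len(words)
    return alpha_ratio > 0.7 and 3 <= mean_word_len <= 10

print(looks_high_quality("Buy now!!! $$$ click click click 1234"))          # False
print(looks_high_quality("The committee reviewed the draft report today.")) # True
```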

How many yottabytes in a quettabyte? Extreme numbers get new names


Article by Elizabeth Gibney: “By the 2030s, the world will generate around a yottabyte of data per year — that’s 10²⁴ bytes, or the amount that would fit on DVDs stacked all the way to Mars. Now, the booming growth of the data sphere has prompted the governors of the metric system to agree on new prefixes beyond that magnitude, to describe the outrageously big and small.

Representatives from governments worldwide, meeting at the General Conference on Weights and Measures (CGPM) outside Paris on 18 November, voted to introduce four new prefixes to the International System of Units (SI) with immediate effect. The prefixes ronna and quetta represent 10²⁷ and 10³⁰, and ronto and quecto signify 10⁻²⁷ and 10⁻³⁰. Earth weighs around six ronnagrams, and an electron’s mass is about one rontogram.

This is the first update to the prefix system since 1991, when the organization added zetta (10²¹), zepto (10⁻²¹), yotta (10²⁴) and yocto (10⁻²⁴). In that case, metrologists were adapting to fit the needs of chemists, who wanted a way to express SI units on the scale of Avogadro’s number — the 6 × 10²³ units in a mole, a measure of the quantity of substances. The more familiar prefixes peta and exa were added in 1975 (see ‘Extreme figures’).

Extreme figures

Advances in scientific fields have led to increasing need for prefixes to describe very large and very small numbers.

Factor   Name     Symbol   Adopted
10³⁰     quetta   Q        2022
10²⁷     ronna    R        2022
10²⁴     yotta    Y        1991
10²¹     zetta    Z        1991
10¹⁸     exa      E        1975
10¹⁵     peta     P        1975
10⁻¹⁵    femto    f        1964
10⁻¹⁸    atto     a        1964
10⁻²¹    zepto    z        1991
10⁻²⁴    yocto    y        1991
10⁻²⁷    ronto    r        2022
10⁻³⁰    quecto   q        2022

Prefixes are agreed at the General Conference on Weights and Measures.
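
To make the new prefixes easy to experiment with, here is a minimal sketch that encodes the table above as data and renders byte counts using the largest prefix that fits. The function and variable names are invented for illustration.

```python
# The large SI prefixes from the table above, as (power of ten, name, symbol).
SI_PREFIXES = [
    (30, "quetta", "Q"), (27, "ronna", "R"), (24, "yotta", "Y"),
    (21, "zetta", "Z"), (18, "exa", "E"), (15, "peta", "P"),
]

def with_prefix(n_bytes: float) -> str:
    """Render a byte count using the largest prefix not exceeding it."""
    for power, name, symbol in SI_PREFIXES:
        if n_bytes >= 10 ** power:
            return f"{n_bytes / 10 ** power:g} {symbol}B ({name}bytes)"
    return f"{n_bytes:g} B"

print(with_prefix(1e24))  # 1 YB (yottabytes)
print(with_prefix(2e27))  # 2 RB (ronnabytes)
print(with_prefix(5e30))  # 5 QB (quettabytes)
```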

Today, the driver is data science, says Richard Brown, a metrologist at the UK National Physical Laboratory in Teddington. He has been working on plans to introduce the latest prefixes for five years, and presented the proposal to the CGPM on 17 November. With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 10²⁷ — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term…(More)”.

Institutions, Experts & the Loss of Trust


Essay by Henry E. Brady and Kay Lehman Schlozman: “Institutions are critical to our personal and societal well-being. They develop and disseminate knowledge, enforce the law, keep us healthy, shape labor relations, and uphold social and religious norms. But institutions and the people who lead them cannot fulfill their missions if they have lost legitimacy in the eyes of the people they are meant to serve.

Americans’ distrust of Congress is long-standing. What is less well-documented is how partisan polarization now aligns with the growing distrust of institutions once thought of as nonpolitical. Refusals to follow public health guidance about COVID-19, calls to defund the police, the rejection of election results, and disbelief of the press highlight the growing polarization of trust. But can these relationships be broken? And how does the polarization of trust affect institutions’ ability to confront shared problems, like climate change, epidemics, and economic collapse?…(More)”.

Humanizing Science and Engineering for the Twenty-First Century


Essay by Kaye Husbands Fealing, Aubrey Deveny Incorvaia and Richard Utz: “Solving complex problems is never a purely technical or scientific matter. When science or technology advances, insights and innovations must be carefully communicated to policymakers and the public. Moreover, scientists, engineers, and technologists must draw on subject matter expertise in other domains to understand the full magnitude of the problems they seek to solve. And interdisciplinary awareness is essential to ensure that taxpayer-funded policy and research are efficient, equitable, and accountable to citizens at large—including members of traditionally marginalized communities…(More)”.

Science and the World Cup: how big data is transforming football


Essay by David Adam: “The scowl on Cristiano Ronaldo’s face made international headlines last month when the Portuguese superstar was pulled from a match between Manchester United and Newcastle with 18 minutes left to play. But he’s not alone in his sentiment. Few footballers agree with a manager’s decision to substitute them in favour of a fresh replacement.

During the upcoming football World Cup tournament in Qatar, players will have a more evidence-based way to argue for time on the pitch. Within minutes of the final whistle, tournament organizers will send each player a detailed breakdown of their performance. Strikers will be able to show how often they made a run and were ignored. Defenders will have data on how much they hassled and harried the opposing team when it had possession.

It’s the latest incursion of numbers into the beautiful game. Data analysis now helps to steer everything from player transfers and the intensity of training, to targeting opponents and recommending the best direction to kick the ball at any point on the pitch.

Meanwhile, footballers face the kind of data scrutiny more often associated with an astronaut. Wearable vests and straps can now sense motion, track position with GPS and count the number of shots taken with each foot. Cameras at multiple angles capture everything from headers won to how long players keep the ball. And to make sense of this information, most elite football teams now employ data analysts, including mathematicians, data scientists and physicists plucked from top companies and labs such as computing giant Microsoft and CERN, Europe’s particle-physics laboratory near Geneva, Switzerland….(More)”.
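
To give a flavour of what analysts do with such tracking data, here is a toy sketch that reduces GPS position samples to distance covered and sprint count. The sample format and the 7 m/s sprint threshold are assumptions for illustration, not a league or vendor standard.

```python
import math

def summarize_tracking(samples):
    """samples: chronological list of (t_seconds, x_metres, y_metres)."""
    distance, sprints, sprinting = 0.0, 0, False
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = math.hypot(x1 - x0, y1 - y0)   # metres moved this interval
        speed = step / (t1 - t0)              # metres per second
        distance += step
        if speed >= 7.0:                      # assumed sprint threshold
            sprints += not sprinting          # count only sprint onsets
            sprinting = True
        else:
            sprinting = False
    return {"distance_m": round(distance, 1), "sprints": sprints}

# 10 Hz samples of a short burst down the wing:
track = [(0.0, 0.0, 0.0), (0.1, 0.8, 0.0), (0.2, 1.6, 0.0), (0.3, 2.0, 0.0)]
print(summarize_tracking(track))  # {'distance_m': 2.0, 'sprints': 1}
```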

The network science of collective intelligence


Article by Damon Centola: “In the last few years, breakthroughs in computational and experimental techniques have produced several key discoveries in the science of networks and human collective intelligence. This review presents the latest scientific findings from two key fields of research: collective problem-solving and the wisdom of the crowd. I demonstrate the core theoretical tensions separating these research traditions and show how recent findings offer a new synthesis for understanding how network dynamics alter collective intelligence, both positively and negatively. I conclude by highlighting current theoretical problems at the forefront of research on networked collective intelligence, as well as vital public policy challenges that require new research efforts…(More)”.
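
One classic model behind the idea that network dynamics alter collective intelligence is DeGroot-style opinion averaging, sketched below as a generic textbook illustration rather than any specific model reviewed in the article: agents hold noisy estimates of a true value, and the network decides whose errors dominate the consensus.

```python
# Toy wisdom-of-the-crowd experiment: the same noisy estimates reach
# consensus on a decentralized ring versus a centralized star.
import random

def degroot(estimates, neighbours, rounds=20):
    """neighbours[i] lists whom agent i averages over (including itself)."""
    for _ in range(rounds):
        estimates = [sum(estimates[j] for j in neighbours[i]) / len(neighbours[i])
                     for i in range(len(estimates))]
    return estimates

random.seed(1)
truth, n = 100.0, 8
estimates = [truth + random.gauss(0, 20) for _ in range(n)]

ring = [[(i - 1) % n, i, (i + 1) % n] for i in range(n)]  # decentralized
star = [[0, i] for i in range(n)]                         # all defer to agent 0

print(abs(sum(estimates) / n - truth))           # error of the simple average
print(abs(degroot(estimates, ring)[0] - truth))  # ring: consensus near the mean
print(abs(degroot(estimates, star)[0] - truth))  # star: consensus near agent 0
```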

How government can capitalise on a revolution in data sharing


Article by Alison Pritchard: “The pandemic was a watershed moment in the culture of data sharing: using linked data has increasingly become standard practice. From linking census and NHS data to track the virus’s impact among minority ethnic groups, to joining up timely local data sources to support local authorities’ responses, the value of sharing data across boundaries was self-evident. 

Using data to inform a multidisciplinary pandemic response accelerated our longstanding work on data capability. To continue this progress, there is now a need to make government data more organised, easier to access, and integrated for use. Our learning has guided the development of a new cloud-based platform that will ensure that anonymised data about our society and economy are now linked and accessible for vital research and decision-making in the UK.
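
As a highly simplified illustration of how anonymised records from different sources can be linked, the sketch below joins two datasets on a keyed hash of a direct identifier instead of the identifier itself. The field names and shared key are invented, and a real service such as the IDS applies far more sophisticated safeguards.

```python
# Toy pseudonymised record linkage: both data owners hash identifiers
# with a shared key, then records are joined on the pseudonym only.
import hashlib, hmac

SHARED_KEY = b"agreed-between-data-owners"  # illustrative only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash."""
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

census = {pseudonymise("AB123456C"): {"age_band": "30-34", "region": "Wales"}}
health = {pseudonymise("AB123456C"): {"admissions": 2}}

# Link the two sources on the pseudonym, never on the raw identifier.
linked = {pid: {**census[pid], **health[pid]}
          for pid in census.keys() & health.keys()}
print(linked)  # one linked record combining census and health attributes
```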

The idea of sharing data to maximise impact isn’t new to us at the ONS – we’ve been doing this successfully for over 15 years through our well-respected Secure Research Service (SRS). The new Integrated Data Service (IDS) is the next step in this data-sharing journey, where, in a far more advanced form, government will have the ability to work with data at source – in a safe and secure environment – rather than moving data around, which currently creates friction and significant cost. The service, being compliant with the Digital Economy Act, opens up opportunities to capitalise on the often-underutilised research elements of that key legislation.

The launch of the full IDS in the spring of 2023 will see ready-to-use datasets made available to cross-government teams and wider research communities, enabling them to securely share, link and access them for vital research. The service is a collaboration among institutions to work on projects that shed light on some of the big challenges of the day, and to provide the ability to answer questions that we don’t yet know we need to answer…(More)”.