WHO Launches Global Infectious Disease Surveillance Network


Article by Shania Kennedy: “The World Health Organization (WHO) launched the International Pathogen Surveillance Network (IPSN), a public health network to prevent and detect infectious disease threats before they become epidemics or pandemics.

IPSN will rely on insights from pathogen genomics, the analysis of the genetic material of viruses, bacteria, and other disease-causing micro-organisms to determine how they spread and how infectious or deadly they may be.

Using these data, researchers can identify and track diseases to improve outbreak prevention, response, and treatments.

“The goal of this new network is ambitious, but it can also play a vital role in health security: to give every country access to pathogen genomic sequencing and analytics as part of its public health system,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD, in the press release. “As was so clearly demonstrated to us during the COVID-19 pandemic, the world is stronger when it stands together to fight shared health threats.”

Genomics capacity worldwide was scaled up during the pandemic, but the press release indicates that many countries still lack effective tools and systems for public health data collection and analysis. This lack of resources and funding could slow the development of a strong global health surveillance infrastructure, which IPSN aims to help address.

The network will bring together experts in genomics and data analytics to optimize routine disease surveillance, including for COVID-19. According to the press release, pathogen genomics-based analyses of the SARS-CoV-2 virus helped speed the development of effective vaccines and the identification of more transmissible virus variants…(More)”.

A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide Google Workspace deployment, so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.

The Social Side of Evidence-Based Policy


Comment by Adam Gamoran: “To Support Evidence-Based Policymaking, Bring Researchers and Policymakers Together,” by D. Max Crowley and J. Taylor Scott (Issues, Winter 2023), captures a simple truth: getting scientific evidence used in policy is about building relationships of trust between researchers and policymakers—the social side of evidence use. While the idea may seem obvious, it challenges prevailing notions of evidence-based policymaking, which typically rest on a logic akin to “if we build it, they will come.” In fact, the idea that producing high-quality evidence ensures its use is demonstrably false. Even when evidence is timely, relevant, and accessible, and even after researchers have filed their rigorous findings in a clearinghouse, the gap between evidence production and evidence use remains wide.

But how to build such relationships of trust? More than a decade of findings from research supported by the William T. Grant Foundation demonstrates the need for an infrastructure that supports evidence use. Such an infrastructure may involve new roles for staff within policy organizations to engage with research and researchers, as well as provision of resources that build their capacity to do so. For researchers, this infrastructure may involve committing to ongoing, mutual engagement with policymakers, in contrast with the traditional role of conveying written results or presenting findings without necessarily prioritizing policymakers’ concerns. Intermediary organizations such as funders and advocacy groups can play a key role in advancing the two-way streets through which researchers and policymakers can forge closer, more productive relationships…(More)”.

Citizens’ Assemblies Could Be Democracy’s Best Hope


Article by Hugh Pope: “…According to the OECD, nearly 600 citizens’ assemblies had taken place globally by 2021, almost all in the preceding decade. The number has expanded exponentially since then. In addition to high-profile assemblies that take on major issues, like the one in Paris, they include small citizens’ juries making local planning decisions, experiments that mix elected politicians with citizens chosen by lot, and permanent chambers in city or community governance whose members are randomly selected, usually on an annual basis from the relevant population.

Sortition, also known as democracy by lot, has been used to randomly select citizens’ assemblies in the Philippines, Malawi and Mexico. Citizens’ assemblies were used in the U.S. in 2021 to debate the climate crisis in Washington state and to determine the fate of a fairground in Petaluma, California. Indeed, whereas few people had heard of a citizens’ assembly a few years ago, a late 2020 Pew Research poll found that in the U.S., Germany, France and Britain, three-quarters or more of respondents thought it either somewhat or very important for their countries to convene them.

Though a global phenomenon, the trend is finding the most traction in Europe. Citizens’ assemblies in Germany are “booming,” with over 60 in the past year alone, according to a German radio documentary. A headline in Britain’s Guardian newspaper wondered if they are “the Future of Democracy.” The Dutch newspaper Trouw suggested they may be “the way we can win back trust in politics.” And in France, an editorial in Le Monde called for a greater embrace of “this new way of exercising power and drawing on collective intelligence.”…(More)”.
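
To make the selection mechanic concrete, here is a minimal illustrative sketch (our addition, not drawn from the article) of how a stratified civic lottery might be implemented in Python. The candidate pool, strata, and seat counts are hypothetical; real assemblies typically recruit respondents via mass mailings and stratify on several demographics at once.

```python
import random

# Hypothetical candidate pool: residents who responded to a civic-lottery
# invitation, each tagged with one stratification attribute (region).
regions = ["north", "south", "east", "west"]
pool = [(f"resident-{i}", random.choice(regions)) for i in range(1000)]

def stratified_sortition(pool, seats_per_stratum, rng=None):
    """Draw an assembly by lot, giving each stratum a fixed number of seats."""
    rng = rng or random.Random()
    assembly = []
    for stratum, seats in seats_per_stratum.items():
        candidates = [person for person in pool if person[1] == stratum]
        assembly.extend(rng.sample(candidates, seats))
    return assembly

# Example: a 40-member assembly with equal regional representation.
members = stratified_sortition(
    pool,
    {"north": 10, "south": 10, "east": 10, "west": 10},
    rng=random.Random(42),  # fixed seed so the draw can be re-run and audited
)
print(len(members), members[:3])
```

Publishing the seed, or drawing it from a verifiable randomness source, is what lets outside observers confirm that the lottery was fair.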

For chemists, the AI revolution has yet to happen


Editorial Team at Nature: “Many people are expressing fears that artificial intelligence (AI) has gone too far — or risks doing so. Take Geoffrey Hinton, a prominent figure in AI, who recently resigned from his position at Google, citing the desire to speak out about the technology’s potential risks to society and human well-being.

But against those big-picture concerns, in many areas of science you will hear a different frustration being expressed more quietly: that AI has not yet gone far enough. One of those areas is chemistry, for which machine-learning tools promise a revolution in the way researchers seek and synthesize useful new substances. But a wholesale revolution has yet to happen — because of the lack of data available to feed hungry AI systems.

Any AI system is only as good as the data it is trained on. These systems rely on what are called neural networks, which their developers teach using training data sets that must be large, reliable and free of bias. If chemists want to harness the full potential of generative-AI tools, they need to help to establish such training data sets. More data are needed — both experimental and simulated — including historical data and otherwise obscure knowledge, such as that from unsuccessful experiments. And researchers must ensure that the resulting information is accessible. This task is still very much a work in progress…(More)”.

The latest in homomorphic encryption: A game-changer shaping up


Article by Katharina Koerner: “Privacy professionals are witnessing a revolution in privacy technology. The emergence and maturing of new privacy-enhancing technologies (PETs) that allow for data use and collaboration without sharing plain text data or sending data to a central location are part of this revolution.

The United Nations, the Organisation for Economic Co-operation and Development, the U.S. White House, the European Union Agency for Cybersecurity, the UK Royal Society, and Singapore’s media and privacy authorities all released reports, guidelines and regulatory sandboxes around the use of PETs in quick succession. We are in an era where there are high hopes for data insights to be leveraged for the public good while maintaining privacy principles and enhanced security.

A prominent example of a PET is fully homomorphic encryption (FHE), often mentioned in the same breath as differential privacy, federated learning, secure multiparty computation, private set intersection, synthetic data, zero-knowledge proofs or trusted execution environments.

As FHE advances and becomes standardized, it has the potential to revolutionize the way we handle, protect and utilize personal data. Staying informed about the latest advancements in this field can help privacy pros prepare for the changes ahead in this rapidly evolving digital landscape.

Homomorphic encryption: A game changer?

FHE is a groundbreaking cryptographic technique that enables third parties to run computations directly on encrypted data, processing the information without ever seeing it in plain text.

This technology can have far-reaching implications for secure data analytics. Requests to a databank can be answered without accessing its plain text data, as the analysis is conducted on data that remains encrypted. This adds a third layer of security for data when in use, along with protecting data at rest and in transit…(More)”.
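
To illustrate what “running computations on encrypted data” looks like in practice, here is a minimal sketch using the open-source TenSEAL library (a Python wrapper around Microsoft SEAL) and its CKKS scheme; the data and parameters below are illustrative assumptions, and a production deployment would tune the scheme parameters to the workload.

```python
# pip install tenseal  -- illustrative sketch, not a production configuration
import tenseal as ts

# CKKS context (approximate arithmetic over real numbers) with the
# parameters used in TenSEAL's introductory examples.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# The data owner encrypts; a third party can now compute on the ciphertext
# without ever seeing the underlying values.
salaries = ts.ckks_vector(context, [52_000.0, 61_500.0, 47_250.0])

# Third-party computation performed directly on encrypted data:
# apply a 4% raise to every salary.
raised = salaries * 1.04

# Only the holder of the secret key can decrypt the result.
print(raised.decrypt())  # approximately [54080.0, 63960.0, 49140.0]
```

The party handling `salaries` never holds the plain text, which is the “data in use” protection the article describes, complementing encryption at rest and in transit.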

China’s new AI rules protect people — and the Communist Party’s power


Article by Johanna M. Costigan: “In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is handled to how users interact with generative AI such as chatbots.

Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data. 

These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.  

Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them…(More)”.

As the Quantity of Data Explodes, Quality Matters


Article by Katherine Barrett and Richard Greene: “With advances in technology, governments across the world are increasingly using data to help inform their decision making. This has been one of the most important byproducts of the use of open data, which is “a philosophy – and increasingly a set of policies – that promotes transparency, accountability and value creation by making government data available to all,” according to the Organisation for Economic Co-operation and Development (OECD).

But as data has become ever more important to governments, the quality of that data has become an increasingly serious issue. A number of nations, including the United States, are taking steps to deal with it. For example, according to a study from Deloitte, “The Dutch government is raising the bar to enable better data quality and governance across the public sector.” In the same report, a case study about Finland states that “data needs to be shared at the right time and in the right way. It is also important to improve the quality and usability of government data to achieve the right goals.” And the United Kingdom has developed its Government Data Quality Hub to help public sector organizations “better identify their data challenges and opportunities and effectively plan targeted improvements.”

Our personal experience is with U.S. state and local governments, and in that arena the road toward higher quality data is a long and difficult one, particularly as the sheer quantity of data has grown exponentially. As things stand, based on our ongoing research into performance audits, it is clear that issues with data are impediments to the smooth functioning of state and local governments…(More)”.

Digital Anthropology Meets Data Science


Article by Katie Hillier: “Analyzing online ecosystems in real time, teams of anthropologists and data scientists can begin to understand rapid social changes as they happen.

Ask not what data science can do for anthropology, but what anthropology can do for data science. —Anders Kristian Munk, Why the World Needs Anthropologists Symposium 2022

In the last decade, emerging technologies, such as AI, immersive realities, and new and more addictive social networks, have permeated almost every aspect of our lives. These innovations are influencing how we form identities and belief systems. Social media influences the rise of subcultures on TikTok, the communications of extremist communities on Telegram, and the rapid spread of conspiracy theories that bounce around various online echo chambers. 

People with shared values or experiences can connect and form online cultures at unprecedented scales and speeds. But these new cultures are evolving and shifting faster than our current ability to understand them. 

To keep up with the depth and speed of online transformations, digital anthropologists are teaming up with data scientists to develop interdisciplinary methods and tools to bring the deep cultural context of anthropology to scales available only through data science—producing a surge in innovative methodologies for more effectively decoding online cultures in real time…(More)”.