How the algorithm tipped the balance in Ukraine


David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.

As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.

If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.

This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.

“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”

I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)” See also Part 2.

The Protection and Promotion of Civic Space


OECD Report: “The past decade has seen increasing international recognition of civic space as a cornerstone of functioning democracies, alongside efforts to promote and protect it. Countries that foster civic space are better placed to reap the many benefits of higher levels of citizen engagement, strengthened transparency and accountability, and empowered citizens and civil society. In the longer term, a vibrant civic space can help to improve government effectiveness and responsiveness, contribute to more citizen-centred policies, and boost social cohesion. This first OECD comparative report on civic space offers a baseline of data from 33 OECD Members and 19 non-Members and a nuanced overview of the different dimensions of civic space, with a focus on civic freedoms, media freedoms, civic space in the digital age, and the enabling environment for civil society. It provides an exhaustive review of legal frameworks, policies, strategies, and institutional arrangements, in addition to implementation gaps, trends and good practices. The analysis is complemented by a review of international standards and guidance, in addition to data and analysis from civil society and other stakeholders…(More)”.

We need data infrastructure as well as data sharing – conflicts of interest in video game research


Article by David Zendle & Heather Wardle: “Industry data sharing has the potential to revolutionise evidence on video gaming and mental health, as well as a host of other critical topics. However, collaborative data sharing agreements between academics and industry partners may also afford industry enormous power in steering the development of this evidence base. In this paper, we outline how nonfinancial conflicts of interest may emerge when industry share data with academics. We then go on to describe ways in which such conflicts may affect the quality of the evidence base. Finally, we suggest strategies for mitigating this impact and preserving research independence. We focus on the development of data infrastructure: technological, social, and educational architecture that facilitates unfettered and free access to the kinds of high-quality data that industry hold, but without industry involvement…(More)”.

Use of new data sources for measuring international migration


UNECE Report: “Migration and other forms of cross-border mobility are issues of high policy importance. Demands for statistics in these areas have further increased in light of the 2030 Agenda for Sustainable Development and the 2018 Global Compact for Safe, Orderly and Regular Migration. The statistical community continues to be challenged to capture international migration and cross-border mobility in a way that would meet the growing needs of users.

Measurement of migration and cross-border mobility relies on a variety of sources, such as population and housing censuses, household surveys and administrative records, with each of them having their own strengths and limitations. Integration of data from different sources is often seen as a way to enhance the richness of data and reduce coverage or accuracy problems. Yet, even this would often not capture all dimensions of migration and cross-border mobility.

New non-conventional data sources, such as data gathered from the use of mobile telephones, credit cards and social networks — generally known as big and social media data — could be useful for producing migration statistics when used in combination with conventional sources. Notwithstanding the challenges of accessibility, accuracy and access to these new sources, examples are emerging that highlight their potential.
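To make the idea concrete, here is a toy Python sketch, not any official statistical method, of one simplified way anonymized mobile-phone records might hint at migration: assign each device a "usual country" per period from where most of its activity occurs, then flag devices whose usual country changes between periods. The column names ("device_id", "country", "period") and the tiny inline dataset are assumptions for illustration only.

```python
# Toy sketch: infer candidate "movers" from anonymized mobile-phone events.
# Real statistical production would add home-detection rules, minimum-stay
# thresholds, and calibration against conventional sources.
import pandas as pd

events = pd.DataFrame({
    "device_id": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "country":   ["UA", "UA", "PL", "PL", "DE", "DE", "DE", "DE"],
    "period":    ["2022H1", "2022H1", "2022H2", "2022H2",
                  "2022H1", "2022H1", "2022H2", "2022H2"],
})

# Modal country per device and period (the device's "usual country").
usual = (events.groupby(["device_id", "period"])["country"]
               .agg(lambda s: s.mode().iloc[0])
               .unstack("period"))

# Devices whose usual country changed between the two periods.
movers = usual[usual["2022H1"] != usual["2022H2"]]
print(movers)
```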

In 2020 the Bureau of the Conference of European Statisticians (CES) set up a task force to review existing experience and plans for using new data sources for measuring international migration in national statistical offices and outside official statistics; analyse the material collected; and compile the examples into a reference tool.

This publication presents the results of the work of the task force, including various national experiences with big data and new data sources collected through two surveys among countries participating in the CES…(More)”.

How AI That Powers Chatbots and Search Queries Could Discover New Drugs


Karen Hao at The Wall Street Journal: “In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

That’s the aim of a new approach that uses an AI technique known as natural language processing—the same technology that enables OpenAI’s ChatGPT to generate human-like responses—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

Proteins are made up of dozens to thousands of small chemical subunits known as amino acids, and scientists use special notation to document the sequences. With each amino acid corresponding to a single letter of the alphabet, proteins are represented as long, sentence-like combinations.

Natural language algorithms, which quickly analyze language and predict the next step in a conversation, can also be applied to this biological data to create protein-language models. The models encode what might be called the grammar of proteins—the rules that govern which amino acid combinations yield specific therapeutic properties—to predict the sequences of letters that could become the basis of new drug molecules. As a result, the time required for the early stages of drug discovery could shrink from years to months.
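As a rough illustration of the underlying idea, and not the models described in the article, the sketch below treats protein sequences as strings of one-letter amino-acid codes and fits a simple bigram model that can extend a sequence one residue at a time. Real protein-language models use large neural networks trained on millions of sequences; the tiny "corpus" here is invented for demonstration.

```python
# Minimal bigram "protein language model" sketch over one-letter amino-acid
# codes. It only counts which residue tends to follow which, then samples
# continuations from those conditional probabilities.
from collections import defaultdict
import random

def train_bigram(sequences):
    """Count amino-acid transitions and convert them to P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: n / total for b, n in nxt.items()}
    return model

def extend(model, prefix, length=10, seed=0):
    """Randomly extend a sequence one residue at a time using the model."""
    rng = random.Random(seed)
    seq = list(prefix)
    for _ in range(length):
        probs = model.get(seq[-1])
        if not probs:
            break
        residues, weights = zip(*probs.items())
        seq.append(rng.choices(residues, weights=weights)[0])
    return "".join(seq)

# Toy "corpus" of short sequences (illustrative only, not real proteins).
corpus = ["MKTAYIAKQR", "MKVLAAGITG", "MKTLLLTLVV"]
model = train_bigram(corpus)
print(extend(model, "MK", length=8))
```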

“Nature has provided us with tons of examples of proteins that have been designed exquisitely with a variety of functions,” says Ali Madani, founder of ProFluent Bio, a Berkeley, Calif.-based startup focused on language-based protein design. “We’re learning the blueprint from nature.”…(More)”.

Storytelling Will Save the Earth


Article by Bella Lack: “…The environmental crisis is one of overconsumption, carbon emissions, and corporate greed. But it’s also a crisis of miscommunication. For too long, hard data buried environmentalists in an echo-chamber, but in 2023, storytelling will finally enable a united global response to the environmental crisis. As this crisis worsens, we will stop communicating the climate crisis with facts and stats—instead we will use stories like Timothy’s.  

Unlike numbers or facts, stories can trigger an emotional response, harnessing the power of motivation, imagination, and personal values, which drive the most powerful and permanent forms of social change. For instance, in 2019, we all saw the images of Notre Dame cathedral erupting in flames. Three minutes after the fire began, images of the incident were being broadcast globally, eliciting an immediate response from world leaders. That same year, the Amazon forest also burned, spewing smoke that spread over 2,000 miles and burning over one and a half football fields of rain forest every minute of every day—it took three weeks for the mainstream media to report that story. Why did the burning of Notre Dame warrant such rapid responses globally, when the Amazon fires did not? Although it is just a beautiful assortment of limestone, lead, and wood, we attach personal significance to Notre Dame, because it has a story we know and can relate to. That is what propelled people to react to it, while the fact that the Amazon was on fire elicited nothing…(More)”.

Explore the first Open Science Indicators dataset


Article by Lauren Cadwallader, Lindsay Morton, and Iain Hrynaszkiewicz: “Open Science is on the rise. We can infer as much from the proliferation of Open Access publishing options; the steady upward trend in bioRxiv postings; the periodic rollout of new national, institutional, or funder policies. 

But what do we actually know about the day-to-day realities of Open Science practice? What are the norms? How do they vary across different research subject areas and regions? Are Open Science practices shifting over time? Where might the next opportunity lie and where do barriers to adoption persist? 

To even begin exploring these questions and others like them we need to establish a shared understanding of how we define and measure Open Science practices. We also need to understand the current state of adoption in order to track progress over time. That’s where the Open Science Indicators project comes in. PLOS conceptualized a framework for measuring Open Science practices according to the FAIR principles, and partnered with DataSeer to develop a set of numerical “indicators” linked to specific Open Science characteristics and behaviors observable in published research articles. Our very first dataset, now available for download at Figshare, focuses on three Open Science practices: data sharing, code sharing, and preprint posting…(More)”.
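For readers who download the dataset, a hypothetical pandas sketch like the one below shows how such indicators could be summarised over time. The column names ("year", "data_sharing", "code_sharing", "preprint_posted") and the inline toy rows are assumptions for illustration, not the actual schema of the Figshare release; in practice you would load the downloaded file with pd.read_csv(...).

```python
# Hypothetical summary of article-level Open Science indicators by year.
import pandas as pd

articles = pd.DataFrame({
    "year":            [2019, 2019, 2020, 2020, 2021, 2021],
    "data_sharing":    [0, 1, 1, 0, 1, 1],
    "code_sharing":    [0, 0, 1, 0, 0, 1],
    "preprint_posted": [0, 1, 0, 1, 1, 1],
})

# Share of articles exhibiting each practice, by publication year.
practices = ["data_sharing", "code_sharing", "preprint_posted"]
print(articles.groupby("year")[practices].mean().round(2))
```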

Going Digital to Advance Data Governance for Growth and Well-being


OECD Report: “Data are generated wherever digital technologies are deployed, namely in almost every part of modern life. Using these data can empower individuals, drive innovation, enable new digital products and improve policy making and public service delivery. But as data become more widely used across sectors and applications, the potential for misuse and harm also grows. To advance data governance for growth and well-being, this report advocates a holistic and coherent approach to data governance, domestically and across borders. It examines how data have emerged as a strategic asset, with the ability to transform lives and confer economic advantage. It explains how the unique characteristics of data can pose complex trade-offs and challenge policies that pre-date the data-driven era. This report provides new insights, evidence and analysis and outlines considerations for better data governance policies in the digital age…(More)”.

The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “New tools are enabling organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts. There are advantages to empowering these internal “citizen data scientists,” but also risks. Organizations considering implementing these tools should take five steps: 1) provide ongoing education, 2) provide visibility into similar use cases throughout the organization, 3) create an expert mentor program, 4) have all projects verified by AI experts, and 5) provide resources for inspiration outside your organization…(More)”.

Design-led policy and governance in practice: a global perspective


Paper by Marzia Mortati, Louise Mullagh & Scott Schmidt: “Presently, the relationship between policy and design is very much open for debate as to how these two concepts differ, relate, and interact with one another. There exists very little agreement on their relational trajectory, with one course, policy design, originating in the policy studies tradition and the other, design for policy, founded in design studies. The Special Issue has paid particular attention to the upcoming area of research where design disciplines and policy studies are exploring new ways toward convergence. With a focus on design, the authors herein present an array of design methods and approaches through case studies and conceptual papers, using co-design, participatory design and critical service design to work with policymakers in tackling challenging issues and policies. We see designers and policymakers working with communities to boost engagement around the world, with examples from the UK, Latvia, New Zealand, Denmark, Turkey, Brazil and South Africa. Finally, we offer a few reflections to further build this research area, pointing out topics for further research with the hope that these will be relevant for researchers approaching the field or deepening their investigation and for bridging the academic/practice divide between design studies and policy design…(More)”.