Report by the National Academies: “With the proliferation of automated vehicle location (AVL), automated passenger counters (APCs), and automated fare collection (AFC), transit agencies are collecting increasingly granular data on service performance, ridership, customer behavior, and financial recovery. While granular intelligent transportation systems (ITS) data can meaningfully improve transit decision-making, transit agencies face many challenges in accessing, validating, storing, and analyzing these data sets. These challenges are made more difficult in that the tools for managing and analyzing transit ITS data generally cannot, at this point, be shared across transit agencies because of variation in data collection systems and data formats. Multiple vendors provide ITS hardware and software, and data formats vary by vendor. Moreover, agencies may employ a patchwork of ITS that has been acquired and modified over time, leading to further consistency challenges.
Standardization of data structures and tools can help address these challenges. Not only can standardization streamline data transfer, validation, and database structuring, it also encourages the development of analysis tools that can be used across transit agencies, as has been the case with route and schedule data, standardized in the General Transit Feed Specification (GTFS) format…(More)”.
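Because GTFS publishes each table as a plain CSV file (routes.txt, stops.txt, stop_times.txt, and so on), a feed from any agency can be read with the same generic code, which is exactly the cross-agency tooling benefit the report describes. A minimal sketch in Python; the miniature routes.txt content below is invented for illustration:

```python
import csv
import io

def parse_gtfs_table(text):
    """Parse one GTFS CSV table into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

# Hypothetical miniature routes.txt, shaped like the GTFS reference.
ROUTES_TXT = """route_id,route_short_name,route_long_name,route_type
1,10,Main St Crosstown,3
2,Blue,Harbor Light Rail,0
"""

routes = parse_gtfs_table(ROUTES_TXT)
# Per the GTFS reference, route_type 3 = bus and 0 = tram/light rail.
bus_routes = [r["route_short_name"] for r in routes if r["route_type"] == "3"]
print(bus_routes)  # → ['10']
```

The same `parse_gtfs_table` call works unchanged on any agency's feed, which is the point of the standard: the analysis tool, not the data pipeline, becomes the reusable artifact.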
Democracy by Design: Perspectives for Digitally Assisted, Participatory Upgrades of Society
Paper by Dirk Helbing et al: “The technological revolution, particularly the availability of more data and more powerful computational tools, has led to the emergence of a new scientific area called Computational Diplomacy. Our work focuses on a popular subarea of it. In recent years, there has been a surge of interest in using digital technologies to promote more participatory forms of democracy. While there are numerous potential benefits to using digital tools to enhance democracy, significant challenges must be addressed. It is essential to ensure that digital technologies are used in an accessible, equitable, and fair manner rather than reinforcing existing power imbalances. This paper investigates how digital tools can be used to help design more democratic societies by investigating three key research areas: (1) the role of digital technologies in facilitating civic engagement in collective decision-making; (2) the use of digital tools to improve transparency and accountability in governance; and (3) the potential for digital technologies to enable the formation of more inclusive and representative democracies. We argue that more research on how digital technologies can be used to support upgrades to democracy is needed, and we make some recommendations for future research in this direction…(More)”.
How Technology Companies Are Shaping the Ukraine Conflict
Article by Abishur Prakash: “Earlier this year, Meta, the company that owns Facebook and Instagram, announced that people could create posts calling for violence against Russia on its social media platforms. This was unprecedented. One of the world’s largest technology firms very publicly picked sides in a geopolitical conflict. Russia was now not just fighting a country but also multinational companies with financial stakes in the outcome. In response, Russia announced a ban on Instagram within its borders. The fallout was significant. The ban, which eventually included Facebook, cost Meta close to $2 billion.
Through the war in Ukraine, technology companies are showing how their decisions can affect geopolitics, which is a massive shift from the past. Technology companies have been either dragged into conflicts because of how customers were using their services (e.g., people putting their houses in the West Bank on Airbnb) or have followed the foreign policy of governments (e.g., SpaceX supplying Internet to Iran after the United States removed some sanctions)…(More)”.
Artificial intelligence in government: Concepts, standards, and a unified framework
Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI) and machine learning (ML) hold the promise of improving government. Given the advanced capabilities of AI applications, it is critical that these are embedded using standard operational procedures and clear epistemic criteria, and that they behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI systems may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full breadth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by using concept mapping to identify 107 different terms used in the multidisciplinary study of AI. We inductively sort these into three distinct semantic groups, which we label the (a) operational, (b) epistemic, and (c) normative domains. We then build on the results of this mapping exercise by proposing three new multifaceted concepts to study AI-based systems for government (AI-GOV) in an integrated, forward-looking way, which we call (1) operational fitness, (2) epistemic completeness, and (3) normative salience. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to reshape public administration with AI…(More)”.
Computing the News: Data Journalism and the Search for Objectivity
Book by Sylvain Parasie: “…examines how data journalists and news organizations have navigated the tensions between traditional journalistic values and new technologies. Offering an in-depth analysis of how computing has become part of the daily practices of journalists, this book proposes ways for journalism to evolve in order to serve democratic societies…(More)”.
Democratised and declassified: the era of social media war is here
Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience.
During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) was already highly democratised. Commercial satellite firms such as Maxar, and platforms such as Google Earth, provide high-resolution images free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user-generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed a change in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up.
Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed and crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea exactly what they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter carried thread after thread identifying what the pictures showed and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.
Could an algorithm predict the next pandemic?
Article by Simon Makin: “Leap is a machine-learning algorithm that uses sequence data to classify influenza viruses as either avian or human. The model had been trained on a huge number of influenza genomes — including examples of H5N8 — to learn the differences between those that infect people and those that infect birds. But the model had never seen an H5N8 virus categorized as human, and Carlson was curious to see what it made of this new subtype.
Somewhat surprisingly, the model identified it as human with 99.7% confidence. Rather than simply reiterating patterns in its training data, such as the fact that H5N8 viruses do not typically infect people, the model seemed to have inferred some biological signature of compatibility with humans. “It’s stunning that the model worked,” says Carlson. “But it’s one data point; it would be more stunning if I could do it a thousand more times.”
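The article does not describe Leap's internals, but the general approach it illustrates — turning genome sequences into numerical features and classifying them by host — can be sketched with a toy k-mer profile and nearest-centroid classifier. All sequences, labels and lengths below are invented for illustration; real models train on full influenza genomes with far richer architectures:

```python
from collections import Counter
from itertools import product

K = 3  # k-mer length; real feature sets are usually much larger
KMERS = ["".join(p) for p in product("ACGU", repeat=K)]

def kmer_vector(seq):
    """Normalized k-mer frequency vector for an RNA sequence."""
    counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    total = sum(counts.values()) or 1
    return [counts[k] / total for k in KMERS]

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(seq, centroids):
    """Assign seq to the class whose centroid its k-mer profile is closest to."""
    v = kmer_vector(seq)
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(v, centroids[label]))
    return min(centroids, key=dist)

# Toy training 'genomes' (entirely made up, far shorter than real ones).
avian = ["ACGUACGUGGCC", "ACGUACGUGGCA"]
human = ["UUGGCAUUGGCA", "UUGGCAUUGGCC"]
centroids = {"avian": centroid([kmer_vector(s) for s in avian]),
             "human": centroid([kmer_vector(s) for s in human])}

print(classify("UUGGCAUUGGCG", centroids))  # nearest to the human centroid
```

The interesting property the article highlights is generalization: a model like this is not looking up whether a subtype has infected people before, it is scoring how close a new sequence's statistical signature sits to each class it has learned.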
The zoonotic process of viruses jumping from wildlife to people causes most pandemics. As climate change and human encroachment on animal habitats increase the frequency of these events, understanding zoonoses is crucial to efforts to prevent pandemics, or at least to be better prepared.
Researchers estimate that around 1% of the mammalian viruses on the planet have been identified, so some scientists have attempted to expand our knowledge of this global virome by sampling wildlife. This is a huge task, but over the past decade or so, a new discipline has emerged — one in which researchers use statistical models and machine learning to predict aspects of disease emergence, such as global hotspots, likely animal hosts or the ability of a particular virus to infect humans. Advocates of such ‘zoonotic risk prediction’ technology argue that it will allow us to better target surveillance to the right areas and situations, and guide the development of vaccines and therapeutics that are most likely to be needed.
However, some researchers are sceptical of the ability of predictive technology to cope with the scale and ever-changing nature of the virome. Efforts to improve the models and the data they rely on are under way, but these tools will need to be a part of a broader effort if they are to mitigate future pandemics…(More)”.
Avert Bangladesh’s looming water crisis through open science and better data
Article by Augusto Getirana et al: “Access to data is a huge problem. Bangladesh collects a large amount of hydrological data, such as for stream flow, surface and groundwater levels, precipitation, water quality and water consumption. But these data are not readily available: researchers must seek out officials individually to gain access. India’s hydrological data can be similarly hard to obtain, preventing downstream Bangladesh from accurately predicting flows into its rivers.
Bilateral scientific collaboration between Bangladesh and water-sharing nations, including India, Nepal, Bhutan and China, would be mutually beneficial. The decades-long Mekong River Commission between Cambodia, Laos, Thailand and Vietnam is one successful transboundary agreement that could serve as a model.
Publishing hydrological data in an open-access database would be an exciting step. For now, however, the logistics, funding and politics to make on-the-ground data publicly available are likely to remain out of reach.
Fortunately, satellite data can help to fill the gaps. Current Earth-observing satellite missions, such as the Gravity Recovery and Climate Experiment (GRACE) Follow-On, the Global Precipitation Measurement (GPM) network, multiple radar altimeters and the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors make data freely available and can provide an overall picture of water availability across the country (this is what we used in many of our analyses). The picture is soon to improve. In December, NASA and CNES, France’s space agency, plan to launch the Surface Water and Ocean Topography (SWOT) satellite mission. SWOT will provide unprecedented information on global ocean and inland surface waters at fine spatial resolution, allowing for much more detailed monitoring of water levels than is possible today. The international scientific community has been working hard over the past 15 years to get ready to store, process and use SWOT data.
New open-science initiatives, particularly NASA’s Earth Information System, launched in 2021, can help by supporting the development of customized data-analysis and modelling tools (see go.nature.com/3cffbh9). The data we present here were acquired in this framework. We are currently working on an advanced hydrological model that will be capable of representing climate-change effects and human impacts on Bangladesh’s water availability. We expect that the co-development of such a modelling system with local partners will support decision-making.
SERVIR, a joint programme of NASA and the US Agency for International Development that focuses on capacity-building, could also help improve forecasting of severe weather for Bangladesh, for example. This could improve the flood monitoring and forecast system operated by the Bangladesh Water Development Board, which is limited in geographical scope — flooding is monitored only at specific locations, not across the country. Such efforts will help with short-term adaptation and emergency responses to flood conditions, and with long-term planning for infrastructure…(More)”.
Tech-Fuelled Inequality Could Catalyze Populism 2.0
Article by Kyle Hiebert: “Geopolitical crises, looming climate chaos and the relentless expansion of surveillance capitalism are driving the development of the technologies of tomorrow — artificial intelligence (AI), semiconductors, green energy, big data, advanced robotics, virtual and augmented reality, nanotechnology, quantum computing, the Internet of Things and more.
Each of these tools holds tremendous potential to improve lives and help solve the world’s biggest problems. But technological change always produces winners and losers by giving rise to new concentrations of power and novel forms of inequality. For example, a study by the US National Bureau of Economic Research found that between 50 and 70 percent of lost wages in America from 1980 to 2016 stemmed from automation — far more than from aggressive offshoring or the withering of labour unions.
However, automation has brought about new occupations as well, evidenced by the absence of mass joblessness in the United States over the same period. Despite the lost wages, the US unemployment rate from 1993 to 2019 stayed around or below seven percent when excluding a four-year recovery following the exogenous shock of the 2008 financial crisis. Productivity has also increased by nearly 62 percent since 1979. But the wages of American workers have failed to keep pace, rising only 17.5 percent, signalling a deep disconnect in recent history between new technologies being adopted and the average worker being better off as a result.
Similar circumstances across the democratic world have provoked severe political consequences over the past decade. Disaffected populations caught on the wrong side of economic transformations and alienated by the accompanying social changes spurred by globalization have aligned themselves with populists pitching simplistic solutions to complex problems. Polarization has skyrocketed; international cooperation has frayed.
As the so-called Fourth Industrial Revolution accelerates in coming years, upending how economies and societies operate, a new form of populism rooted in tech-fuelled disparities may eventually consume democratic nations. To avoid this outcome, governments must get serious about harnessing new technologies to make the democratic process more agile and responsive to voters’ frustrations…(More)”.
Can Social Media Rhetoric Incite Hate Incidents? Evidence from Trump’s “Chinese Virus” Tweets
Paper by Andy Cao, Jason M. Lindo & Jiee Zhong: “We investigate whether Donald Trump’s “Chinese Virus” tweets contributed to the rise of anti-Asian incidents. We find that the number of incidents spiked following Trump’s initial “Chinese Virus” tweets and the subsequent dramatic rise in internet search activity for the phrase. Difference-in-differences and event-study analyses leveraging spatial variation indicate that this spike in anti-Asian incidents was significantly more pronounced in counties that supported Donald Trump in the 2016 presidential election relative to those that supported Hillary Clinton. We estimate that anti-Asian incidents spiked by 4000 percent in Trump-supporting counties, over and above the spike observed in Clinton-supporting counties…(More)”.
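The paper's analysis is far richer (county-level panels and event-study designs), but the core 2×2 difference-in-differences logic it leverages can be shown with made-up numbers: compare the before/after change in "treated" counties against the same change in "control" counties, so that any nationwide spike cancels out:

```python
# Hypothetical mean weekly incident counts (invented numbers, not the paper's data).
# "treated" = Trump-supporting counties, "control" = Clinton-supporting counties.
pre  = {"treated": 2.0, "control": 1.5}   # before the tweets
post = {"treated": 9.0, "control": 3.5}   # after the tweets

# DiD estimate: change in treated counties minus change in control counties.
# The control-county change (3.5 - 1.5 = 2.0) absorbs the common, nationwide spike.
did = (post["treated"] - pre["treated"]) - (post["control"] - pre["control"])
print(did)  # 5.0 — the excess spike attributable to treated counties
```

This is what "over and above the spike observed in Clinton-supporting counties" means in the abstract: the estimate is the differential change, not the raw post-tweet increase.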