Surveys Provide Insight Into Three Factors That Encourage Open Data and Science


Article by Joshua Borycz, Alison Specht and Kevin Crowston: “Open Science is a game changer for researchers and the research community. The UNESCO Open Science recommendations in 2021 suggest that the practice of Open Science is a win-win for researchers as they gain from others’ work while making contributions, which in turn benefits the community, as transparency of conclusions and hence confidence in new knowledge improves.

Over a 10-year period, Carol Tenopir of DataONE and her team conducted a global survey of scientists, managers and government workers involved in broad environmental science activities about their willingness to share data and their opinion of the resources available to do so (Tenopir et al., 2011, 2015, 2018, 2020). Comparing the responses over that time shows a general increase in the willingness to share data (and thus engage in open science).

The most surprising result was that a higher willingness to share data corresponded with a decrease in satisfaction with data-sharing resources across nations (e.g., skills, tools, training) (Fig. 1). That is, researchers who did not want to share data were satisfied with the available resources, while those who did want to share data were dissatisfied. Researchers appear to discover that the tools are insufficient only when they begin the hard work of engaging in open science practices. This indicates that a cultural shift in the attitudes of researchers needs to precede the development of support and tools for data management…(More)”.

Fig. 1: Correlation between the factors of willingness to share and satisfaction with resources for data sharing for six groups of nations.
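
The sign of the relationship in Fig. 1 is easy to picture in code. The sketch below computes a Pearson correlation over hypothetical factor scores for six groups of nations; the numbers are invented for illustration and are not the survey's data.

```python
# Illustrative only: hypothetical factor scores for six groups of nations,
# not the data behind Fig. 1.
from scipy.stats import pearsonr

willingness  = [0.62, 0.71, 0.55, 0.80, 0.68, 0.74]
satisfaction = [0.58, 0.49, 0.66, 0.41, 0.52, 0.47]

r, p = pearsonr(willingness, satisfaction)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # a negative r mirrors the finding
```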

Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality


Paper by Fabrizio Dell’Acqua et al: “The public release of Large Language Models (LLMs) has sparked tremendous interest in how humans will use Artificial Intelligence (AI) to accomplish a variety of tasks. In our study conducted with Boston Consulting Group, a global management consulting firm, we examine the performance implications of AI on realistic, complex, and knowledge-intensive tasks. The pre-registered experiment involved 758 consultants comprising about 7% of the individual contributor-level consultants at the company. After establishing a performance baseline on a similar task, subjects were randomly assigned to one of three conditions: no AI access, GPT-4 AI access, or GPT-4 AI access with a prompt engineering overview. We suggest that the capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI. For each of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group). Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores. For a task selected to be outside the frontier, however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI. Further, our analysis shows the emergence of two distinctive patterns of successful AI use by humans along a spectrum of human-AI integration. One set of consultants acted as “Centaurs,” like the mythical half-horse/half-human creature, dividing and delegating their solution-creation activities to the AI or to themselves. Another set of consultants acted more like “Cyborgs,” completely integrating their task flow with the AI and continually interacting with the technology…(More)”.

Citizens call for sufficiency and regulation — A comparison of European citizen assemblies and National Energy and Climate Plans


Paper by Jonas Lage et al: “There is a growing body of scientific evidence supporting sufficiency as an inevitable strategy for mitigating climate change. Despite this, sufficiency plays a minor role in existing climate and energy policies. Following previous work on the National Energy and Climate Plans of EU countries, we conduct a similar content analysis of the recommendations made by citizen assemblies on climate change mitigation in ten European countries and the EU, and compare the results of these studies. Citizen assemblies are representative mini-publics and enjoy a high level of legitimacy.

We identify a total of 860 mitigation policy recommendations in the citizen assemblies’ documents, of which 332 (39 %) include sufficiency. Most of the sufficiency policies relate to the mobility sector; the fewest relate to the buildings sector. Regulatory instruments are the most frequently proposed means for achieving sufficiency, followed by fiscal and economic instruments. The average approval rate of sufficiency policies is high (93 %), with the highest rates for regulatory policies.
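
The headline share is easy to verify with a back-of-envelope check (a sketch, not the authors' coding pipeline):

```python
# Back-of-envelope check of the share reported above (illustrative only).
total_recommendations = 860
sufficiency_recommendations = 332

share = sufficiency_recommendations / total_recommendations
print(f"Sufficiency share: {share:.0%}")  # -> Sufficiency share: 39%
```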

Compared to National Energy and Climate Plans, the citizen assembly recommendations include a significantly higher share of sufficiency policies (factor three to six) with a stronger focus on regulatory policies. Consequently, the recommendations can be interpreted as a call for a sufficiency turn and a regulatory turn in climate mitigation politics. These results suggest that the observed lack of sufficiency in climate policy making is not due to a lack of legitimacy, but rather reflects a reluctance to implement sufficiency policies, the constitution of the policy making process and competing interests…(More)”.

Artificial intelligence in local governments: perceptions of city managers on prospects, constraints and choices


Paper by Tan Yigitcanlar, Duzgun Agdas & Kenan Degirmenci: “Highly sophisticated capabilities of artificial intelligence (AI) have skyrocketed its popularity across many industry sectors globally. The public sector is one of these. Many cities around the world are trying to position themselves as leaders of urban innovation through the development and deployment of AI systems. Likewise, increasing numbers of local government agencies are attempting to utilise AI technologies in their operations to deliver policy and generate efficiencies in highly uncertain and complex urban environments. While the popularity of AI is on the rise in urban policy circles, there is limited understanding of, and a lack of empirical studies on, city managers’ perceptions concerning urban AI systems. Bridging this gap is the rationale of this study. The methodological approach adopted in this study is twofold. First, the study collects data through semi-structured interviews with city managers from Australia and the US. Then, the study analyses the data using the summative content analysis technique with two data analysis software packages. The analysis identifies the following themes and generates insights into local government services: AI adoption areas, cautionary areas, challenges, effects, impacts, knowledge basis, plans, preparedness, roadblocks, technologies, deployment timeframes, and usefulness. The study findings inform city managers in their efforts to deploy AI in their local government operations, and offer directions for prospective research…(More)”.
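
For readers unfamiliar with the method, summative content analysis begins by counting occurrences of key terms in the material before interpreting them in context. Below is a minimal sketch of that counting step; the transcripts and theme keywords are invented for illustration, and the study itself used dedicated analysis software.

```python
# Minimal sketch of the counting step of summative content analysis.
# Transcripts and theme keywords are invented for illustration.
from collections import Counter
import re

transcripts = [
    "AI adoption in permitting shows promise, but roadblocks remain around budgets.",
    "Our preparedness is low; deployment timeframes depend on staff skills.",
]

themes = {"adoption", "roadblocks", "preparedness", "deployment", "skills"}

counts = Counter()
for text in transcripts:
    tokens = re.findall(r"[a-z]+", text.lower())
    counts.update(token for token in tokens if token in themes)

print(counts)  # term frequencies feed the later interpretive step
```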

Data Commons


Paper by R. V. Guha et al: “Publicly available data from open sources (e.g., United States Census Bureau (Census), World Health Organization (WHO), Intergovernmental Panel on Climate Change (IPCC)) are vital resources for policy makers, students and researchers across different disciplines. Combining data from different sources requires the user to reconcile the differences in schemas, formats, assumptions, and more. This data wrangling is time consuming, tedious and needs to be repeated by every user of the data. Our goal with Data Commons (DC) is to help make public data accessible and useful to those who want to understand this data and use it to address societal challenges and opportunities. We do the data processing and make the processed data widely available via standard schemas and Cloud APIs. Data Commons is a distributed network of sites that publish data in a common schema and interoperate using the Data Commons APIs. Data from different Data Commons can be ‘joined’ easily. The aggregate of these Data Commons can be viewed as a single Knowledge Graph. This Knowledge Graph can then be searched over using Natural Language questions utilizing advances in Large Language Models. This paper describes the architecture of Data Commons, some of the major deployments and highlights directions for future work…(More)”.
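
As a concrete illustration of what “standard schemas and Cloud APIs” can look like in practice, here is a minimal sketch using the open-source datacommons Python client. The function names and the statistical-variable identifier reflect my understanding of that client and should be checked against the current documentation; the API key is a placeholder.

```python
# A sketch of querying the Data Commons Knowledge Graph via its Python
# client; verify function names against the current documentation.
import datacommons as dc

dc.set_api_key("YOUR_API_KEY")  # placeholder; obtain a real key from Data Commons

# Fetch a population time series for California (DCID 'geoId/06'),
# already reconciled into the common schema, so no manual wrangling.
series = dc.get_stat_series("geoId/06", "Count_Person")
latest = max(series)  # date keys sort lexicographically, e.g. '2022'
print(latest, series[latest])
```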

Evidence-based policymaking in the legislatures


Blog by Ville Aula: “Evidence-based policymaking is a popular approach to policy that has received widespread public attention during the COVID-19 pandemic, as well as in the fight against climate change. It argues that policy choices based on rigorous, preferably scientific evidence should be given priority over choices based on other types of justification. However, delegating policymaking solely to researchers goes against the idea that policies are determined democratically.

In my recent article published in Policy & Politics, “Evidence-based policymaking in the legislatures”, we explored the tension between politics and evidence in national legislatures. While evidence-based policymaking has been extensively studied within governments, the legislative arena has received much less attention. The focus of the study was on understanding how legislators, legislative committees, and political parties together shape the use of evidence. We also wanted to explore how the interviewees understand the timeliness and relevance of evidence, because lack of time is a key challenge within legislatures. The study is based on 39 interviews with legislators, party employees, and civil servants in Eduskunta, the national Parliament of Finland.

Our findings show that, in Finland, political parties play a key role in collecting, processing, and brokering evidence within legislatures. Finnish political parties maintain detailed policy programmes that guide their work in the legislature. The programmes are often based on extensive consultations with the party’s expert networks and evidence collection from key stakeholders. Political parties are not ready to review these programmes every time new evidence is offered to them. This reluctance can give the appearance that parties do not want to follow evidence. Nevertheless, reluctance is often necessary for political parties to maintain stable policy platforms while navigating uncertainty amidst competing sources of evidence. Party positions can be based on extensive evidence and expertise even if some other sources of evidence contradict them.

Partisan expert networks, and the policy experts employed by political parties in particular, appear to be crucial in formulating the evidence base of policy programmes. The findings suggest that these groups should be a new target audience for evidence brokering. Yet political parties, their employees, and their networks have rarely been considered in research on evidence-based policymaking.

Turning to the question of timeliness, we found, as expected, that the use of evidence in the Parliament of Finland is driven by short-term reactiveness. However, we also found that short-term reactiveness and the notion of timeliness can refer to time windows ranging from months to weeks and, sometimes, merely days. The common recommendation by policy scholars to boost uptake of evidence by making it timely and relevant is therefore far from simple…(More)”.

The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments


Paper by Tzuhao Chen, Mila Gascó-Hernandez, and Marc Esteve: “Although the use of artificial intelligence (AI) chatbots in public organizations has increased in recent years, three crucial gaps remain unresolved. First, little empirical evidence has been produced to examine the deployment of chatbots in government contexts. Second, existing research does not distinguish clearly between the drivers of adoption and the determinants of success and, therefore, between the stages of adoption and implementation. Third, most current research does not use a multidimensional perspective to understand the adoption and implementation of AI in government organizations. Our study addresses these gaps by exploring the following question: what determinants facilitate or impede the adoption and implementation of chatbots in the public sector? We answer this question by analyzing 22 state agencies across the U.S.A. that use chatbots. Our analysis identifies ease of use and relative advantage of chatbots, leadership and innovative culture, external shock, and individual past experiences as the main drivers of the decisions to adopt chatbots. Further, it shows that different types of determinants (such as knowledge-base creation and maintenance, technology skills and system crashes, human and financial resources, cross-agency interaction and communication, confidentiality and safety rules and regulations, citizens’ expectations, and the COVID-19 crisis) affect the adoption and implementation processes differently and, therefore, determine the success of chatbots in different ways. Future research could focus on the interaction among different types of determinants for both adoption and implementation, as well as on the role of specific stakeholders, such as IT vendors…(More)”.

Governing the informed city: examining local government strategies for information production, consumption and knowledge sharing across ten cities


Paper by Katrien Steenmans et al: “Cities are more and more embedded in information flows, and their policies increasingly call for assessment frameworks to understand the impact of the systems of knowledge underpinning local government. Encouraging a more systemic view of the data politics of the urban age, this paper investigates the information ecosystem in which local governments are embedded. Seeking to go beyond the ‘smart city’ paradigm into a more overt discussion of the structures of information-driven urban governance, it offers a preliminary assessment across ten case studies (Barcelona, Bogotá, Chicago, London, Medellín, Melbourne, Mexico City, Mumbai, Seoul and Warsaw). It illustrates how actors both internal and external to local government are deeply involved throughout information mobilization processes, though in different capacities and to different extents, and how the impact of many of these actors is still not commonly assessed and/or leveraged by cities. Seeking to encourage more systematic analysis of the governance of knowledge collection, dissemination, analysis, and use in cities, the paper advocates for an ‘ecosystem’ view of the emerging ‘informed cities’ paradigm…(More)”.

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training were done in Microsoft’s Asian data centers, but such information has been kept a secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
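
To make the accounting concrete, the kind of estimate the paper formalizes can be sketched as on-site cooling water plus off-site water embedded in electricity generation, each scaled by the energy a training run consumes. The sketch below is a simplification of mine; the WUE/EWIF coefficients and the energy figure are assumptions, not the authors' measurements.

```python
# Simplified sketch (my assumptions, not the paper's exact methodology):
# water footprint = on-site cooling water + off-site water from electricity.
def training_water_footprint_liters(energy_kwh: float,
                                    wue_onsite: float = 0.55,   # L/kWh, assumed
                                    ewif_offsite: float = 3.1   # L/kWh, assumed
                                    ) -> float:
    onsite = energy_kwh * wue_onsite      # evaporated by data-center cooling
    offsite = energy_kwh * ewif_offsite   # consumed by power generation
    return onsite + offsite

# With an assumed ~1.3 GWh training run, the on-site term alone is roughly
# 0.7 million liters, the order of magnitude quoted above.
print(f"{training_water_footprint_liters(1.3e6):,.0f} L in total")
```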

Incentivising open ecological data using blockchain technology


Paper by Robert John Lewis, Kjell-Erik Marstein & John-Arvid Grytnes: “Mindsets concerning data as proprietary are common, especially where data production is resource intensive. Fears of competing research in concert with loss of exclusivity to hard-earned data are pervasive. This is for good reason, given that current reward structures in academia focus overwhelmingly on journal prestige and high publication counts, and not on accredited publication of open datasets. There is also a reluctance among researchers to cede control to centralised repositories, citing concerns over the lack of trust and transparency in the way complex data are used and interpreted.

To begin to resolve these cultural and sociological constraints to open data sharing, we as a community must recognise that top-down pressure from policy alone is unlikely to improve the state of ecological data availability and accessibility. Open data policy is almost ubiquitous (e.g. the Joint Data Archiving Policy (JDAP), http://datadryad.org/pages/jdap), and while cyber-infrastructures are becoming increasingly extensive, most have coevolved with sub-disciplines utilising high-velocity, born-digital data (e.g. remote sensing, automated sensor networks and citizen science). Consequently, they do not always offer technological solutions that ease data collation, standardisation, management and analytics, nor provide a good fit culturally to research communities working among the long tail of ecological science, i.e. science conducted by many individual researchers/teams over limited spatial and temporal scales. Given that the majority of scientific funding is spent on this type of dispersed research, there is a surprisingly large disconnect between the vast majority of ecological science and the cyber-infrastructures supporting open data mandates, offering a possible explanation for why primary ecological data are reportedly difficult to find…(More)”.