Incentivising open ecological data using blockchain technology


Paper by Robert John Lewis, Kjell-Erik Marstein & John-Arvid Grytnes: “Mindsets concerning data as proprietary are common, especially where data production is resource intensive. Fears of competing research, in concert with loss of exclusivity to hard-earned data, are pervasive. This is for good reason, given that current reward structures in academia focus overwhelmingly on journal prestige and high publication counts, not on the accredited publication of open datasets. Then there is the reluctance of researchers to cede control to centralised repositories, citing a lack of trust and transparency in the way complex data are used and interpreted.

To begin to resolve these cultural and sociological constraints to open data sharing, we as a community must recognise that top-down pressure from policy alone is unlikely to improve the state of ecological data availability and accessibility. Open data policy is almost ubiquitous (e.g. the Joint Data Archiving Policy (JDAP), http://datadryad.org/pages/jdap) and while cyber-infrastructures are becoming increasingly extensive, most have coevolved with sub-disciplines utilising high-velocity, born-digital data (e.g. remote sensing, automated sensor networks and citizen science). Consequently, they do not always offer technological solutions that ease data collation, standardisation, management and analytics, nor a good cultural fit for research communities working in the long tail of ecological science, i.e. science conducted by many individual researchers/teams over limited spatial and temporal scales. Given that the majority of scientific funding is spent on this type of dispersed research, there is a surprisingly large disconnect between the vast majority of ecological science and the cyber-infrastructures to support open data mandates, offering a possible explanation for why primary ecological data are reportedly difficult to find…(More)”.

The Importance of Purchase to Plate Data


Blog by Andrea Carlson and Thea Palmer Zimmerman: “…Because there can be economic and social barriers to maintaining a healthy diet, USDA promotes Food and Nutrition Security so that everyone has consistent and equitable access to healthy, safe, and affordable foods that promote optimal health and well-being. A set of data tools called the Purchase to Plate Suite (PPS) supports these goals by enabling the update of the Thrifty Food Plan (TFP), which estimates how much a budget-conscious family of four needs to spend on groceries to ensure a healthy diet. The TFP market basket – consisting of the specific amounts of various food categories required by the plan – forms the basis of the maximum allotment for the Supplemental Nutrition Assistance Program (SNAP, formerly known as the “Food Stamps” program), which provided financial support towards the cost of groceries for over 41 million individuals in almost 22 million households in fiscal year 2022.

The 2018 Farm Act (Agriculture Improvement Act of 2018) requires that USDA reevaluate the TFP every five years using current food composition, consumption patterns, dietary guidance, and food prices, and using approved scientific methods. USDA’s Economic Research Service (ERS) was charged with estimating current food prices using retail food scanner data (Levin et al. 2018; Muth et al. 2016) and utilized the PPS for this task. The most recent TFP update was released in August 2021, and the revised cost of the market basket was the first non-inflation-related increase in SNAP benefits in over 40 years (US Department of Agriculture 2021).

The PPS combines datasets to enhance research related to the economics of food and nutrition. There are four primary components of the suite:

  • Purchase to Plate Crosswalk (PPC),
  • Purchase to Plate Price Tool (PPPT),
  • Purchase to Plate National Average Prices (PP-NAP) for the National Health and Nutrition Examination Survey (NHANES), and
  • Purchase to Plate Ingredient Tool (PPIT)…(More)”.
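
For readers less familiar with this kind of data linkage, the sketch below illustrates, very roughly, the sort of join the crosswalk component makes possible: matching retail scanner purchases to USDA food codes and deriving a purchase-weighted average price per 100 grams. This is a hypothetical illustration only; the column names, codes, and values are invented, and the actual ERS methodology additionally handles edible-weight conversions, sampling weights, and much more.

```python
# Hypothetical sketch of the kind of linkage the Purchase to Plate Crosswalk
# enables: matching retail scanner purchases to USDA food codes, then deriving
# an average price per 100 grams. Column names, codes, and values are invented.
import pandas as pd

# Retail scanner purchases (what shoppers actually bought).
purchases = pd.DataFrame({
    "upc":             ["0001", "0002", "0003", "0004"],
    "spend_dollars":   [3.49, 2.10, 5.25, 4.80],
    "grams_purchased": [907, 454, 1360, 907],
})

# Crosswalk from retail product codes to USDA food codes (the PPC's role).
crosswalk = pd.DataFrame({
    "upc":            ["0001", "0002", "0003", "0004"],
    "usda_food_code": ["11111000", "11111000", "75100250", "75100250"],
    "food_category":  ["milk, whole", "milk, whole", "sweet potato", "sweet potato"],
})

linked = purchases.merge(crosswalk, on="upc", how="inner")

# Purchase-weighted average price per 100 g for each USDA food code, the sort
# of figure that feeds national average price estimates.
prices = (
    linked.groupby(["usda_food_code", "food_category"], as_index=False)
          .agg(total_spend=("spend_dollars", "sum"),
               total_grams=("grams_purchased", "sum"))
)
prices["price_per_100g"] = 100 * prices["total_spend"] / prices["total_grams"]
print(prices)
```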

Private sector access to public sector personal data: exploring data value and benefit sharing


Literature review for the Scottish Government: “The aim of this review is to enable the Scottish Government to explore the issues relevant to the access of public sector personal data (as defined by the European Union General Data Protection Regulation, GDPR) with or by the private sector in publicly trusted ways, to unlock the public benefit of this data. This literature review will specifically enable the Scottish Government to establish whether there are

(I) models/approaches of costs/benefits/data value/benefit-sharing, and

(II) intellectual property rights or royalties schemes regarding the use of public sector personal data with or by the private sector both in the UK and internationally.

In conducting this literature review, we used an adapted systematic review, and undertook thematic analysis of the included literature to answer several questions central to the aim of this research. Such questions included:

  • Are there any models of costs and/or benefits regarding the use of public sector personal data with or by the private sector?
  • Are there any models of valuing data regarding the use of public sector personal data with or by the private sector?
  • Are there any models for benefit-sharing in respect of the use of public sector personal data with or by the private sector?
  • Are there any models in respect of the use of intellectual property rights or royalties regarding the use of public sector personal data with or by the private sector?…(More)”.

Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism


Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.

While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.

In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.

However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replace them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so that it is a tool that helps communities rather than harms them.

By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.

What can harnessing ‘positive deviance’ methods do for food security?


Article by Katrina J. Lane: “What the researchers identified in Niger, in this case, is known as “positive deviance”. It’s a concept that originated in 1991 during a nutrition program in Vietnam run by Save the Children. Instead of focusing on the population level, project managers studied outliers in the system — children who were healthier than their peers despite sharing similar circumstances, and then looked at what the parents of these children did differently.

Once the beneficial practices were identified — in this case, that included collecting wild foods, such as crab, shrimp, and sweet potato tops for their children — they encouraged mothers to tell other parents. Through this outlier-centric approach, the project was able to reduce malnourishment by 74%.

“The positive deviance approach assumes that in every community there are individuals or groups that develop uncommon behaviors or practices which help them cope better with the challenges they face than their peers,” said Boy.

It’s important to be respectful and acknowledge success stories already present in systems, added Duncan Green, a strategic adviser for Oxfam and a professor in practice in international development at the London School of Economics.

Positive deviance emphasizes the benefit of identifying and amplifying these “deviant behaviors”, as they hold the potential to generate scalable solutions that can benefit the entire community.

It can be broken down into three steps: first, identifying high-performing individuals or groups within a challenging context; next, investigating within the community, through in-person interviews, group discussions, and questionnaires, to find out what their behaviors and practices are; and finally, encouraging these solutions to spread throughout the community.

In the final stage, the approach relies on community-generated solutions — which Green explains are more likely to propagate and be engaged with…(More)”.
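
As a loose sketch of the first step of this approach, identifying outliers who fare better than peers in comparable circumstances, the example below flags candidate "positive deviants" in a small synthetic dataset. The variables, threshold, and data are all hypothetical; in practice, identification and follow-up rely on fieldwork, interviews, and community discussion rather than a single statistic.

```python
# Illustrative sketch of the first step of positive deviance: flag households
# whose outcome is unusually good relative to peers facing similar
# circumstances. Data, variables, and thresholds are synthetic/hypothetical.
import pandas as pd

households = pd.DataFrame({
    "village":        ["A", "A", "A", "B", "B", "B", "B"],
    "income_bracket": ["low", "low", "low", "low", "low", "low", "low"],
    "child_weight_for_age_z": [-1.8, -1.6, 0.4, -2.1, -1.9, -0.2, -1.7],
})

# Compare each household with its peer group (same village and income bracket).
grp = households.groupby(["village", "income_bracket"])["child_weight_for_age_z"]
households["peer_mean"] = grp.transform("mean")
households["peer_std"] = grp.transform("std")
households["deviance_score"] = (
    (households["child_weight_for_age_z"] - households["peer_mean"])
    / households["peer_std"]
)

# Households doing markedly better than peers are the candidates whose
# practices would then be studied through interviews and group discussions.
positive_deviants = households[households["deviance_score"] > 1.0]
print(positive_deviants)
```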

Advancing Environmental Justice with AI


Article by Justina Nixon-Saintil: “Given its capacity to innovate climate solutions, the technology sector could provide the tools we need to understand, mitigate, and even reverse the damaging effects of global warming. In fact, addressing longstanding environmental injustices requires these companies to put the newest and most effective technologies into the hands of those on the front lines of the climate crisis.

Tools that harness the power of artificial intelligence, in particular, could offer unprecedented access to accurate information and prediction, enabling communities to learn from and adapt to climate challenges in real time. The IBM Sustainability Accelerator, which we launched in 2022, is at the forefront of this effort, supporting the development and scaling of projects such as the Deltares Aquality App, an AI-powered tool that helps farmers assess and improve water quality. As a result, farmers can grow crops more sustainably, prevent runoff pollution, and protect biodiversity.

Consider also the challenges that smallholder farmers face, such as rising costs, the difficulty of competing with larger producers that have better tools and technology, and, of course, the devastating effects of climate change on biodiversity and weather patterns. Accurate information, especially about soil conditions and water availability, can help them address these issues, but has historically been hard to obtain…(More)”.

Unlocking the value of supply chain data across industries


MIT Technology Review Insights: “The product shortages and supply-chain delays of the global covid-19 pandemic are still fresh memories. Consumers and industry are concerned that the next geopolitical climate event may have a similar impact. Against a backdrop of evolving regulations, these conditions mean manufacturers want to be prepared for short supplies, concerned customers, and weakened margins.

For supply chain professionals, achieving a “phygital” information flow—the blending of physical and digital data—is key to unlocking resilience and efficiency. As physical objects travel through supply chains, they generate a rich flow of data about the item and its journey—from its raw materials, its manufacturing conditions, even its expiration date—bringing new visibility and pinpointing bottlenecks.

This phygital information flow offers significant advantages, from enhancing the ability to create rich customer experiences to satisfying environmental, social, and corporate governance (ESG) goals. In a 2022 EY global survey of executives, 70% of respondents agreed that a sustainable supply chain will increase their company’s revenue.

For disparate parties to exchange product information effectively, they require a common framework and universally understood language. Among supply chain players, data standards create a shared foundation. Standards help uniquely identify, accurately capture, and automatically share critical information about products, locations, and assets across trading communities…(More)”.
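
As an illustration of what such standardized, machine-shareable product information can look like, the sketch below models a single traceability event using GS1-style identifiers (a GTIN for the product, a GLN for the location). The field names and values are illustrative assumptions, not a formal implementation of any particular standard such as EPCIS.

```python
# A minimal sketch (not an actual EPCIS implementation) of how standardized
# identifiers let trading partners describe the same physical event consistently.
# GTIN and GLN are real GS1 identifier types; the field names and values here
# are illustrative only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TraceEvent:
    gtin: str          # Global Trade Item Number identifying the product
    batch: str         # batch/lot number linking back to manufacturing conditions
    gln: str           # Global Location Number identifying where the event happened
    event_type: str    # e.g. "shipping", "receiving"
    timestamp: str     # ISO 8601, so every partner parses it the same way
    expiry_date: str   # supports downstream freshness and recall checks

event = TraceEvent(
    gtin="09506000134352",
    batch="LOT-2024-0017",
    gln="9506000111111",
    event_type="shipping",
    timestamp=datetime.now(timezone.utc).isoformat(),
    expiry_date="2025-03-01",
)

# Because every field follows a shared standard, this record can be captured
# once where the physical event happens and interpreted automatically by any
# partner downstream.
print(json.dumps(asdict(event), indent=2))
```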

AI and new standards promise to make scientific data more useful by making it reusable and accessible


Article by Bradley Wade Bishop: “…AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.
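
To make the contrast between minimal and complete metadata concrete, the sketch below compares a barely described dataset with one carrying standardized, Dublin Core-style elements of the kind named above (creator, date, coverage, subject). The record contents, identifiers, and required-element list are hypothetical; this is a toy completeness check, not a FAIR validator.

```python
# A hedged illustration of "minimal" versus more complete, standardized
# metadata, using Dublin Core-style elements like those named in the article.
# The dataset, identifiers, and required-element list are hypothetical.
REQUIRED_ELEMENTS = {"title", "creator", "date", "subject", "coverage",
                     "license", "identifier"}

minimal_record = {
    "title": "stream_temps_final_v2.xlsx",
}

standardized_record = {
    "title": "Daily stream temperature, Example Creek, 2019-2023",
    "creator": "Example University Hydrology Lab",
    "date": "2024-01-15",
    "subject": ["stream temperature", "hydrology"],
    "coverage": "Example Creek watershed, 2019-01-01/2023-12-31",
    "license": "CC-BY-4.0",
    "identifier": "https://doi.org/10.xxxx/example",  # placeholder DOI
}

def missing_elements(record: dict) -> set:
    """Return the standard elements a record still lacks (a rough completeness check)."""
    return REQUIRED_ELEMENTS - set(record)

print("minimal record is missing:", missing_elements(minimal_record))
print("standardized record is missing:", missing_elements(standardized_record))
```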

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI while removing as many biases as possible. The automation of these mental shortcuts should reduce the time-consuming chore of locating the right data…(More)”.

Toward a 21st Century National Data Infrastructure: Enhancing Survey Programs by Using Multiple Data Sources


Report by National Academies of Sciences, Engineering, and Medicine: “Much of the statistical information currently produced by federal statistical agencies – information about economic, social, and physical well-being that is essential for the functioning of modern society – comes from sample surveys. In recent years, there has been a proliferation of data from other sources, including data collected by government agencies while administering programs, satellite and sensor data, private-sector data such as electronic health records and credit card transaction data, and massive amounts of data available on the internet. How can these data sources be used to enhance the information currently collected on surveys, and to provide new frontiers for producing information and statistics to benefit American society?…(More)”.

How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations


Paper by Thomas Cantens: “…discusses the implications of Generative Artificial Intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and ‘numerical’ AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries.

At the organizational level, the advantages of GAI include cost reduction through the internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) GAI has so far been less associated with decision-making processes in public administrations than specialized AI; ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI.

At the individual level, the paper emphasizes the “language-centric” nature of GAI in contrast to the “number-centric” AI systems implemented within public administrations up until now. It explores the transformative impact of GAI on the intellectual production of the State and discusses the risks of civil servants being replaced by, or enslaved to, the machines. The paper argues for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature…(More)”.