The Power of Supercitizens


Blog by Brian Klaas: “Lurking among us, there is a group of hidden heroes: people who routinely devote significant amounts of their time, energy, and talent to making our communities better. These are the devoted, do-gooding, elite one percent. Most, but not all, are volunteers. All are selfless altruists. They, the supercitizens, provide some of the stickiness in the social glue that holds us together.

What if I told you that there’s this little trick you can do that makes your community stronger, helps other people, makes you happier, and helps you live longer? Well, it exists, there’s ample evidence it works, and best of all, it’s free.

Recently published research showcases a convincing causal link between these supercitizens—devoted, regular volunteers—and social cohesion. While such an umbrella term means a million different things, these researchers focused on two UK-based surveys that analyzed three facets of social cohesion, measured through eight questions (respondents answered on a five-point scale, ranging from strongly disagree to strongly agree). The three facets were:


Neighboring

  • ‘If I needed advice about something I could go to someone in my neighborhood’;
  • ‘I borrow things and exchange favors with my neighbors’; and
  • ‘I regularly stop and talk with people in my neighborhood’

Psychological sense of community

  • ‘I feel like I belong to this neighborhood’;
  • ‘The friendships and associations I have with other people in my neighborhood mean a lot to me’;
  • ‘I would be willing to work together with others on something to improve my neighborhood’; and
  • ‘I think of myself as similar to the people that live in this neighborhood’

Attraction to the neighborhood

  • ‘I plan to remain a resident of this neighborhood for a number of years’

While these questions only tap into some specific components of social cohesion, high levels of these ingredients are likely to produce a reliable recipe for a healthy local community. (Social cohesion differs from social capital, popularized by Robert Putnam and his book, Bowling Alone. Social capital tends to focus on links between individuals and groups—are you a joiner or more of a loner?—whereas cohesion refers to a more diffuse sense of community, belonging, and neighborliness)…(More)”.
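
For readers who want to see how the eight items roll up into the three facets, here is a minimal scoring sketch in Python. It is an illustration only: the item keys and the per-facet averaging are assumptions, since the excerpt does not describe the researchers’ actual scoring procedure.

```python
# Illustrative sketch: eight five-point Likert items grouped into the three facets
# of social cohesion described above. The per-facet mean is an assumed aggregation,
# not necessarily the one used in the underlying studies.
from statistics import mean

FACETS = {
    "neighboring": ["advice", "favors", "stop_and_talk"],
    "sense_of_community": ["belonging", "friendships", "willing_to_work", "similarity"],
    "attraction": ["plan_to_remain"],
}

def facet_scores(responses: dict) -> dict:
    """Average the 1-5 responses within each facet."""
    return {facet: round(mean(responses[item] for item in items), 2)
            for facet, items in FACETS.items()}

# Example respondent (1 = strongly disagree ... 5 = strongly agree)
respondent = {"advice": 4, "favors": 3, "stop_and_talk": 5, "belonging": 4,
              "friendships": 5, "willing_to_work": 4, "similarity": 3,
              "plan_to_remain": 5}
print(facet_scores(respondent))  # per-facet averages for this respondent
```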

The Power of Volunteers: Remote Mapping Gaza and Strategies in Conflict Areas


Blog by Jessica Pechmann: “…In Gaza, increased conflict since October 2023 has caused a prolonged humanitarian crisis. Understanding the impact of the conflict on buildings has been challenging, since pre-existing datasets from artificial intelligence and machine learning (AI/ML) models and OSM were not accurate enough to create a full building footprint baseline. The area’s buildings were too dense, and information on the ground was impossible to collect safely. In these hard-to-reach areas, HOT’s remote and crowdsourced mapping methodology was a good fit for collecting detailed information visible on aerial imagery.

In February 2024, after consultation with humanitarian and UN actors working in Gaza, HOT decided to create a pre-conflict dataset of all building footprints in the area in OSM. HOT’s community of OpenStreetMap volunteers did all the data work, coordinating through HOT’s Tasking Manager. The volunteers made meticulous edits to add missing data and to improve existing data. Due to protection and data quality concerns, only expert volunteer teams were assigned to map and validate the area. As in other areas that are hard to reach due to conflict, HOT balanced the data needs with responsible data practices based on the context.

Comparing AI/ML with human-verified OSM building datasets in conflict zones

AI/ML is becoming an increasingly common and quick way to obtain building footprints across large areas. Sources for automated building footprints range from worldwide datasets by Microsoft or Google to smaller-scale open community-managed tools such as HOT’s new application, fAIr.

Now that HOT volunteers have completely updated and validated all OSM buildings visible in pre-conflict imagery, OSM has 18% more individual buildings in the Gaza Strip than Microsoft’s ML buildings dataset (an estimated 330,079 buildings vs 280,112 buildings). However, in contexts where there has not been a coordinated update effort in OSM, the numbers may differ. For example, in Sudan, where there has not been a large organized editing campaign, there are just under 1,500,000 buildings in OSM, compared to over 5,820,000 buildings in Microsoft’s ML data. It is important to note that the ML datasets have not been human-verified and their accuracy is not known. Google Open Buildings has over 26 million building features in Sudan, but on visual inspection, many of these features are noise in the data that the model incorrectly identified as buildings in the uninhabited desert…(More)”.
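
As a rough illustration of how such counts can be reproduced, the sketch below queries the public Overpass API for OSM objects tagged as buildings and compares the total against an ML-derived figure. It is a hedged sketch, not HOT’s validation workflow: the area name used in the query and the hard-coded Microsoft count (taken from the excerpt) are illustrative assumptions.

```python
# Minimal sketch: count OSM buildings in an area via the Overpass API and compare with
# an ML-derived figure. The area name and the hard-coded ML count are assumptions for
# illustration; this is not HOT's actual validation workflow.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
QUERY = """
[out:json][timeout:180];
area["name:en"="Gaza Strip"]->.a;
( way["building"](area.a); relation["building"](area.a); );
out count;
"""

def osm_building_count() -> int:
    resp = requests.post(OVERPASS_URL, data={"data": QUERY}, timeout=300)
    resp.raise_for_status()
    # With "out count;", Overpass returns a single element of type "count"
    counts = [e for e in resp.json()["elements"] if e["type"] == "count"]
    return int(counts[0]["tags"]["total"])

ml_count = 280_112  # Microsoft ML buildings figure quoted above
osm_count = osm_building_count()
print(f"OSM: {osm_count:,}  ML: {ml_count:,}  "
      f"difference: {100 * (osm_count - ml_count) / ml_count:+.1f}%")
```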

Under which conditions can civic monitoring be admitted as a source of evidence in courts?


Blog by Anna Berti Suman: “The ‘Sensing for Justice’ (SensJus) research project – running between 2020 and 2023 – explored how people use monitoring technologies, or just their senses, to gather evidence of environmental issues and claim environmental justice in a variety of fora. Among other research lines, we looked at successful and failed cases of civic-gathered data introduced in courts. The guiding question was: what are the enabling factors and/or barriers for the introduction of civic evidence in environmental litigation?

Civic environmental monitoring is the use by ordinary people of monitoring devices (e.g., a sensor) or their bare senses (e.g., smell, hearing) to detect environmental issues. It can be regarded as a form of reaction to environmental injustices, a form of political contestation through data, and even a form of collective care. The practice is growing fast, especially thanks to the widespread availability of audio- and video-recording devices in the hands of diverse publics, but also due to increasing public literacy and concern about environmental matters.

Civic monitoring can be a powerful source of evidence for law enforcement, especially when it sheds light on official information gaps stemming from public agencies’ limited resources to detect environmental wrongdoing. Legal scholars and practitioners, as well as civil society organizations and institutional actors, should pay close attention to the practice and its potential applications.

Among the cases explored for the SensJus project, the Formosa case, Texas, United States, stands out as it sets a key precedent: issued in June 2019, the landmark ruling found a Taiwanese petrochemical company liable for violating the US Clean Water Act, mostly on the basis of citizen-collected evidence involving volunteer observations of plastic contamination over years. The contamination could not be proven through existing data held by competent authorities because the company never filed any record of pollution. Our analysis of the case highlights some key determinants of the case’s success…(More)”.

Future-proofing government data


Article by Amy Jones: “Vast amounts of data are fueling innovation and decision-making, and agencies representing the United States government are custodians of some of the largest repositories of data in the world. As one of the world’s largest data creators and consumers, the federal government has made substantial investments in sourcing, curating, and leveraging data across many domains. However, the increasing reliance on artificial intelligence to extract insights and drive efficiencies necessitates a strategic pivot: agencies must evolve their data management practices to identify and distinguish synthetic data from organic sources in order to safeguard the integrity and utility of data assets.

AI’s transformative potential is contingent on the availability of high-quality data. Data readiness includes attention to quality, accuracy, completeness, consistency, timeliness and relevance, at a minimum, and agencies are adopting robust data governance frameworks that enforce data quality standards at every stage of the data lifecycle. This includes implementing advanced data validation techniques, fostering a culture of data stewardship, and leveraging state-of-the-art tools for continuous data quality monitoring…(More)”.
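
One way to make the synthetic-versus-organic distinction operational is to attach provenance metadata to every dataset and enforce basic quality checks at ingest. The sketch below illustrates that idea only; the field names, thresholds, and rules are assumptions, not any agency’s actual standard.

```python
# Illustrative sketch: a provenance record plus a minimal quality gate. It flags
# synthetic data explicitly and enforces simple completeness/timeliness rules.
# Field names and thresholds are assumptions, not an actual federal standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class DatasetRecord:
    source: str                  # originating system or publisher
    synthetic: bool              # True if produced by a model rather than collected
    generator: Optional[str]     # model/tool that produced synthetic data, if any
    collected_at: datetime
    completeness: float          # fraction of required fields populated (0.0-1.0)

def passes_quality_gate(rec: DatasetRecord,
                        max_age: timedelta = timedelta(days=365),
                        min_completeness: float = 0.95) -> bool:
    """Reject stale or incomplete records; synthetic data must name its generator."""
    fresh = datetime.now(timezone.utc) - rec.collected_at <= max_age
    documented = (not rec.synthetic) or bool(rec.generator)
    return fresh and rec.completeness >= min_completeness and documented

record = DatasetRecord(source="census_extract", synthetic=False, generator=None,
                       collected_at=datetime.now(timezone.utc) - timedelta(days=30),
                       completeness=0.98)
print(passes_quality_gate(record))
```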

Rethinking Dual-Use Technology


Article by Artur Kluz and Stefaan Verhulst: “A new concept of “triple use” — where technology serves commercial, defense, and peacebuilding purposes — may offer a breakthrough solution for founders, investors and society to explore….

As a result of the resurgence of geopolitical tensions, the debate about the applications of dual-use technology is intensifying. The core issue founders, tech entrepreneurs, venture capitalists (VCs), and limited partner investors (LPs) are examining is whether commercial technologies should increasingly be re-used for military purposes. Traditionally, the majority of investors (including limited partners) have prohibited dual-use tech in their agreements. However, the rapidly growing dual-use market, with its substantial addressable size and growth potential, is compelling all stakeholders to reconsider this stance. The pressure for innovation, capital returns, and return on investment (ROI) is driving the need for a solution.

These discussions are fraught with moral complexity, but they also present an opportunity to rethink the dual-use paradigm and foster investment in technologies aimed at supporting peace. A new concept of “triple use” — where technology serves commercial, defense, and peacebuilding purposes — may offer an innovative and more positive avenue for founders, investors and society to explore. This additional re-use, which remains in an incipient state, is increasingly being referred to as PeaceTech. By integrating terms dedicated to PeaceTech into new and existing investment and LP agreements, tech companies, founders, and venture capital investors can also be required to apply their technology for peacebuilding purposes. This approach can expand the applications of emerging technologies to include conflict prevention, reconstruction, and other humanitarian uses.

However, current efforts to use technologies for peacebuilding are impeded by various obstacles, including a lack of awareness within the tech sector and among investors, limited commercial interest, disparities in technical capacity, privacy concerns, international relations and political complexities. Below, we examine some of these challenges while exploring avenues for overcoming them — including approaching technologies for peace as a “triple use” application. We especially try to identify examples of how tech companies, tech entrepreneurs, accelerators, and tech investors, including VCs and LPs, can commercially benefit from and support “triple use” technologies. Ultimately, we argue, the vast and largely untapped potential of “triple use” technologies calls for a new wave of tech ecosystem transformation, public and private investment, and the development of a new field of research…(More)”.

Training LLMs to Draft Replies to Parliamentary Questions


Blog by Watson Chua: “In Singapore, the government is answerable to Parliament and Members of Parliament (MPs) may raise queries to any Minister on any matter in his portfolio. These questions can be answered orally during the Parliament sitting or through a written reply. Regardless of the medium, public servants in the ministries must gather materials to answer the question and prepare a response.

Generative AI and Large Language Models (LLMs) have already been applied to help public servants do this more effectively and efficiently. For example, Pair Search (publicly accessible) and the Hansard Analysis Tool (only accessible to public servants) help public servants search past Parliamentary Sittings for information relevant to the question and synthesise a response to it.

The existing systems draft the responses using prompt engineering and Retrieval Augmented Generation (RAG). To recap, RAG consists of two main parts:

  • Retriever: A search engine that finds documents relevant to the question
  • Generator: A text generation model (LLM) that takes in the instruction, the question, and the search results from the retriever to respond to the question
[Figure: A typical RAG system. Illustration by Hrishi Olickel.]

Using a pre-trained, instruction-tuned LLM like GPT-4o, the generator can usually produce a good response. However, it might not be exactly what is desired in terms of verbosity, style, and prose, and additional human post-processing might be needed. Extensive prompt engineering or few-shot learning can be used to mold the response, at the expense of higher costs from the additional tokens in the prompt…(More)”
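
To make the retriever-generator split concrete, here is a minimal RAG sketch. It is an illustration under stated assumptions: the TF-IDF retriever and the placeholder call_llm() function are stand-ins, not the actual Pair Search or Hansard Analysis Tool implementations.

```python
# Minimal RAG sketch (illustrative only; not the actual Pair Search / Hansard Analysis
# Tool pipeline). A TF-IDF retriever ranks past material by similarity to the question,
# and a placeholder call_llm() stands in for the instruction-tuned LLM generator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Reply to PQ on public transport fares, 2022 ...",
    "Reply to PQ on healthcare subsidies, 2023 ...",
    "Reply to PQ on housing supply, 2023 ...",
]

def retrieve(question: str, k: int = 2) -> list:
    """Retriever: rank stored documents by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    """Hypothetical generator call; replace with your LLM provider's API."""
    raise NotImplementedError

def draft_reply(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = ("You are drafting a written reply to a Parliamentary Question.\n"
              f"Relevant past material:\n{context}\n\n"
              f"Question: {question}\n\nDraft reply:")
    return call_llm(prompt)
```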

Increasing The “Policy Readiness” Of Ideas


Article by Tom Kalil: “NASA and the Defense Department have developed an analytical framework called the “technology readiness level” for assessing the maturity of a technology – from basic research to a technology that is ready to be deployed.  

A policy entrepreneur (anyone with an idea for a policy solution that will drive positive change) needs to realize that it is also possible to increase the “policy readiness” level of an idea by taking steps to increase the chances that the idea will succeed if adopted and implemented. Given that policymakers are often time-constrained, they are more likely to consider ideas where more thought has been given to the core questions that they may need to answer as part of the policy process.

A good first step is to ask questions about the policy landscape surrounding a particular idea:

1. What is a clear description of the problem or opportunity?  What is the case for policymakers to devote time, energy, and political capital to the problem?

2. Is there a credible rationale for government involvement or policy change?  

Economists have developed frameworks for both market failure (such as public goods, positive and negative externalities, information asymmetries, and monopolies) and government failure (such as regulatory capture, the role of interest groups in supporting policies that have concentrated benefits and diffuse costs, limited state capacity, and the inherent difficulty of aggregating timely, relevant information to make and implement policy decisions).

3. Is there a root cause analysis of the problem? …(More)”.

AI: a transformative force in maternal healthcare


Article by Afifa Waheed: “Artificial intelligence (AI) and robotics have enormous potential in healthcare and are quickly shifting the landscape – emerging as a transformative force. They offer a new dimension to the way healthcare professionals approach disease diagnosis, treatment and monitoring. AI is being used in healthcare to help diagnose patients, to support drug discovery and development, to improve physician-patient communication, to transcribe voluminous medical documents, and to analyse genomics and genetics. Labs are conducting research faster than ever before – work that would otherwise have taken decades without the assistance of AI. AI-driven research in life sciences has included applications looking to address broad-based areas, such as diabetes, cancer, chronic kidney disease and maternal health.

In addition to improving knowledge of and access to postnatal and neonatal care, AI can predict the risk of adverse events for antenatal and postnatal women and in neonatal care. It can be trained to identify those at risk of adverse events by using patients’ health information such as nutrition status, age, existing health conditions and lifestyle factors.
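
As a hedged illustration of what such risk prediction could look like, the sketch below fits a logistic regression on made-up records using the kinds of features mentioned above. The feature names, data, and model choice are assumptions for illustration, not a description of any deployed clinical system.

```python
# Illustrative sketch on synthetic data: predicting the risk of an adverse event from
# the kinds of features mentioned above (age, nutrition status, existing conditions,
# lifestyle factors). Not a clinical model; all names and values are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, nutrition_score (0-10), has_hypertension (0/1), smoker (0/1)
X = np.array([
    [24, 8, 0, 0],
    [38, 4, 1, 1],
    [31, 6, 0, 0],
    [41, 3, 1, 0],
    [29, 7, 0, 1],
    [36, 5, 1, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = adverse event occurred (synthetic labels)

model = LogisticRegression().fit(X, y)
new_patient = np.array([[37, 4, 1, 0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted risk of adverse event: {risk:.2f}")
```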

AI can further be used to improve access for women in rural areas that lack trained professionals – AI-enabled ultrasound can assist front-line workers with image interpretation for a comprehensive set of obstetrics measurements, increasing access to quality early foetal ultrasound scans. The use of AI assistants and chatbots can also improve pregnant mothers’ experience by helping them find available physicians, schedule appointments and even answer some patient questions…

Many healthcare professionals I have spoken to emphasised that pre-existing conditions such as high blood pressure that leads to preeclampsia, iron deficiency, cardiovascular disease, age-related issues for those over 35, various other existing health conditions, and failure in the progress of labour that might lead to a Caesarean section (C-section), could all cause maternal deaths. Training AI models to detect these conditions early and accurately could prove beneficial. AI systems can leverage advanced algorithms, machine learning (ML) techniques, and predictive models to enhance decision-making, optimise healthcare delivery, and ultimately improve patient outcomes in foeto-maternal health…(More)”.

How to build a Collective Mind that speaks for humanity in real-time


Blog by Louis Rosenberg: “This begs the question — could large human groups deliberate in real-time with the efficiency of fish schools and quickly reach optimized decisions?

For years this goal seemed impossible. That’s because conversational deliberations have been shown to be most productive in small groups of 4 to 7 people and quickly degrade as groups grow larger. This is because the “airtime per person” gets progressively squeezed and the wait-time to respond to others steadily increases. By 12 to 15 people, the conversational dynamics change from thoughtful debate to a series of monologues that become increasingly disjointed. By 20 people, the dialog ceases to be a conversation at all. This problem seemed impenetrable until recent advances in Generative AI opened up new solutions.

The resulting technology is called Conversational Swarm Intelligence and it promises to allow groups of almost any size (200, 2000, or even 2 million people) to discuss complex problems in real-time and quickly converge on solutions with significantly amplified intelligence. The first step is to divide the population into small subgroups, each sized for thoughtful dialog. For example, a 1000-person group could be divided into 200 subgroups of 5, each routed into their own chat room or video conferencing session. Of course, this does not create a single unified conversation — it creates 200 parallel conversations…(More)”.
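
The first step described above can be sketched in a few lines: split a large population into conversation-sized subgroups and assign each to its own room. This is a minimal illustration only; random assignment and a fixed subgroup size of five are assumptions, and the cross-group exchange of insights that makes Conversational Swarm Intelligence a unified deliberation is not modelled here.

```python
# Minimal sketch of the subgrouping step: split a large population into small rooms
# sized for thoughtful dialog. Random assignment and a fixed size of 5 are assumptions;
# the propagation of insights between rooms is not modelled here.
import random

def split_into_subgroups(participants, size=5):
    """Shuffle participants and partition them into subgroups of the given size."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

population = [f"participant_{i}" for i in range(1000)]
rooms = split_into_subgroups(population)
print(len(rooms), "rooms of", len(rooms[0]), "people each")  # 200 rooms of 5
```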

Doing science backwards


Article by Stuart Ritchie: “…Usually, the process of publishing such a study would look like this: you run the study; you write it up as a paper; you submit it to a journal; the journal gets some other scientists to peer-review it; it gets published – or if it doesn’t, you either discard it, or send it off to a different journal and the whole process starts again.

That’s standard operating procedure. But it shouldn’t be. Think about the job of the peer-reviewer: when they start their work, they’re handed a full-fledged paper, reporting on a study and a statistical analysis that happened at some point in the past. It’s all now done and, if not fully dusted, then in a pretty final-looking form.

What can the reviewer do? They can check the analysis makes sense, sure; they can recommend new analyses are done; they can even, in extreme cases, make the original authors go off and collect some entirely new data in a further study – maybe the data the authors originally presented just aren’t convincing or don’t represent a proper test of the hypothesis.

Ronald Fisher described the study-first, review-later process in 1938:

To consult the statistician [or, in our case, peer-reviewer] after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.

Clearly this isn’t the optimal, most efficient way to do science. Why don’t we review the statistics and design of a study right at the beginning of the process, rather than at the end?

This is where Registered Reports come in. They’re a new (well, new-ish) way of publishing papers where, before you go to the lab, or wherever you’re collecting data, you write down your plan for your study and send it off for peer review. The reviewers can then give you genuinely constructive criticism – you can literally construct your experiment differently depending on their suggestions. You build consensus—between you, the reviewers, and the journal editor—on the method of the study. And then, once everyone agrees on what a good study of this question would look like, you go off and do it. The key part is that, at this point, the journal agrees to publish your study, regardless of what the results might eventually look like…(More)”.