Design Thinking as a Strategic Approach to E-Participation


Book by Ilaria Mariani et al.: “This open access book examines how the adoption of Design Thinking (DT) can support public organisations in overcoming some of the current barriers in e-participation. Scholars have discussed the adoption of technology to strengthen public engagement through e-participation, streamline and enhance the relationship between government and society, and improve accessibility and effectiveness. However, barriers persist, necessitating further research in this area. By analysing e-participation barriers emerging from the literature and aligning them with notions in the DT literature, this book identifies five core DT practices to enhance e-participation: (i) Meaning creation and sense-making, (ii) Publics formation, (iii) Co-production, (iv) Experimentation and prototyping, and (v) Changing organisational culture. As a result, this book provides insights into enhancing tech-aided public engagement and promoting inclusivity for translating citizen input into tangible service implementations. The book triangulates qualitative analysis of relevant literature in the fields of e-participation and DT with knowledge from European projects experimenting with public participation activities involving digital tools. This research aims to bridge the gap between theoretical frameworks and practical application, ultimately contributing to more effective e-participation and digital public services…(More)”.

Proactive Mapping to Manage Disaster


Article by Andrew Mambondiyani: “…In March 2019, Cyclone Idai ravaged Zimbabwe, killing hundreds of people and leaving a trail of destruction. The Global INFORM Risk Index data shows that Zimbabwe is highly vulnerable to extreme climate-related events like floods, cyclones, and droughts, which in turn destroy infrastructure, displace people, and result in loss of lives and livelihoods.

Severe weather events like Idai have exposed the shortcomings of Zimbabwe’s traditional disaster-management system, which was devised to respond to environmental disasters by providing relief and rehabilitation of infrastructure and communities. After Idai, a team of climate-change researchers from three Zimbabwean universities and the local NGO DanChurchAid (DCA) concluded that the nation must adopt a more proactive approach by establishing an early-warning system to better prepare for and thereby prevent significant damage and death from such disasters.

In response to these findings, the Open Mapping Hub—Eastern and Southern Africa (ESA Hub)—launched a program in 2022 to develop an anticipatory-response approach in Zimbabwe. The ESA Hub is a regional NGO based in Kenya created by the Humanitarian OpenStreetMap Team (HOT), an international nonprofit that uses open-mapping technology to reduce environmental disaster risk. One of HOT’s four global hubs and its first in Africa, the ESA Hub was created in 2021 to facilitate the aggregation, utilization, and dissemination of high-quality open-mapping data across 23 countries in Eastern and Southern Africa. Open-source expert Monica Nthiga leads the hub’s team of 13 experts in mapping, open data, and digital content. The team collaborates with community-based organizations, humanitarian organizations, governments, and UN agencies to meet their specific mapping needs to best anticipate future climate-related disasters.

“The ESA Hub’s [anticipatory-response] project demonstrates how preemptive mapping can enhance disaster preparedness and resilience planning,” says Wilson Munyaradzi, disaster-services manager at the ESA Hub.

Open-mapping tools and workflows enable the hub to collect geospatial data to be stored, edited, and reviewed for quality assurance prior to being shared with its partners. “Geospatial data has the potential to identify key features of the landscape that can help plan and prepare before disasters occur so that mitigation methods are put in place to protect lives and livelihoods,” Munyaradzi says…(More)”.

Navigating Generative AI in Government


Report by the IBM Center for The Business of Government: “Generative AI refers to algorithms that can create realistic content such as images, text, music, and videos by learning from existing data patterns. Generative AI does more than just create content, it also serves as a user-friendly interface for other AI tools, making complex results easy to understand and use. Generative AI transforms analysis and prediction results into personalized formats, improving explainability by converting complicated data into understandable content. As Generative AI evolves, it plays an active role in collaborative processes, functioning as a vital collaborator by offering strengths that complement human abilities.

Generative AI has the potential to revolutionize government agencies by enhancing efficiency, improving decision making, and delivering better services to citizens, while maintaining agility and scalability. However, in order to implement generative AI solutions effectively, government agencies must address key questions—such as what problems AI can solve, data governance frameworks, and scaling strategies, to ensure a thoughtful and effective AI strategy. By exploring generic use cases, agencies can better understand the transformative potential of generative AI and align it with their unique needs and ethical considerations.

This report, which distills perspectives from two expert roundtables of leaders in Australia, presents 11 strategic pathways for integrating generative AI in government. The strategies include ensuring coherent and ethical AI implementation, developing adaptive AI governance models, investing in a robust data infrastructure, and providing comprehensive training for employees. Encouraging innovation and prioritizing public engagement and transparency are also essential to harnessing the full potential of AI…(More)”

The Emerging Age of AI Diplomacy


Article by Sam Winter-Levy: “In a vast conference room, below chandeliers and flashing lights, dozens of dancers waved fluorescent bars in an intricately choreographed routine. Green Matrix code rained down in the background on a screen that displayed skyscrapers soaring from a desert landscape. The world was witnessing the emergence of “a sublime and transcendent entity,” a narrator declared: artificial intelligence. As if to highlight AI’s transformative potential, a digital avatar—Artificial Superintelligence One—approached a young boy and together they began to sing John Lennon’s “Imagine.” The audience applauded enthusiastically. With that, the final day dawned on what one government minister in attendance described as the “world’s largest AI thought leadership event.”

This surreal display took place not in Palo Alto or Menlo Park but in Riyadh, Saudi Arabia, at the third edition of the city’s Global AI Summit, in September of this year. In a cavernous exhibition center next to the Ritz Carlton, where Crown Prince Mohammed bin Salman imprisoned hundreds of wealthy Saudis on charges of corruption in 2017, robots poured tea and mixed drinks. Officials in ankle-length white robes hailed Saudi Arabia’s progress on AI. American and Chinese technology companies pitched their products and announced memorandums of understanding with the government. Attendants distributed stickers that declared, “Data is the new oil.”

For Saudi Arabia and its neighbor, the United Arab Emirates (UAE), AI plays an increasingly central role in their attempts to transform their oil wealth into new economic models before the world transitions away from fossil fuels. For American AI companies, hungry for capital and energy, the two Gulf states and their sovereign wealth funds are tantalizing partners. And some policymakers in Washington see a once-in-a-generation opportunity to promise access to American computing power in a bid to lure the Gulf states away from China and deepen an anti-Iranian coalition in the Middle East….The two Gulf states’ interest in AI is not new, but it has intensified in recent months. Saudi Arabia plans to create a $40 billion fund to invest in AI and has set up Silicon Valley–inspired startup accelerators to entice coders to Riyadh. In 2019, the UAE launched the world’s first university dedicated to AI, and since 2021, the number of AI workers in the country has quadrupled, according to government figures. The UAE has also released a series of open-source large language models that it claims rival those of Google and Meta, and earlier this year it launched an investment firm focused on AI and semiconductors that could surpass $100 billion in assets under management…(More)”.

When combinations of humans and AI are useful: A systematic review and meta-analysis


Paper by Michelle Vaccaro, Abdullah Almaatouq & Thomas Malone: “Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems…(More)”.
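The paper’s headline result is expressed as Hedges’ g, a bias-corrected standardized mean difference between two groups. As an illustration only (the numbers below are invented, not taken from the study), a minimal sketch of how the statistic is computed:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # bias-correction factor J
    return j * d

# Hypothetical example: mean task performance of human-AI combinations
# vs. the best of humans or AI alone (30 participants per condition)
g = hedges_g(0.70, 0.12, 30, 0.74, 0.12, 30)
print(round(g, 3))  # negative g means the combination performed worse
```

A negative g here, as in the paper’s pooled estimate of −0.23, indicates that the first group (the combination) underperformed the second (the best single agent).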

What’s the Value of Privacy?


Brief by New America: “On a day-to-day basis, people make decisions about what information to share and what information to keep to themselves—guided by an inner privacy compass. Privacy is a concept that is both evocative and broad, often possessing different meanings for different people. The term eludes a common, static definition, though it is now inextricably linked to technology and a growing sense that individuals do not have control over their personal information. If privacy still, at its core, encompasses “the right to be left alone,” then that right is increasingly difficult to exercise in the modern era. 

The inability to meaningfully choose privacy is not an accident—in fact, it’s often by design. Society runs on data. Whether it is data about people’s personal attributes, preferences, or actions, all that data can be linked together, becoming greater than the sum of its parts. If data is now the world’s most valuable resource, then the companies that are making record profits off that data are highly incentivized to keep accessing it and obfuscating the externalities of data sharing. In brief, data use and privacy are “economically significant.” 

And yet, despite the pervasive nature of data collection, much of the public lacks a nuanced understanding of the true costs and benefits of sharing their data—for themselves and for society as a whole. People who have made billions by collecting and re-selling individual user data will continue to claim that it has little value. And yet, there are legitimate reasons why data should be shared—without a clear understanding of an issue, it is impossible to address it…(More)”.

New data laws unveiled to improve public services and boost UK economy by £10 billion


(UK) Press Release: “A new Bill which will harness the enormous power of data to boost the UK economy by £10 billion, and free up millions of police and NHS staff hours has been introduced to Parliament today (Wednesday 23rd October).

The Data Use and Access Bill will unlock the secure and effective use of data for the public interest, without adding pressures to the country’s finances. The measures will be central to delivering three of the five Missions to rebuild Britain, set out by the Prime Minister:

  • kickstarting economic growth
  • taking back our streets
  • and building an NHS fit for the future

Some of its key measures include cutting down on bureaucracy for our police officers, so that they can focus on tackling crime rather than being bogged down by admin, freeing up 1.5 million hours of their time a year. It will also make patients’ data easily transferable across the NHS so that frontline staff can make better informed decisions for patients more quickly, freeing up 140,000 hours of NHS staff time every year, speeding up care and improving patients’ health outcomes.

The better use of data under measures in the Bill will also simplify important tasks such as renting a flat and starting work with trusted ways to verify your identity online, or enabling electronic registration of births and deaths, so that people and businesses can get on with their lives without unnecessary admin.

Vital safeguards will remain in place to track and monitor how personal data is used, giving peace of mind to patients and victims of crime. IT systems in the NHS operate to the highest standards of security and all organisations have governance arrangements in place to ensure the safe, legal storage and use of data…(More)”

Make it make sense: the challenge of data analysis in global deliberation


Blog by Iñaki Goñi: “From climate change to emerging technologies to economic justice to space, global and transnational deliberation is on the rise. Global deliberative processes aim to bring citizen-centred governance to issues that no single nation can resolve alone. Running deliberative processes at this scale poses a unique set of challenges. How to select participants, make the forums accountable, impactful, fairly designed, and aware of power imbalances, are all crucial and open questions….

Massifying participation will be key to invigorating global deliberation. Assemblies will have a better chance of being seen as legitimate, fair, and publicly supported if they involve thousands or even millions of diverse participants. This raises an operational challenge: how to systematise political ideas from many people across the globe.

In a centralised global assembly, anything from 50 to 500 citizens from various countries engage in a single deliberation and produce recommendations or political actions by crossing languages and cultures. In a distributed assembly, multiple gatherings are convened locally that share a common but flexible methodology, allowing participants to discuss a common issue applied both to local and global contexts. Either way, a global deliberation process demands the organisation and synthesis of possibly thousands of ideas from diverse languages and cultures around the world.

How could we ever make sense of all that data to systematise citizens’ ideas and recommendations? Most people turn their heads to computational methods to help reduce complexity and identify patterns. First up, one technique for analysing text amounts to little more than simple counting, through which we can produce something like a frequency table or a wordcloud…(More)”.
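The counting technique the post mentions is straightforward in practice. A minimal sketch (with invented sample responses and an ad hoc stopword list, purely for illustration) of building a word-frequency table from participant input:

```python
from collections import Counter
import re

# Hypothetical participant responses from a deliberation exercise
responses = [
    "We need better public transport and cleaner air",
    "Cleaner air matters more than new roads",
    "Public transport should be free",
]

# Small ad hoc stopword list; real pipelines use curated, per-language lists
stopwords = {"we", "need", "and", "than", "more", "should", "be", "the", "new"}

# Lowercase, tokenize on alphabetic runs, drop stopwords
tokens = [w for r in responses for w in re.findall(r"[a-z]+", r.lower())
          if w not in stopwords]

freq = Counter(tokens)
print(freq.most_common(3))  # the most frequent substantive terms
```

In a multilingual global assembly the same idea applies after translation, though tokenization, stopwords, and stemming all become language-specific problems.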

Open government data and self-efficacy: The empirical evidence of micro foundation via survey experiments


Paper by Kuang-Ting Tai, Pallavi Awasthi, and Ivan P. Lee: “Research on the potential impacts of government openness and open government data is not new. However, empirical evidence regarding the micro-level impact, which can validate macro-level theories, has been particularly limited. Grounded in social cognitive theory, this study contributes to the literature by empirically examining how the dissemination of government information in an open data format can influence individuals’ perceptions of self-efficacy, a key predictor of public participation. Based on two rounds of online survey experiments conducted in the U.S., the findings reveal that exposure to open government data is associated with decreased perceived self-efficacy, resulting in lower confidence in participating in public affairs. This result, while contrary to optimistic assumptions, aligns with some other empirical studies and highlights the need to reconsider the format for disseminating government information. The policy implications suggest further calibration of open data applications to target professional and skilled individuals. This study underscores the importance of experiment replication and theory development as key components of future research agendas…(More)”.

Nature-rich nations push for biodata payout


Article by Lee Harris: “Before the current generation of weight-loss drugs, there was hoodia, a cactus that grows in southern Africa’s Kalahari Desert, and which members of the region’s San tribe have long used to stave off hunger. UK-based Phytopharm licensed the active ingredient in the cactus in 1996, and made numerous attempts to commercialise weight-loss products derived from it.

The company won licensing deals with Pfizer and Unilever, but drew outrage from campaigners who argued that the company was ripping off the indigenous groups that had made the discovery. Indignation grew after the chief executive said it could not compensate local tribes because “the people who discovered the plant have disappeared”. (They had not).

This is just one example of companies using biological resources discovered in other countries for financial gain. The UN has attempted to set fairer terms with treaties such as the 1992 Convention on Biological Diversity, which deals with the sharing of genetic resources. But this approach has been seen by many developing countries as unsatisfactory. And earlier tools governing trade in plants and microbes may become less useful as biological data is now frequently transmitted in the form of so-called digital sequence information — the genetic code derived from those physical resources.

Now, the UN is working on a fund to pay stewards of biodiversity — notably communities in lower-income countries — for discoveries made with genetic data from their ecosystems. The mechanism was established in 2022 as part of the Conference of Parties to the UN Convention on Biological Diversity, a sister process to the climate “COP” initiative. But the question of how it will be governed and funded will be on the table at the October COP16 summit in Cali, Colombia.

If such a fund comes to fruition — a big “if” — it could raise billions for biodiversity goals. The sectors that depend on this genetic data — notably, pharmaceuticals, biotech and agribusiness — generate revenues exceeding $1tn annually, and African countries plan to push for these sectors to contribute 1 per cent of all global retail sales to the fund, according to Bloomberg.

There’s reason to temper expectations, however. Such a fund would lack the power to compel national governments or industries to pay up. Instead, the strategy is focused around raising ambition — and public pressure — for key industries to make voluntary contributions…(More)”.