Designing Digital Voting Systems for Citizens


Paper by Joshua C. Yang et al: “Participatory Budgeting (PB) has evolved into a key democratic instrument for resource allocation in cities. Enabled by digital platforms, cities now have the opportunity to let citizens directly propose and vote on urban projects, using different voting input and aggregation rules. However, the choices cities make in terms of the rules of their PB have often not been informed by academic studies on voter behaviour and preferences. Therefore, this work presents the results of behavioural experiments where participants were asked to vote in a fictional PB setting. We identified approaches to designing PB voting that minimise cognitive load and enhance the perceived fairness and legitimacy of the digital process from the citizens’ perspective. In our study, participants preferred voting input formats that are more expressive (like rankings and distributing points) over simpler formats (like approval voting). Participants also indicated a desire for the budget to be fairly distributed across city districts and project categories. Participants found the Method of Equal Shares voting rule to be fairer than the conventional Greedy voting rule. These findings offer actionable insights for digital governance, contributing to the development of fairer and more transparent digital systems and collective decision-making processes for citizens…(More)”.
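To make the contrast concrete, here is a minimal Python sketch of the two aggregation rules compared in the study, Greedy and the Method of Equal Shares, applied to approval ballots. The election data is invented for illustration, ties are broken arbitrarily, and the sketch omits the completion step that real Equal Shares deployments add to spend leftover budget:

```python
# Toy contrast of two PB aggregation rules over approval ballots.
# Projects, costs, and ballots are invented for illustration.

BUDGET = 100
PROJECTS = {"pool": 90, "playground": 30, "benches": 30, "garden": 30}
BALLOTS = [  # each ballot: the set of projects one voter approves
    {"pool", "playground", "benches"},
    {"pool", "playground", "benches"},
    {"pool", "playground", "benches"},
    {"garden"},
    {"garden"},
]

def greedy(budget, projects, ballots):
    """Fund the most-approved projects, in order, while money remains."""
    approvals = {p: sum(p in b for b in ballots) for p in projects}
    funded, left = [], budget
    for p in sorted(projects, key=lambda p: -approvals[p]):
        if projects[p] <= left:
            funded.append(p)
            left -= projects[p]
    return funded

def max_payment(cost, budgets):
    """Smallest per-supporter cap rho with sum(min(b, rho)) == cost,
    or None if the supporters cannot afford the project at all."""
    for k, b in enumerate(sorted(budgets)):
        rho = cost / (len(budgets) - k)
        if rho <= b:
            return rho
        cost -= b  # this supporter pays their whole remaining share
    return None

def equal_shares(budget, projects, ballots):
    """Each voter controls budget/n; a project is funded only if its
    supporters can cover its cost, paying as equally as possible."""
    share = [budget / len(ballots)] * len(ballots)
    funded = []
    while True:
        best, best_rho = None, None
        for p, cost in projects.items():
            if p in funded:
                continue
            supporters = [i for i, b in enumerate(ballots) if p in b]
            rho = max_payment(cost, [share[i] for i in supporters])
            if rho is not None and (best_rho is None or rho < best_rho):
                best, best_rho = p, rho
        if best is None:
            return funded  # nothing else is affordable
        funded.append(best)
        for i, b in enumerate(ballots):  # charge the supporters
            if best in b:
                share[i] -= min(share[i], best_rho)

print("Greedy:      ", greedy(BUDGET, PROJECTS, BALLOTS))        # ['pool']
print("Equal Shares:", equal_shares(BUDGET, PROJECTS, BALLOTS))  # three small projects
```

On this toy input, Greedy sinks nearly the whole budget into the plurality favourite, while Equal Shares funds three cheaper projects so that every voter sees at least one approved project funded, the kind of distributional fairness the study's participants said they preferred.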

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….

The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

[Graphic: the AI Accountability Chain model]

A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.
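The headline measurement is easy to sketch: compare how often suspect buzzwords appear per review in one year against a baseline year. The Python illustration below uses the three words named in the article but toy placeholder corpora; the study itself used a more careful statistical estimate of LLM-modified text, not raw word counts:

```python
import re

# Candidate LLM "tells" named in the article; the corpora below are
# toy placeholders, not the study's data.
BUZZWORDS = ["meticulous", "commendable", "intricate"]

def rate_per_review(reviews, word):
    """Occurrences of `word` per review, case-insensitive."""
    pattern = re.compile(rf"\b{word}\b", re.IGNORECASE)
    return sum(len(pattern.findall(r)) for r in reviews) / len(reviews)

def buzzword_ratios(baseline, current):
    """Year-over-year ratio of per-review usage rates for each word."""
    ratios = {}
    for w in BUZZWORDS:
        base, cur = rate_per_review(baseline, w), rate_per_review(current, w)
        ratios[w] = cur / base if base else float("inf")  # guard against /0
    return ratios

reviews_2022 = [
    "The experiments are solid but the writing needs work.",
    "A careful study; the results are convincing.",
]
reviews_2024 = [
    "A meticulous and commendable study with an intricate analysis.",
    "The authors provide a meticulous and intricate evaluation.",
]
print(buzzword_ratios(reviews_2022, reviews_2024))
```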

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.

How Public Polling Has Changed in the 21st Century


Report by Pew Research Center: “The 2016 and 2020 presidential elections left many Americans wondering whether polling was broken and what, if anything, pollsters might do about it. A new Pew Research Center study finds that most national pollsters have changed their approach since 2016, and in some cases dramatically. Most (61%) of the pollsters who conducted and publicly released national surveys in both 2016 and 2022 used methods in 2022 that differed from what they used in 2016. The study also finds the use of multiple methods increasing. Last year 17% of national pollsters used at least three different methods to sample or interview people (sometimes in the same survey), up from 2% in 2016….(More)”.

The Non-Coherence Theory of Digital Human Rights


Book by Mart Susi: “…offers a novel non-coherence theory of digital human rights to explain the change in meaning and scope of human rights rules, principles, ideas and concepts, and the interrelationships and related actors, when moving from the physical domain into the online domain. The transposition into the digital reality can alter the meaning of well-established offline human rights to a wider or narrower extent, impacting core concepts such as transparency, legal certainty and foreseeability. Susi analyses the ‘loss in transposition’ of some core features of the rights to privacy and freedom of expression. The non-coherence theory is used to explore key human rights theoretical concepts, such as the network society approach, the capabilities approach, transversality, and self-normativity, and it is also applied to e-state and artificial intelligence, challenging the idea of the sameness of rights…(More)”.

The Need for Climate Data Stewardship: 10 Tensions and Reflections regarding Climate Data Governance


Paper by Stefaan Verhulst: “Datafication — the increase in data generation and advancements in data analysis — offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing new data sources in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates ten core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues…(More)”.

Meta Kills a Crucial Transparency Tool At the Worst Possible Time


Interview by Vittoria Elliott: “Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. When asked why commercial newsrooms, like WIRED, are to be excluded from the Content Library, Meta spokesperson Eric Porterfield said that it was “built for research purposes.” While journalists might not have direct access, he suggested they could use commercial social network analysis tools, or “partner with an academic institution to help answer a research question related to our platforms.”

Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity…(More)”.

This Chatbot Democratizes Data to Empower India’s Farmers


Article by Abha Malpani Naismith: “…The lack of access to market price information and reliance on intermediaries to sell on their behalf leave farmers vulnerable to price exploitation and uncertain returns on their investments.

To solve this, Gramhal is building a data cooperative in India where farmers contribute their information to a data ecosystem, which all farmers can leverage for better informed decision-making…

The social enterprise started the project to democratize data by first using the Indian government’s data sets, collected from markets and crops across the country. It then built a chatbot (called Bolbhav) and plugged in that data. Soon about 300,000 farmers were accessing this data set via the chatbot on their mobile phones.

“We spent no money on marketing — this was all just from word of mouth!” Kaleem said. 

[Image: Gramhal’s Bolbhav chatbot provides farmers with market data so they know how to fairly price their crops.]

However, Gramhal started getting feedback from farmers that the chatbot was giving them prices three days old and what they wanted was real-time, reliable data. “That is when we realized that we need to work with the power of community and think about a societal network framework where every farmer who is selling can contribute to the data and have access to it,” Kaleem explained. “We needed to find a way where the farmer can send price information about what they are selling by uploading their receipts, and we can aggregate that data across markets and share it with them.”
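In outline, that is a crowd-sourced aggregation pipeline: group farmer-submitted prices by market and crop, drop implausible outliers, and publish a robust summary. The Python sketch below is hypothetical; the schema, field names, and outlier threshold are invented rather than Gramhal's actual design:

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import median

@dataclass
class PriceReport:
    """One farmer-submitted receipt (hypothetical schema)."""
    market: str
    crop: str
    price_per_kg: float  # rupees

def aggregate(reports, spread=0.5):
    """Median price per (market, crop), after dropping reports more
    than `spread` (here 50%) away from the group median as likely
    data-entry errors."""
    groups = defaultdict(list)
    for r in reports:
        groups[(r.market, r.crop)].append(r.price_per_kg)
    summary = {}
    for key, prices in groups.items():
        m = median(prices)
        kept = [p for p in prices if abs(p - m) <= spread * m]
        summary[key] = {"median": median(kept), "reports": len(kept)}
    return summary

reports = [
    PriceReport("Indore", "soybean", 45.0),
    PriceReport("Indore", "soybean", 47.5),
    PriceReport("Indore", "soybean", 460.0),  # likely a misplaced decimal
    PriceReport("Ujjain", "wheat", 22.0),
]
print(aggregate(reports))
```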

The solution was an upgraded version of the chatbot called Bolbhav Plus, which Gramhal launched in April 2023…(More)”.

AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine


Article by Amit Katwala: “Robots, computers, and algorithms are hunting for potential new therapies in ways humans can’t—by processing huge volumes of data and building previously unimagined molecules. At an old biscuit factory in South London, giant mixers and industrial ovens have been replaced by robotic arms, incubators, and DNA sequencing machines.

James Field and his company LabGenius aren’t making sweet treats; they’re cooking up a revolutionary, AI-powered approach to engineering new medical antibodies. In nature, antibodies are the body’s response to disease and serve as the immune system’s front-line troops. They’re strands of protein that are specially shaped to stick to foreign invaders so that they can be flushed from the system.

Since the 1980s, pharmaceutical companies have been making synthetic antibodies to treat diseases like cancer, and to reduce the chance of transplanted organs being rejected. But designing these antibodies is a slow process for humans—protein designers must wade through the millions of potential combinations of amino acids to find the ones that will fold together in exactly the right way, and then test them all experimentally, tweaking some variables to improve some characteristics of the treatment while hoping that doesn’t make it worse in other ways. “If you want to create a new therapeutic antibody, somewhere in this infinite space of potential molecules sits the molecule you want to find,” says Field, the founder and CEO of LabGenius…(More)”.
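LabGenius's platform is proprietary, but the closed-loop "design, test, learn" idea the article describes can be pictured with a toy evolutionary search. In the sketch below, a hidden scoring function stands in for the robotic assay and random point mutations stand in for the model's proposals; it is a simplified illustration, not the company's actual method:

```python
import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
random.seed(0)

def assay(seq):
    """Stand-in for a wet-lab binding measurement: similarity to a
    hidden 'ideal' sequence. In reality this is a robotic experiment."""
    target = "MKTAYIAKQR"
    return sum(a == b for a, b in zip(seq, target))

def mutate(seq):
    """Propose a nearby candidate by one random point mutation."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO) + seq[i + 1:]

def random_seq(length=10):
    return "".join(random.choice(AMINO) for _ in range(length))

def design_loop(rounds=15, batch=20):
    """Closed-loop search: test a batch, keep the best performers,
    and build the next batch by mutating them (plus a little random
    exploration so the search does not get stuck)."""
    tested = {}
    pool = [random_seq() for _ in range(batch)]
    for _ in range(rounds):
        for seq in pool:
            if seq not in tested:
                tested[seq] = assay(seq)  # "run the experiment"
        elite = sorted(tested, key=tested.get, reverse=True)[:5]
        pool = [mutate(random.choice(elite)) for _ in range(batch - 5)]
        pool += [random_seq() for _ in range(5)]
    best = max(tested, key=tested.get)
    return best, tested[best]

print(design_loop())  # the search converges toward the hidden target
```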

Open Government Products (OGP)


About: “We are an experimental development team that builds technology for the public good. This includes everything from building better apps for citizens to automating the internal operations of public agencies. Our role is to accelerate the digital transformation of the Singapore Government by being a space where it can experiment with new tech practices, including new technologies, management techniques, corporate systems, and even cultural norms. Our end goal is that through our work, Singapore becomes a model of how governments can use technology to improve the public good…(More)”.