Democratic innovation and digital participation


Nesta Report: “Overcoming barriers in democratic innovations to harness the collective intelligence of citizens for a 21st-century democracy”

This report sets out the need for democratic innovations and digital participation tools to move beyond one-off pilots toward more embedded and inclusive systems of decision-making.

This is the first comprehensive analysis of the barriers experienced by democratic innovators around the world. Alongside the barriers, we have captured the enablers that can help advance these innovations and tools to their full potential.

The report is published alongside the advancing democratic innovation toolkit, which helps institutions, practitioners and technologists diagnose the barriers they face and identify the enablers they can use to address them.

This report is based on insights from global examples of digital democratic innovation, and in particular, three pilots from the COLDIGIT project: a citizens’ assembly in Trondheim, Norway; participatory budgeting in Gothenburg, Sweden; and participatory budgeting in Helsinki, Finland.

The work is a collaboration between Nesta, Digidem Lab, University of Gothenburg, University of Helsinki and SINTEF, funded by the Economic and Social Research Council (ESRC)…(More)”.

Rethinking Intelligence In A More-Than-Human World


Essay by Amanda Rees: “We spend a lot of time debating intelligence — what does it mean? Who has it? And especially lately — can technology help us create or enhance it?

But for a species that relies on its self-declared “wisdom” to differentiate itself from all other animals, a species that consistently defines itself as intelligent and rational, Homo sapiens tends to do some strikingly foolish things — creating the climate crisis, for example, or threatening the survival of our world with nuclear disaster, or creating ever-more-powerful and pervasive algorithms. 

If we are in fact to be “wise,” we need to learn to manage a range of different and potentially existential risks relating to (and often created by) our technological interventions in the bio-social ecologies we inhabit. We need, in short, to rethink what it means to be intelligent. 

Points Of Origin

Part of the problem is that we think of both “intelligence” and “agency” as objective, identifiable, measurable human characteristics. But they’re not. At least in part, both concepts are instead the product of specific historical circumstances. “Agency,” for example, emerges with the European Enlightenment, perhaps best encapsulated in Giovanni Pico della Mirandola’s “Oration on the Dignity of Man.” Writing in the late 15th century, Mirandola revels in the fact that to humanity alone “it is granted to have whatever he chooses, to be whatever he wills. … On man … the Father conferred the seeds of all kinds and the germs of every way of life. Whatever seeds each man cultivates will grow to maturity and bear in him their own fruit.”

In other words, what makes humans unique is their possession of the God-given capacity to exercise free will — to take rational, self-conscious action in order to achieve specific ends. Today, this remains the model of agency that underpins significant and influential areas of public discourse. It resonates strongly with neoliberalist reforms of economic policy, for example, as well as with debates on public health responsibility and welfare spending. 

A few hundred years later, the modern version of “intelligence” appears, again in Europe, where it came to be understood as a capacity for ordered, rational, problem-solving, pattern-recognizing cognition. Through the work of the eugenicist Francis Galton, among others, intelligence soon came to be regarded as an innate quality possessed by individuals to greater or lesser degree, which could be used to sort populations into hierarchies of social access and economic reward…(More)”.

Exhaustive or Exhausting? Evidence on Respondent Fatigue in Long Surveys


Paper by Dahyeon Jeong et al: “Living standards measurement surveys require sustained attention for several hours. We quantify survey fatigue by randomizing the order of questions in 2-3 hour-long in-person surveys. An additional hour of survey time increases the probability that a respondent skips a question by 10-64%. Because skips become more common as the survey progresses, the total monetary value of aggregated categories such as assets or expenditures declines as the survey goes on, and this effect is sizeable for some categories: for example, an extra hour of survey time lowers food expenditures by 25%. We find similar effect sizes within phone surveys in which respondents were already familiar with questions, suggesting that cognitive burden may be a key driver of survey fatigue…(More)”.
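To make the design concrete, here is a minimal simulation of the paper’s identification idea: because question order is randomized across respondents, the elapsed time at which any given question arrives is as good as random, and a simple regression of skips on elapsed hours recovers the fatigue effect. All numbers below (pacing, baseline skip rate, slope) are illustrative assumptions, not the authors’ estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_questions = 1_000, 120
minutes_per_question = 1.5  # assumed pacing, roughly a 3-hour survey

# Randomize question order per respondent, as in the paper's design
orders = np.array([rng.permutation(n_questions) for _ in range(n_respondents)])
position = np.argsort(orders, axis=1)   # slot at which each question is asked
hours_elapsed = position * minutes_per_question / 60

# Assumed data-generating process: 5% baseline skip rate,
# rising by 3 percentage points per elapsed hour (the fatigue effect)
p_skip = 0.05 + 0.03 * hours_elapsed
skipped = rng.random((n_respondents, n_questions)) < p_skip

# Randomized order makes elapsed time exogenous, so an OLS slope
# of skipping on elapsed hours recovers the assumed fatigue effect
x, y = hours_elapsed.ravel(), skipped.ravel().astype(float)
slope = np.cov(x, y)[0, 1] / np.var(x)
print(f"estimated extra skip probability per survey hour: {slope:.3f}")
```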

Unlocking the Potential of Open 990 Data


Article by Cinthia Schuman Ottinger & Jeff Williams: “As the movement to expand public use of nonprofit data collected by the Internal Revenue Service advances, it’s a good time to review how far the social sector has come and how much work remains to reach the full potential of this treasure trove…Organizations have employed open Form 990 data in numerous ways, including to:

  • Create new tools for donors. For instance, the Nonprofit Aid Visualizer, a partnership between Candid and Vanguard Charitable, uses open 990 data to find communities vulnerable to COVID-19 and to help address both their immediate needs and long-term recovery. Another tool, the COVID-19 Urgent Service Provider Support Tool, developed by the consulting firm BCT Partners, uses 990 data to direct donors to service providers that are close to communities most affected by COVID-19.
  • More efficiently prosecute charitable fraud. This includes a campaign by the New York Attorney General’s Office that recovered $1.7 million from sham charities and redirected funds to legitimate groups.
  • Generate groundbreaking findings on fundraising, volunteers, equity, and management. A researcher at Texas Tech University, for example, explored more than a million e-filed 990s (see the parsing sketch after this list) to overturn long-held assumptions about the role of cash in fundraising. He found that when nonprofits encourage noncash gifts as opposed to only cash contributions, financial contributions to those organizations increase over time.
  • Shed light on harmful practices that hurt the poor. A large-scale investigative analysis of nonprofit hospitals’ tax forms revealed that 45 percent of them sent a total of $2.7 billion in medical bills to patients whose incomes were likely low enough to qualify for free or discounted care. When this practice was publicly exposed, some hospitals reevaluated their practices and erased unpaid bills for qualifying patients. The expense of mining data like this previously made such research next to impossible.
  • Help donors make more informed giving decisions. In hopes of maximizing contributions to Ukrainian relief efforts, a record number of donors are turning to resources like Charity Navigator, which can now use open Form 990 data to evaluate and rate a large number of charities based on finances, governance, and other factors. At the same time, donors informed by open 990 data can seek more accountability from the organizations they support. For example, anti-corruption researchers scouring open 990 data and other records uncovered donations by Russian oligarchs aligned with President Putin. This pressured US nonprofits that accepted money from the oligarchs to disavow this funding…(More)”.
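As background to the Texas Tech example above, analyses at that scale typically begin by bulk-parsing the machine-readable e-filed returns, which are released as XML. A minimal sketch of that first step follows; the tag names are illustrative placeholders rather than the exact IRS schema fields, which are defined in the official e-file XSDs.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Illustrative paths only; the authoritative element names live in
# the IRS e-file schemas and vary across form versions.
FIELDS = {
    "ein": ".//Filer/EIN",
    "total_contributions": ".//TotalContributionsAmt",
    "noncash_contributions": ".//NoncashContributionsAmt",
}

def extract(path: Path) -> dict:
    """Pull a few headline figures out of one e-filed Form 990 XML return."""
    root = ET.parse(path).getroot()
    for el in root.iter():                      # strip XML namespaces so
        if isinstance(el.tag, str):             # simple path lookups work
            el.tag = el.tag.split("}")[-1]
    row = {"file": path.name}
    for name, xpath in FIELDS.items():
        node = root.find(xpath)
        row[name] = node.text if node is not None else None
    return row

rows = [extract(p) for p in Path("990s").glob("*.xml")]
print(f"parsed {len(rows)} returns; first: {rows[:1]}")
```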

Community science draws on the power of the crowd


Essay by Amber Dance: “In community science, also called participatory science, non-professionals contribute their time, energy or expertise to research. (The term ‘citizen science’ is also used but can be perceived as excluding non-citizens.)

Whatever name is used, the approach is more popular than ever and even has journals dedicated to it. The number of annual publications mentioning ‘citizen science’ went from 151 in 2015 to more than 640 in 2021, according to the Web of Science database. Researchers from physiologists to palaeontologists to astronomers are finding that harnessing the efforts of ordinary people is often the best route to the answers they seek.

“More and more funding organizations are actually promoting this type of participatory- and citizen-science data gathering,” says Bálint Balázs, managing director of the Environmental Social Science Research Group in Budapest, a non-profit company focusing on socio-economic research for sustainability.

Community science is also a great tool for outreach, and scientists often delight in interactions with amateur researchers. But it’s important to remember that community science is, foremost, a research methodology like any other, with its own requirements in terms of skill and effort.

“To do a good project, it does require an investment in time,” says Darlene Cavalier, founder of SciStarter, an online clearing house that links research-project leaders with volunteers. “It’s not something where you’re just going to throw up a Google form and hope for the best.” Although some projects can draw on scientific data that are freely and easily available, others entail significant costs.

No matter what the topic or approach, people skills are crucial: researchers must identify and cultivate a volunteer community and provide regular feedback or rewards. With the right protocols and checks and balances, the quality of volunteer-gathered data often rivals or surpasses that achieved by professionals.
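One common check is to validate volunteer classifications against an expert-labelled subsample. Here is a hypothetical sketch, assuming a binary classification task and 90% volunteer accuracy, of computing raw agreement and Cohen’s kappa:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
expert = rng.integers(0, 2, n)                 # expert labels (0/1)
volunteer = np.where(rng.random(n) < 0.9,      # volunteers assumed to match
                     expert, 1 - expert)       # the experts 90% of the time

agreement = (expert == volunteer).mean()
# Chance agreement: both say "1" or both say "0" by coincidence
p_chance = (expert.mean() * volunteer.mean()
            + (1 - expert.mean()) * (1 - volunteer.mean()))
kappa = (agreement - p_chance) / (1 - p_chance)
print(f"raw agreement {agreement:.2f}, Cohen's kappa {kappa:.2f}")
```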

“There is a two-way learning that happens,” says Tina Phillips, assistant director of the Center for Engagement in Science and Nature at Cornell University in Ithaca, New York. “We all know that science is better when there are more voices, more perspectives.”…(More)”.

A Prehistory of Social Media


Essay by Kevin Driscoll: “Over the past few years, I’ve asked dozens of college students to write down, in a sentence or two, where the internet came from. Year after year, they recount the same stories about the US government, Silicon Valley, the military, and the threat of nuclear war. A few students mention the Department of Defense’s ARPANET by name. Several get the chronology wrong, placing the World Wide Web before the internet or expressing confusion about the invention of email. Others mention “tech wizards” or “geniuses” from Silicon Valley firms and university labs. No fewer than four students have simply written, “Bill Gates.”

Despite the internet’s staggering scale and global reach, its folk histories are surprisingly narrow. This mismatch reflects the uncertain definition of “the internet.” When nonexperts look for internet origin stories, they want to know about the internet as they know it, the internet they carry around in their pockets, the internet they turn to, day after day. Yet the internet of today is not a stable object with a single, coherent history. It is a dynamic socio-technical phenomenon that came into being during the 1990s, at the intersection of hundreds of regional, national, commercial, and cooperative networks—only one of which was previously known as “the internet.” In short, the best-known histories describe an internet that hasn’t existed since 1994. So why do my students continue to repeat stories from 25 years ago? Why haven’t our histories kept up?

The standard account of internet history took shape in the early 1990s, as a mixture of commercial online services, university networks, and local community networks mutated into something bigger, more commercial, and more accessible to the general public. As hype began to build around the “information superhighway,” people wanted a backstory. In countless magazines, TV news reports, and how-to books, the origin of the internet was traced back to ARPANET, the computer network created by the Advanced Research Projects Agency during the Cold War. This founding mythology has become a resource for advancing arguments on issues related to censorship, national sovereignty, cybersecurity, privacy, net neutrality, copyright, and more. But with only this narrow history of the early internet to rely on, the arguments put forth are similarly impoverished…(More)”.

Uncovering the genetic basis of mental illness requires data and tools that aren’t just based on white people


Article by Hailiang Huang: “Mental illness is a growing public health problem. In 2019, an estimated 1 in 8 people around the world were affected by mental disorders like depression, schizophrenia or bipolar disorder. While scientists have long known that many of these disorders run in families, their genetic basis isn’t entirely clear. One reason why is that the majority of existing genetic data used in research is overwhelmingly from white people.

In 2003, the Human Genome Project generated the first “reference genome” of human DNA from a combination of samples donated by upstate New Yorkers, all of whom were of European ancestry. Researchers across many biomedical fields still use this reference genome in their work. But it doesn’t provide a complete picture of human genetics. Someone with a different genetic ancestry will have a number of variations in their DNA that aren’t captured by the reference sequence.

When most of the world’s ancestries are not represented in genomic data sets, studies won’t be able to provide a true representation of how diseases manifest across all of humanity. Despite this, ancestral diversity in genetic analyses hasn’t improved in the two decades since the Human Genome Project announced its first results. As of June 2021, over 80% of genetic studies have been conducted on people of European descent. Less than 2% have included people of African descent, even though these individuals have the most genetic variation of all human populations.

To uncover the genetic factors driving mental illness, Sinéad Chapman and our colleagues at the Broad Institute of MIT and Harvard have partnered with collaborators around the world to launch Stanley Global, an initiative that seeks to collect a more diverse range of genetic samples from beyond the U.S. and Northern Europe, and to train the next generation of researchers around the world. Not only does the genetic data lack diversity, but so do the tools and techniques scientists use to sequence and analyze human genomes. So we are implementing a new sequencing technology that addresses the inadequacies of previous approaches that don’t account for the genetic diversity of global populations…(More)”.

Digital Privacy for Reproductive Choice in the Post-Roe Era


Paper by Aziz Z. Huq and Rebecca Wexler: “The overruling of Roe v. Wade unleashed a torrent of regulatory and punitive activity restricting lawful reproductive options. The turn to the expansive criminal law and new schemes of civil liability creates new, and quite different, concerns from the pre-Roe landscape a half-century ago. Reproductive choice, and its nemesis, rests on information. For pregnant people, deciding on a choice of medical care entails a search for advice and services. Information is at a premium for them. Meanwhile, efforts to regulate abortion begin with clinic closings, but quickly will extend to civil actions and criminal indictments of patients, providers, and those who facilitate abortions. Like the pregnant themselves, criminal and civil enforcers depend on information. And in the contemporary context, the informational landscape, and hence access to counseling and services such as medication abortion, is largely digital. In an era when most people use search engines or social media to access information, the digital architecture and data retention policies of those platforms will determine not only whether the pregnant can access medically accurate advice but also whether the mere act of doing so places them in legal peril.

This Article offers the first comprehensive accounting of abortion-related digital privacy after the end of Roe. It demonstrates first that digital privacy for pregnant persons in the United States has suddenly become a tremendously fraught and complex question. It then maps the treacherous social, legal and economic terrain upon which firms, individuals, and states will make privacy-related decisions. Building on this political economy, we develop a moral and economic argument to the effect that digital firms should maximize digital privacy for pregnant persons within the scope of the law, and should actively resist restrictionist states’ efforts to instrumentalize them into their war on reproductive choice. We then lay out precise, tangible steps that firms should take to enact this active resistance, explaining in particular a range of powerful yet legal options for firms to refuse cooperation with restrictionist criminal and civil investigations. Finally, we present an original, concrete and immediately actionable proposal for federal and state legislative intervention: a statutory evidentiary privilege to shield abortion-relevant data from restrictionist warrants, subpoenas, court orders, and judicial proceedings…(More)”.

Cloud labs and remote research aren’t the future of science – they’re here


Article by Tom Ireland: “Cloud labs mean anybody, anywhere can conduct experiments by remote control, using nothing more than their web browser. Experiments are programmed through a subscription-based online interface – software then coordinates robots and automated scientific instruments to perform the experiment and process the data. Friday night is Emerald’s busiest time of the week, as scientists schedule experiments to run while they relax with their families over the weekend.

There are still some things robots can’t do, for example lifting giant carboys (containers for liquids) or unwrapping samples sent by mail, and there are a few instruments that just can’t be automated. Hence the people in blue coats, who look a little like pickers in an Amazon warehouse. It turns out that they are, in fact, mostly former Amazon employees.

Plugging an experiment into a browser forces researchers to translate the exact details of every step into unambiguous code
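What that translation might look like is sketched below in plain Python; the class and step names are invented for illustration, not any cloud lab’s real API, but they convey the point of the pull-quote: every volume, temperature and duration must be spelled out unambiguously before a robot can execute it.

```python
from dataclasses import dataclass, field

@dataclass
class Protocol:
    """Hypothetical protocol builder; every parameter must be explicit."""
    name: str
    steps: list = field(default_factory=list)

    def add(self, action: str, **params):
        self.steps.append((action, params))
        return self  # allow chaining

    def submit(self):
        # A real cloud lab would validate, schedule and run this on robots;
        # here we simply render the unambiguous step list.
        print(f"protocol: {self.name}")
        for i, (action, params) in enumerate(self.steps, 1):
            args = ", ".join(f"{k}={v!r}" for k, v in params.items())
            print(f"  {i}. {action}({args})")

(Protocol("serial_dilution")
    .add("transfer", source="stock_A", dest="plate1/A1", volume_uL=100)
    .add("mix", target="plate1/A1", cycles=5)
    .add("incubate", target="plate1", temp_C=37, duration_min=30)
    .add("read_absorbance", target="plate1", wavelength_nm=600)
).submit()
```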

Emerald originally employed scientists and lab technicians to help the facility run smoothly, but they were creatively stifled with so little to do. Poaching Amazon employees has turned out to be an improvement. “We pay them twice what they were getting at Amazon to do something way more fulfilling than stuffing toilet paper into boxes,” says Frezza. “You’re keeping someone’s drug-discovery experiment running at full speed.”

Further south in the San Francisco Bay Area are two more cloud labs, run by the company Strateos. Racks of gleaming life science instruments – incubators, mixers, mass spectrometers, PCR machines – sit humming inside large Perspex boxes known as workcells. The setup is arguably even more futuristic than at Emerald. Here, reagents and samples whizz to the correct workcell on hi-tech magnetic conveyor belts and are gently loaded into place by dextrous robot arms. Researchers’ experiments are “delocalised”, as Strateos’s executive director of operations, Marc Siladi, puts it…(More)”.

The wealth of (Open Data) nations? Open government data, country-level institutions and entrepreneurial activity


Paper by Franz Huber, Alan Ponce, Francesco Rentocchini & Thomas Wainwright: “Lately, Open Data (OD) has been promoted by governments around the world as a resource to accelerate innovation within entrepreneurial ventures. However, it remains unclear to what extent OD drives innovative entrepreneurship. This paper sheds light on this open question by providing novel empirical evidence on the relationship between OD publishing and (digital) entrepreneurship at the country level. We draw upon a longitudinal dataset comprising 90 countries observed over the period 2013–2016. We find a significant and positive association between OD publishing and entrepreneurship at the country level. The results also show that the association between OD publishing and entrepreneurship is stronger in countries with high institutional quality. We argue that publishing OD alone is not sufficient to improve innovative entrepreneurship, so states need to move beyond a focus on OD initiatives and promotion, to focus on a broader set of policy initiatives that promote good governance…(More)”.
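For readers curious about the mechanics, the headline estimate is the kind a two-way fixed-effects panel regression with an interaction term delivers. Below is a sketch on synthetic data standing in for the 90-country, 2013–2016 panel; the variable names and magnitudes are ours, not the authors’.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
panel = [(c, y) for c in range(90) for y in range(2013, 2017)]
df = pd.DataFrame(panel, columns=["country", "year"])
df["open_data"] = rng.uniform(0, 1, len(df))      # OD publishing index
df["inst_quality"] = rng.uniform(0, 1, len(df))   # institutional quality
df["entrepreneurship"] = (                        # outcome, built with an
    0.3 * df["open_data"]                         # assumed interaction so the
    + 0.5 * df["open_data"] * df["inst_quality"]  # OD effect grows with
    + rng.normal(0, 0.1, len(df))                 # institutional quality
)

# Country and year fixed effects plus an OD x institutions interaction,
# with standard errors clustered by country
model = smf.ols(
    "entrepreneurship ~ open_data * inst_quality + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.params.filter(like="open_data"))
```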