Small-Scale Deliberation and Mass Democracy: A Systematic Review of the Spillover Effects of Deliberative Minipublics


Paper by Ramon van der Does and Vincent Jacquet: “Deliberative minipublics are popular tools to address the current crisis in democracy. However, it remains ambiguous to what degree these small-scale forums matter for mass democracy. In this study, we ask the question to what extent minipublics have “spillover effects” on lay citizens—that is, long-term effects on participating citizens and effects on non-participating citizens. We answer this question by means of a systematic review of the empirical research on minipublics’ spillover effects published before 2019. We identify 60 eligible studies published between 1999 and 2018 and provide a synthesis of the empirical results. We show that the evidence for most spillover effects remains tentative because the relevant body of empirical evidence is still small. Based on the review, we discuss the implications for democratic theory and outline several trajectories for future research…(More)”.

Ethics and governance of artificial intelligence for health


The WHO guidance: “Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law, and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.

The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use…(More)”

Pooling society’s collective intelligence helped fight COVID – it must help fight future crises too


Aleks Berditchevskaia and Kathy Peach at The Conversation: “A Global Pandemic Radar is to be created to detect new COVID variants and other emerging diseases. Led by the WHO, the project aims to build an international network of surveillance hubs, set up to share data that’ll help us monitor vaccine resistance, track diseases and identify new ones as they emerge.

This is undeniably a good thing. Perhaps more than any event in recent memory, the COVID pandemic has brought home the importance of pooling society’s collective intelligence and finding new ways to share that combined knowledge as quickly as possible.

At its simplest, collective intelligence is the enhanced capacity that’s created when diverse groups of people work together, often with the help of technology, to mobilise more information, ideas and knowledge to solve a problem. Digital technologies have transformed what can be achieved through collective intelligence in recent years – connecting more of us, augmenting human intelligence with machine intelligence, and helping us to generate new insights from novel sources of data.

So what have we learned over the last 18 months of collective intelligence pooling that can inform the Global Pandemic Radar? Building from the COVID crisis, what lessons will help us perfect disease surveillance and respond better to future crises?…(More)”

Linux Foundation unveils new permissive license for open data collaboration


VentureBeat: “The Linux Foundation has announced a new permissive license designed to help foster collaboration around open data for artificial intelligence (AI) and machine learning (ML) projects.

Data may be the new oil, but for AI and ML projects, having access to expansive and diverse datasets is key to reducing bias and building powerful models capable of all manner of intelligent tasks. For machines, data is a little like “experience” is for humans — the more of it you have, the better decisions you are likely to make.

With CDLA-Permissive-2.0, the Linux Foundation is building on its previous efforts to encourage data-sharing through licensing arrangements that clearly define how the data — and any derivative datasets — can and can’t be used.

The Linux Foundation introduced the Community Data License Agreement (CDLA) in 2017 to entice organizations to open up their vast pools of (underused) data to third parties. There were two original licenses: a sharing license with a “copyleft” reciprocal commitment borrowed from the open source software sphere, stipulating that any derivative datasets built from the original dataset must be shared under a similar license, and a permissive license (1.0) without any such obligations in place (much as “true” open source software might be defined).

Licenses are basically legal documents that outline how a piece of work (in this case datasets) can be used or modified, but specific phrases, ambiguities, or exceptions can often be enough to spook companies if they think releasing content under a specific license could cause them problems down the line. This is where the CDLA-Permissive-2.0 license comes into play — it’s essentially a rewrite of version 1.0 but shorter and simpler to follow. Going further, it has removed certain provisions that were deemed unnecessary or burdensome and may have hindered broader use of the license.

For example, version 1.0 of the license included obligations that data recipients preserve attribution notices in the datasets. For context, attribution notices or statements are standard in the software sphere, where a company that releases software built on open source components has to credit the creators of these components in its own software license. But the Linux Foundation said feedback it received from the community and lawyers representing companies involved in open data projects pointed to challenges around associating attributions with data (or versions of datasets).

So while data source attribution is still an option, and might make sense for specific projects — particularly where transparency is paramount — it is no longer a condition for businesses looking to share data under the new permissive license. The chief remaining obligation is that the main community data license agreement text be included with the new datasets…(More)”.
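For a concrete sense of what that remaining obligation can look like in practice, the following is a minimal, hypothetical Python sketch of packaging a derivative dataset together with the CDLA-Permissive-2.0 agreement text. The file names, folder layout, and helper function are illustrative assumptions, not requirements spelled out by the Linux Foundation or the article.

```python
# Hypothetical sketch of satisfying CDLA-Permissive-2.0's main remaining
# obligation: ship the agreement text alongside any dataset you share.
# File names and layout here are illustrative assumptions, not license terms.
from pathlib import Path
import shutil


def package_dataset(dataset_path: str, cdla_text_path: str, release_dir: str) -> Path:
    """Copy a (derivative) dataset and the CDLA agreement text into one release folder."""
    out = Path(release_dir)
    out.mkdir(parents=True, exist_ok=True)
    shutil.copy(dataset_path, out / Path(dataset_path).name)      # the data itself
    shutil.copy(cdla_text_path, out / "CDLA-Permissive-2.0.txt")  # the agreement text
    return out


if __name__ == "__main__":
    # Paths are placeholders; attribution notices could still be added voluntarily.
    package_dataset("derived_dataset.csv", "CDLA-Permissive-2.0.txt", "release")
```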

Spies Like Us: The Promise and Peril of Crowdsourced Intelligence


Book Review by Amy Zegart of “We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News” by Eliot Higgins: “On January 6, throngs of supporters of U.S. President Donald Trump rampaged through the U.S. Capitol in an attempt to derail Congress’s certification of the 2020 presidential election results. The mob threatened lawmakers, destroyed property, and injured more than 100 police officers; five people, including one officer, died in circumstances surrounding the assault. It was the first attack on the Capitol since the War of 1812 and the first violent transfer of presidential power in American history.

Only a handful of the rioters were arrested immediately. Most simply left the Capitol complex and disappeared into the streets of Washington. But they did not get away for long. It turns out that the insurrectionists were fond of taking selfies. Many of them posted photos and videos documenting their role in the assault on Facebook, Instagram, Parler, and other social media platforms. Some even earned money live-streaming the event and chatting with extremist fans on a site called DLive. 

Amateur sleuths immediately took to Twitter, self-organizing to help law enforcement agencies identify and charge the rioters. Their investigation was impromptu, not orchestrated, and open to anyone, not just experts. Participants didn’t need a badge or a security clearance—just an Internet connection….(More)”.

Metroverse


About: “Metroverse is an urban economy navigator built at the Growth Lab at Harvard University. It is based on over a decade of research on how economies grow and diversify and offers a detailed look into the specialization patterns of cities.

As a dynamic resource, the tool is continually evolving with new data and features to help answer questions such as:

  • What is the economic composition of my city?
  • How does my city compare to cities around the globe?
  • Which cities look most like mine?
  • What are the technological capabilities that underpin my city’s current economy?
  • Which growth and diversification paths does that suggest for the future?

As city leaders, job seekers, investors and researchers grapple with 21st century urbanization challenges, the answers to these questions are fundamental to understanding the potential of a city.

Metroverse delivers new insights on these questions by placing a city’s technological capabilities and knowhow at the heart of its growth prospects, where the range and nature of existing capabilities strongly influences how future diversification unfolds. Metroverse makes visible what a city is good at today to help understand what it can become tomorrow…(More)”.

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

Even for all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but instead as a potential step in the right direction. 

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”

Scientific publishing’s new weapon for the next crisis: the rapid correction


Gideon Meyerowitz-Katz and James Heathers at STATNews: “If evidence of errors does emerge, the process for correcting or withdrawing a paper tends to be alarmingly long. Late last year, for example, David Cox, the IBM director of the MIT-IBM Watson AI Lab, discovered that his name was included as an author on two papers he had never written. After he wrote to the journals involved, it took almost three months for them to remove his name and the papers themselves. In cases of large-scale research fraud, correction times can be measured in years.

Imagine now that the issue with a manuscript is not a simple matter of retracting a fraudulent paper, but a more complex methodological or statistical problem that undercuts the study’s conclusions. In this context, requests for clarification — or retraction — can languish for years. The process can outlast the tenure of the responsible editor, resetting the clock on the entire ordeal, or the journal itself can cease publication, leaving an erroneous article in the public domain without oversight, forever….

This situation must change, and change quickly. Any crisis that requires scientific information in a hurry will produce hurried science, and hurried science often includes miscalculated analyses, poor experimental design, inappropriate statistical models, impossible numbers, or even fraud. Having the agility to produce and publicize work like this without having the ability to correct it just as quickly is a curiously persistent oversight in the global scientific enterprise. If corrections occur only long after the research has already been used to treat people across the world, what use are they at all?

There are some small steps in the right direction. The open-source website PubPeer aggregates formal scientific criticism, and when shoddy research makes it into the literature, hordes of critics may leave comments and questions on the site within hours. Twitter, likewise, is often abuzz with spectacular scientific critiques almost as soon as studies go up online.

But these volunteer efforts are not enough. Even when errors are glaring and obvious, the median response from academic journals is to deal with them grudgingly or not at all. Academia in general takes a faintly disapproving tone toward crowd-sourced error correction, ignoring the fact that it is often the only mechanism that exists to do this vital work.

Scientific publishing needs to stop treating error-checking as a slightly inconvenient side note and make it a core part of academic research. In a perfect world, entire departmental sections would be dedicated to making sure that published research is correct and reliable. But even a few positions would be a fine start. Young researchers could be given kudos not just for every citation in their Google Scholar profile but also for every post-publication review they undertake….(More)”

When Graphs Are a Matter of Life and Death


Essay by Hannah Fry at The New Yorker: “John Carter has only an hour to decide. The most important auto race of the season is looming; it will be broadcast live on national television and could bring major prize money. If his team wins, it will get a sponsorship deal and a chance to start making some real profits for a change.

There’s just one problem. In seven of the past twenty-four races, the engine in the Carter Racing car has blown out. An engine failure live on TV will jeopardize sponsorships—and the driver’s life. But withdrawing has consequences, too. The wasted entry fee means finishing the season in debt, and the team won’t be happy about the missed opportunity for glory. As Burns’s First Law of Racing says, “Nobody ever won a race sitting in the pits.”

One of the engine mechanics has a hunch about what’s causing the blowouts. He thinks that the engine’s head gasket might be breaking in cooler weather. To help Carter decide what to do, a graph is devised that shows the conditions during each of the blowouts: the outdoor temperature at the time of the race plotted against the number of breaks in the head gasket. The dots are scattered into a sort of crooked smile across a range of temperatures from about fifty-five degrees to seventy-five degrees.

The upcoming race is forecast to be especially cold, just forty degrees, well below anything the cars have experienced before. So: race or withdraw?

This case study, based on real data, and devised by a pair of clever business professors, has been shown to students around the world for more than three decades. Most groups presented with the Carter Racing story look at the scattered dots on the graph and decide that the relationship between temperature and engine failure is inconclusive. Almost everyone chooses to race. Almost no one looks at that chart and asks to see the seventeen missing data points—the data from those races which did not end in engine failure.

As soon as those points are added, however, the terrible risk of a cold race becomes clear. Every race in which the engine behaved properly was conducted when the temperature was higher than sixty-five degrees; every single attempt that occurred in temperatures at or below sixty-five degrees resulted in engine failure. Tomorrow’s race would almost certainly end in catastrophe.
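To make the visual point concrete, here is a small Python/matplotlib sketch using made-up numbers consistent with the essay’s description (failures scattered between the mid-fifties and seventy-five degrees, and all seventeen trouble-free races above sixty-five degrees), not the actual case-study data: plotting only the failures shows no clear trend, while plotting every race makes the cold-weather risk obvious.

```python
# Illustrative only: hypothetical temperatures and gasket-break counts chosen to
# match the essay's description, not the real Carter Racing / Challenger data.
import matplotlib.pyplot as plt

fail_temps = [55, 57, 58, 63, 65, 70, 75]      # races that ended in engine failure
fail_breaks = [3, 1, 1, 1, 1, 1, 2]            # head-gasket breaks in those races
ok_temps = [66, 67, 67, 67, 68, 69, 70, 70, 72,
            73, 75, 76, 76, 78, 79, 80, 81]    # the seventeen races with no failure
ok_breaks = [0] * len(ok_temps)

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(9, 3.5))

# Left panel: failures only -- the "crooked smile" with no obvious pattern.
ax1.scatter(fail_temps, fail_breaks)
ax1.set(title="Failures only", xlabel="Temperature (°F)", ylabel="Gasket breaks")

# Right panel: the same failures plus the missing data points. Every clean race
# sits above 65°F, so a 40°F race looks like near-certain failure.
ax2.scatter(fail_temps, fail_breaks, label="Engine failure")
ax2.scatter(ok_temps, ok_breaks, marker="x", label="No failure")
ax2.set(title="All races", xlabel="Temperature (°F)")
ax2.legend()

plt.tight_layout()
plt.show()
```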

One more twist: the points on the graph are real but have nothing to do with auto racing. The first graph contains data compiled the evening before the disastrous launch of the space shuttle Challenger, in 1986….(More)”.

Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, something in between? 

It also raises other questions, like does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue? 

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.