Ten computer codes that transformed science


Jeffrey M. Perkel at Nature: “From Fortran to arXiv.org, these advances in programming and platforms sent biology, climate science and physics into warp speed….In 2019, the Event Horizon Telescope team gave the world the first glimpse of what a black hole actually looks like. But the image of a glowing, ring-shaped object that the group unveiled wasn’t a conventional photograph. It was computed — a mathematical transformation of data captured by radio telescopes in the United States, Mexico, Chile, Spain and the South Pole. The team released the programming code it used to accomplish that feat alongside the articles that documented its findings, so the scientific community could see — and build on — what it had done.
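
The core idea, that an image is computed from sparse Fourier-plane measurements rather than captured directly, can be illustrated in a few lines of NumPy. The sketch below is a deliberately crude zero-filled inverse FFT on synthetic data, a stand-in for the principle only and not the imaging pipeline the EHT team released:

```python
# Toy sketch: an interferometer samples the Fourier transform of the sky, and
# an image is *computed* from those sparse samples. Zero-filled inverse FFT
# ("dirty image") on synthetic data; not the EHT team's actual pipeline.
import numpy as np

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
sky = ((r > 18) & (r < 26)).astype(float)         # a glowing ring, as a stand-in source

visibilities = np.fft.fftshift(np.fft.fft2(sky))  # idealized Fourier-plane measurements

rng = np.random.default_rng(0)
mask = rng.random(visibilities.shape) < 0.05      # sparse (u, v) coverage: ~5% of points kept

dirty_image = np.abs(np.fft.ifft2(np.fft.ifftshift(visibilities * mask)))
print(dirty_image.shape, round(dirty_image.max(), 3))  # a blurred ring emerges from sparse data
```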

It’s an increasingly common pattern. From astronomy to zoology, behind every great scientific finding of the modern age, there is a computer. Michael Levitt, a computational biologist at Stanford University in California who won a share of the 2013 Nobel Prize in Chemistry for his work on computational strategies for modelling chemical structure, notes that today’s laptops have about 10,000 times the memory and clock speed that his lab-built computer had in 1967, when he began his prizewinning work. “We really do have quite phenomenal amounts of computing at our hands today,” he says. “Trouble is, it still requires thinking.”

Enter the scientist-coder. A powerful computer is useless without software capable of tackling research questions — and researchers who know how to write it and use it. “Research is now fundamentally connected to software,” says Neil Chue Hong, director of the Software Sustainability Institute, headquartered in Edinburgh, UK, an organization dedicated to improving the development and use of software in science. “It permeates every aspect of the conduct of research.”

Scientific discoveries rightly get top billing in the media. But Nature this week looks behind the scenes, at the key pieces of code that have transformed research over the past few decades.

Although no list like this can be definitive, we polled dozens of researchers over the past year to develop a diverse line-up of ten software tools that have had a big impact on the world of science. You can weigh in on our choices at the end of the story….(More)”.

Digital contention in a divided society


Book by Paul Reilly: “How are platforms such as Facebook and Twitter used by citizens to frame contentious parades and protests in ‘post-conflict’ Northern Ireland? What do these contentious episodes tell us about the potential of information and communication technologies to promote positive intergroup contact in this deeply divided society?

These issues are addressed in what is the first in-depth qualitative exploration of how social media were used during the union flag protests (December 2012-March 2013) and the Ardoyne parade disputes (July 2014 and 2015). The book focuses on the extent to which affective publics, mobilised and connected via expressions of solidarity on social media, appear to escalate or de-escalate sectarian tensions caused by these hybrid media events. It also explores whether citizen activity on these online platforms has the potential to contribute to peacebuilding in Northern Ireland….(More)”.

Guide to Good Practice on the Use of New Technologies for the Administration of Justice


Report by México Evalúa: “This document offers a brief review of the decisions, initiatives and implementation processes of various policies designed by the judiciary to incorporate new technologies into its work. We are interested in highlighting the role that these tools can play not only in diversifying the means through which the public accesses justice services, but also in facilitating and improving the organization of work in courts and tribunals. We also analyze the way in which applying certain technological developments to judicial tasks, in particular tele- or videoconferencing, has redefined the traditional structure of judicial proceedings by allowing the remote, simultaneous and collective interaction of the parties involved. We also reflect on the dilemmas, viability and not-always-intended effects of the use of new technologies in the administration of justice.

(…)

We chose to analyze them through the lens of the procedural moment at which they intervene, that is, from the user’s perspective, because although technological solutions may have a wide range of objectives, it seems to us that, behind any technological development, the goal of facilitating, expanding and improving citizens’ access to justice should always prevail. We report several experiences aimed at reorganizing the processing of legal proceedings across the various phases that structure them, from the procedural activation stage (the filing of a lawsuit or the judicialization of a criminal investigation), through the processing of cases (hearings, proceedings), to the execution of court rulings (judgments, arbitral awards). We would like to emphasize that access to justice encompasses everything from the processing of cases to the timely enforcement of court rulings. That vision can be summarized with the following figure:…(More)”.

Digital Technology and Democratic Theory


Book edited by Lucy Bernholz, Helene Landemore, and Rob Reich: “One of the most far-reaching transformations in our era is the wave of digital technologies rolling over—and upending—nearly every aspect of life. Work and leisure, family and friendship, community and citizenship have all been modified by now-ubiquitous digital tools and platforms. Digital Technology and Democratic Theory looks closely at one significant facet of our rapidly evolving digital lives: how technology is radically changing our lives as citizens and participants in democratic governments.


To understand these transformations, this book brings together contributions by scholars from multiple disciplines to wrestle with the question of how digital technologies shape, reshape, and affect fundamental questions about democracy and democratic theory. As expectations have whiplashed—from Twitter optimism in the wake of the Arab Spring to Facebook pessimism in the wake of the 2016 US election—the time is ripe for a more sober and long-term assessment. How should we take stock of digital technologies and their promise and peril for reshaping democratic societies and institutions? To answer, this volume broaches the most pressing technological changes and issues facing democracy as a philosophy and an institution….(More)”.

Enabling the future of academic research with the Twitter API


Twitter Developer Blog: “When we introduced the next generation of the Twitter API in July 2020, we also shared our plans to invest in the success of the academic research community with tailored solutions that better serve their goals. Today, we’re excited to launch the Academic Research product track on the new Twitter API. 

Why we’re launching this & how we got here

Since the Twitter API was first introduced in 2006, academic researchers have used data from the public conversation to study topics as diverse as the conversation on Twitter itself – from state-backed efforts to disrupt the public conversation to floods and climate change, from attitudes and perceptions about COVID-19 to efforts to promote healthy conversation online. Today, academic researchers are one of the largest groups of people using the Twitter API. 

Our developer platform hasn’t always made it easy for researchers to access the data they need, and many have had to rely on their own resourcefulness to find the right information. Despite this, for over a decade, academic researchers have used Twitter data for discoveries and innovations that help make the world a better place.

Over the past couple of years, we’ve taken iterative steps to improve the experience for researchers, like when we launched a webpage dedicated to Academic Research, and updated our Twitter Developer Policy to make it easier to validate or reproduce others’ research using Twitter data.

We’ve also made improvements to help academic researchers use Twitter data to advance their disciplines, answer urgent questions during crises, and even help us improve Twitter. For example, in April 2020, we released the COVID-19 stream endpoint – the first free, topic-based stream built solely for researchers to use data from the global conversation for the public good. Researchers from around the world continue to use this endpoint for a number of projects.

Over two years ago, we started our own extensive research to better understand the needs, constraints and challenges that researchers have when studying the public conversation. In October 2020, we tested this product track in a private beta program where we gathered additional feedback. This gave us a glimpse into some of the important work that the free Academic Research product track we’re launching today can now enable….(More)”.
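
For readers curious what access under the new track looks like in practice, the sketch below shows one way a researcher might call the v2 full-archive search endpoint with a bearer token. The endpoint URL, query operators and field names follow Twitter's v2 documentation at the time and may change; the `TWITTER_BEARER_TOKEN` variable and the `search_all` helper are illustrative choices, not part of any official client:

```python
# Minimal sketch of one page of a full-archive search, the kind of access the
# Academic Research track unlocks. Assumes a valid bearer token in the
# environment; endpoint and parameters reflect the v2 API docs at the time.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

def search_all(query: str, max_results: int = 100) -> dict:
    """Fetch one page of results from the v2 full-archive search endpoint."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/all",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "max_results": max_results,
            "tweet.fields": "created_at,public_metrics,lang",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    page = search_all("(flood OR wildfire) lang:en -is:retweet")
    print(len(page.get("data", [])), "tweets returned")
```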

Facebook will let researchers study how advertisers targeted users with political ads prior to Election Day


Nick Statt at The Verge: “Facebook is aiming to improve transparency around political advertising on its platform by opening up more data to independent researchers, including targeting information on more than 1.3 million ads that ran in the three months prior to the US election on November 3rd of last year. Researchers interested in studying the ads can apply for access to the Facebook Open Research and Transparency (FORT) platform here.

The move is significant because Facebook has long resisted willfully allowing access to data around political advertising, often citing user privacy. The company has gone so far as to even disable third-party web plugins, like ProPublica’s Facebook Political Ad Collector tool, that collect such data without Facebook’s express consent.

Numerous research groups around the globe have spent years now studying Facebook’s impact on everything from democratic elections to news dissemination, but sometimes without full access to all the desired data. Only last year, after partnering with Harvard University’s Social Science One (the group overseeing applications for the new political ad targeting initiative), did Facebook better formalize the process of granting anonymized user data for research studies.

In the past, Facebook has made some crucial political ad information in its Ad Library available to the public, including the amount spent on certain ads and demographic information about who saw those ads. But now the company says it wants to do more to improve transparency, specifically around how advertisers target certain subsets of users with political advertising….(More)”.

The High Price of Mistrust


fs.blog: “There are costs to falling community participation. Rather than simply lamenting the loss of a past golden era (as people have done in every era), Harvard political scientist Robert D. Putnam explains these costs, as well as how we might bring community participation back.

First published twenty years ago, Bowling Alone is an exhaustive, hefty work. In its 544 pages, Putnam negotiated mountains of data to support his thesis that the previous few decades had seen Americans retreat en masse from public life. Putnam argued Americans had become disconnected from their wider communities, as evidenced by changes such as a decline in civic engagement and dwindling membership rates for groups such as bowling leagues and PTAs.

Though aspects of Bowling Alone are a little dated today (“computer-mediated communication” isn’t a phrase you’re likely to have heard recently), a quick glance at 2021’s social landscape would suggest many of the trends Putnam described have only continued and apply in other parts of the world too.

Right now, polarization and social distancing have forced us apart from any sense of community to a degree that can seem irresolvable.

Will we ever bowl in leagues alongside near strangers and turn them into friends again? Will we ever bowl again at all, even if alone, or will those gleaming aisles, too-tight shoes, and overpriced sodas fade into a distant memory we recount to our children?

The idea of going into a public space for a non-essential reason can feel incredibly out of reach for many of us right now. And who knows how spaces like bowling alleys will survive in the long run without the social scenes that fuelled them. Now is a perfect time to revisit Bowling Alone to see what it can still teach us, because many of its warnings and lessons are perhaps more relevant now than at its time of publication.

One key lesson we can derive from Bowling Alone is that the less we trust each other—something which is both a cause and consequence of declining community engagement—the more it costs us. Mistrust is expensive.…(More)”

These crowdsourced maps will show exactly where surveillance cameras are watching


Mark Sullivan at FastCompany: “Amnesty International is producing a map of all the places in New York City where surveillance cameras are scanning residents’ faces.

The project will enlist volunteers to use their smartphones to identify, photograph, and locate government-owned surveillance cameras capable of shooting video that could be matched against people’s faces in a database through AI-powered facial recognition.

The map that will eventually result is meant to give New Yorkers the power of information against an invasive technology whose usage and purpose are often not fully disclosed to the public. It’s also meant to put pressure on the New York City Council to write and pass a law restricting or banning it. Other U.S. cities, such as Boston, Portland, and San Francisco, have already passed such laws.

Facial recognition technology can be developed by scraping millions of images from social media profiles and driver’s licenses without people’s consent, Amnesty says. Software from companies like Clearview AI can then use computer vision algorithms to match those images against facial images captured by closed-circuit television (CCTV) or other video surveillance cameras and stored in a database.
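
The matching step can be pictured schematically as comparing face embeddings by cosine similarity against a database of enrolled vectors. In the sketch below the embeddings are random stand-ins rather than outputs of a real face-recognition model, and the threshold is arbitrary; no vendor's actual system is implied:

```python
# Schematic of the matching step only: face images are reduced to embedding
# vectors, and a probe from a CCTV frame is compared against a database by
# cosine similarity. Vectors here are random stand-ins, not a real model.
import numpy as np

rng = np.random.default_rng(1)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}  # enrolled embeddings
probe = rng.normal(size=128)                                           # embedding from CCTV footage

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(vec, probe) for name, vec in database.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
THRESHOLD = 0.6  # arbitrary; real systems tune this, trading false matches against misses
print(best_name if best_score >= THRESHOLD else "no match", round(best_score, 3))
```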

Starting in May, volunteers will be able to use a software tool to identify all the facial recognition cameras within their view—like at an intersection where numerous cameras can often be found. The tool, which runs on a phone’s browser, lets users place a square around any cameras they see. The software integrates Google Street View and Google Earth to help volunteers label and attach geolocation data to the cameras they spot.
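
One way to imagine the record each volunteer report might produce is a geotagged bounding box plus a few labels. The schema below is purely illustrative, invented here to make the workflow concrete, and is not the data model actually used by the Ban the Scan tool:

```python
# Hypothetical record a volunteer report might generate: a geotagged bounding
# box drawn around a spotted camera, plus descriptive labels. Illustrative
# schema only; not Amnesty's actual data model.
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraReport:
    latitude: float          # geolocation refined via Street View / Earth
    longitude: float
    bbox: tuple[float, float, float, float]  # normalized (x, y, width, height) square drawn by the user
    camera_type: str         # e.g. "attached to building", "mounted on a pole"
    owner: str               # e.g. "government", "private", "unknown"
    reported_at: str         # ISO 8601 timestamp

report = CameraReport(40.7128, -74.0060, (0.42, 0.31, 0.05, 0.07),
                      "mounted on a pole", "government", "2021-05-01T14:32:00Z")
print(json.dumps(asdict(report), indent=2))
```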

The map is part of a larger campaign called “Ban the Scan” that’s meant to educate people around the world on the civil rights dangers of facial recognition. Research has shown that facial recognition systems aren’t as accurate when it comes to analyzing dark-skinned faces, putting Black people at risk of being misidentified. Even when accurate, the technology exacerbates systemic racism because it is disproportionately used to identify people of color, who are already subject to discrimination by law enforcement officials. The campaign is sponsored by Amnesty in partnership with a number of other tech advocacy, privacy, and civil liberties groups.

In the initial phase of the project, which was announced last Thursday, Amnesty and its partners launched a website that New Yorkers can use to generate public comments on the New York Police Department’s (NYPD’s) use of facial recognition….(More)”.

Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
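
The consensus mechanism can be pictured with a toy example: each note gathers helpful and not-helpful ratings from other contributors, and notes are ordered by a smoothed helpfulness score so the "winning" responses surface first. The snippet below illustrates that general idea only; it is not Birdwatch's actual ranking algorithm, and the notes and rating counts are invented:

```python
# Toy illustration of the consensus idea: each note on a tweet collects
# helpful / not-helpful ratings from other contributors, and notes are ranked
# by a smoothed helpfulness score. Not Twitter's actual Birdwatch algorithm.
from dataclasses import dataclass

@dataclass
class Note:
    author: str
    text: str
    helpful: int = 0
    not_helpful: int = 0

    def score(self) -> float:
        # Laplace-smoothed ratio, so a note with two ratings can't outrank one with forty
        return (self.helpful + 1) / (self.helpful + self.not_helpful + 2)

notes = [
    Note("a", "The official tally contradicts this claim (link to source).", helpful=40, not_helpful=5),
    Note("b", "Fake!!", helpful=3, not_helpful=30),
    Note("c", "Context: the figure cited refers to 2019, not 2020.", helpful=12, not_helpful=2),
]

# Highest-scoring ("winning") notes would be shown more prominently
for note in sorted(notes, key=lambda n: n.score(), reverse=True):
    print(f"{note.score():.2f}  {note.text}")
```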

A recommendation and risk classification system for connecting rough sleepers to essential outreach services


Paper by Harrison Wilde et al.: “Rough sleeping is a chronic experience faced by some of the most disadvantaged people in modern society. This paper describes work carried out in partnership with Homeless Link (HL), a UK-based charity, in developing a data-driven approach to better connect people sleeping rough on the streets with outreach service providers. HL’s platform has grown exponentially in recent years, leading to thousands of alerts per day during extreme weather events; this overwhelms the volunteer-based system they currently rely upon for the processing of alerts. In order to solve this problem, we propose a human-centered machine learning system to augment the volunteers’ efforts by prioritizing alerts based on the likelihood of making a successful connection with a rough sleeper. This addresses capacity and resource limitations whilst allowing HL to quickly, effectively, and equitably process all of the alerts that they receive. Initial evaluation using historical data shows that our approach increases the rate at which rough sleepers are found following a referral by at least 15% on labeled data, implying a greater overall increase once alerts with unknown outcomes are considered, and suggesting the benefit of a longer trial to assess the models in practice. The discussion and modeling process are carried out with careful consideration of ethics, transparency, and explainability due to the sensitive nature of the data involved and the vulnerability of the people that are affected….(More)”.
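
In outline, the approach amounts to fitting a classifier on historical alerts and ranking incoming ones by the predicted probability of a successful connection. The sketch below reproduces that general pattern on synthetic data: the features (hours since the alert, distance to the nearest outreach team, temperature, a precise-location flag) and the model choice are invented for illustration and are not the authors' model or data:

```python
# Minimal sketch of the general approach: fit a classifier on historical alerts
# (features plus whether a connection was made), then rank new alerts by
# predicted probability. Synthetic data and hypothetical features throughout.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 2000
# Hypothetical features: hours since alert, distance to nearest outreach team (km),
# temperature (deg C), and a flag for whether the alert has a precise location.
X = np.column_stack([
    rng.exponential(6, n),
    rng.exponential(3, n),
    rng.normal(8, 6, n),
    rng.integers(0, 2, n),
])
# Synthetic outcomes: connections are likelier for fresh, nearby, well-located alerts.
logit = -0.15 * X[:, 0] - 0.4 * X[:, 1] - 0.02 * X[:, 2] + 1.2 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank three incoming alerts by estimated chance of a successful connection
incoming = np.array([[2.0, 1.0, 4.0, 1], [20.0, 8.0, -1.0, 0], [5.0, 2.5, 10.0, 1]])
priority = model.predict_proba(incoming)[:, 1]
for rank, idx in enumerate(np.argsort(-priority), 1):
    print(f"rank {rank}: alert {idx} (p={priority[idx]:.2f})")
```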