Paper by Carla Bonina, Kari Koskinen, Ben Eaton, and Annabelle Gawer: “Digital platforms hold a central position in today’s world economy and are said to offer great potential for the economies and societies of the global South. Yet, to date, the scholarly literature on digital platforms has largely concentrated on business, while their developmental implications remain understudied. In part, this is because digital platforms are a challenging research object due to their lack of conceptual definition, their spread across different regions and industries, and their intertwined nature with institutions, actors and digital technologies. The purpose of this article is to contribute to the ongoing debate in information systems and ICT4D research on what digital platforms mean for development. To do so, we first define what digital platforms are, differentiate between transaction and innovation platforms, and explain their key characteristics in terms of purpose, research foundations, material properties and business models. We add the socio-technical context in which digital platforms operate and the linkages to developmental outcomes. We then conduct an extensive review to explore what current areas, developmental goals, tensions and issues emerge in the literature on platforms and development, and identify relevant gaps in our knowledge. Finally, we elaborate on six research questions to advance studies of digital platforms for development: on indigenous innovation, on digital platforms and institutions, on the exacerbation of inequalities, on alternative forms of value, on the dark side of platforms, and on the applicability of the platform typology for development….(More)”.
Using “Big Data” to forecast migration
Blog Post by Jasper Tjaden, Andres Arau, Muertizha Nuermaimaiti, Imge Cetin, Eduardo Acostamadiedo, Marzia Rango: “Act 1 — High Expectations
“Data is the new oil,” they say. “Big Data” is even bigger than that. The “data revolution” will contribute to solving societies’ problems and help governments adopt better policies and run more effective programs. In the migration field, digital trace data are seen as a potentially powerful tool for improving migration management processes (visa applications, asylum decisions and the geographic allocation of asylum seekers, the facilitation of integration, “smart borders”, etc.).
Forecasting migration is one particular area where big data seems to excite data nerds (like us) and policymakers alike. If there is one way big data has already made a difference, it is its ability to bring different actors together — data scientists, business people and policymakers — to sit through countless slides with numbers, tables and graphs. Traditional migration data sources, like censuses, administrative data and surveys, have never quite managed to generate the same level of excitement.
Many EU countries are currently heavily investing in new ways to forecast migration. Relatively large numbers of asylum seekers in 2014, 2015 and 2016 strained the capacity of many EU governments. Better forecasting tools are meant to help governments prepare in advance.
In a recent European Migration Network study, 10 out of the 22 EU governments surveyed said they make use of forecasting methods, many using open source data for “early warning and risk analysis” purposes. The 2020 European Migration Network conference was dedicated entirely to the theme of forecasting migration, hosting more than 15 expert presentations on the topic. The recently proposed EU Pact on Migration and Asylum outlines a “Migration Preparedness and Crisis Blueprint” which “should provide timely and adequate information in order to establish the updated migration situational awareness and provide for early warning/forecasting, as well as increase resilience to efficiently deal with any type of migration crisis.” (p. 4) The European Commission is currently finalizing a feasibility study on the use of artificial intelligence for predicting migration to the EU; Frontex — the EU Border Agency — is scaling up efforts to forecast irregular border crossings; EASO — the European Asylum Support Office — is devising a composite “push-factor index” and experimenting with forecasting asylum-related migration flows using machine learning and data at scale. In Fall 2020, during Germany’s EU Council Presidency, the German Interior Ministry organized a workshop series around Migration 4.0 highlighting the benefits of various ways to “digitalize” migration management. At the same time, the EU is investing substantial resources in migration forecasting research under its Horizon2020 programme, including QuantMig, ITFLOWS, and HumMingBird.
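To make concrete what forecasting migration with digital trace data can look like in practice, here is a minimal, purely illustrative sketch (a toy model on synthetic data, not the approach used by EASO, Frontex or any of the projects named above) that regresses monthly asylum applications on their own lag and a lagged search-interest index:

```python
# Illustrative toy forecast: predict next month's asylum applications from
# last month's applications and a "digital trace" signal (for example a
# search-interest index) observed two months earlier. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = 48

# Synthetic stand-ins for real inputs.
interest = 50 + 10 * np.sin(np.arange(months) / 6) + rng.normal(0, 2, months)
applications = 1000 + 8 * np.roll(interest, 2) + rng.normal(0, 40, months)

# Lagged design matrix: applications at t-1 and search interest at t-2.
X = np.column_stack([applications[1:-1], interest[:-2]])
y = applications[2:]
model = LinearRegression().fit(X, y)

# One-step-ahead forecast for the next, unobserved month.
next_X = np.array([[applications[-1], interest[-2]]])
print(f"Forecast for next month: {model.predict(next_X)[0]:.0f} applications")
```

Real systems combine many more administrative and open data sources, but the basic logic of nowcasting with leading digital signals is similar.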
Is all this excitement warranted?
Yes, it is….(More)” See also: Big Data for Migration Alliance
Ten computer codes that transformed science
Jeffrey M. Perkel at Nature: “From Fortran to arXiv.org, these advances in programming and platforms sent biology, climate science and physics into warp speed….In 2019, the Event Horizon Telescope team gave the world the first glimpse of what a black hole actually looks like. But the image of a glowing, ring-shaped object that the group unveiled wasn’t a conventional photograph. It was computed — a mathematical transformation of data captured by radio telescopes in the United States, Mexico, Chile, Spain and the South Pole. The team released the programming code it used to accomplish that feat alongside the articles that documented its findings, so the scientific community could see — and build on — what it had done.
It’s an increasingly common pattern. From astronomy to zoology, behind every great scientific finding of the modern age, there is a computer. Michael Levitt, a computational biologist at Stanford University in California who won a share of the 2013 Nobel Prize in Chemistry for his work on computational strategies for modelling chemical structure, notes that today’s laptops have about 10,000 times the memory and clock speed that his lab-built computer had in 1967, when he began his prizewinning work. “We really do have quite phenomenal amounts of computing at our hands today,” he says. “Trouble is, it still requires thinking.”
Enter the scientist-coder. A powerful computer is useless without software capable of tackling research questions — and researchers who know how to write it and use it. “Research is now fundamentally connected to software,” says Neil Chue Hong, director of the Software Sustainability Institute, headquartered in Edinburgh, UK, an organization dedicated to improving the development and use of software in science. “It permeates every aspect of the conduct of research.”
Scientific discoveries rightly get top billing in the media. But Nature this week looks behind the scenes, at the key pieces of code that have transformed research over the past few decades.
Although no list like this can be definitive, we polled dozens of researchers over the past year to develop a diverse line-up of ten software tools that have had a big impact on the world of science. You can weigh in on our choices at the end of the story….(More)”.
Digital contention in a divided society
Book by Paul Reilly: “How are platforms such as Facebook and Twitter used by citizens to frame contentious parades and protests in ‘post-conflict’ Northern Ireland? What do these contentious episodes tell us about the potential of information and communication technologies to promote positive intergroup contact in this deeply divided society?
These issues are addressed in what is the first in-depth qualitative exploration of how social media were used during the union flag protests (December 2012-March 2013) and the Ardoyne parade disputes (July 2014 and 2015). The book focuses on the extent to which affective publics, mobilised and connected via expressions of solidarity on social media, appear to escalate or de-escalate sectarian tensions caused by these hybrid media events. It also explores whether citizen activity on these online platforms has the potential to contribute to peacebuilding in Northern Ireland….(More)”.
Guide to Good Practice on the Use of New Technologies for the Administration of Justice
Report by México Evalúa: “This document offers a brief review of the decisions, initiatives and implementation processes of various policies designed by the judiciary to incorporate new technologies into its work. We are interested in highlighting the role these tools can play not only in diversifying the means through which the public accesses the justice system, but also in facilitating and improving the organization of work in courts and tribunals. We also analyze the way in which applying certain technological developments to judicial tasks, in particular tele- and videoconferencing, has redefined the traditional structure of judicial proceedings by allowing the remote, simultaneous and collective interaction of the parties involved. We also reflect on the dilemmas, viability and not-always-intended effects of the use of new technologies in the administration of justice.
(…)
We chose to analyze them through the lens of the procedural moment at which they intervene, that is, from the user’s perspective, because although technological solutions may have a wide range of objectives, it seems to us that, behind any technological development, the goal of facilitating, expanding and improving citizens’ access to justice should always prevail. We report several experiences aimed at reorganizing the processing of legal proceedings across the various phases that structure them, from the procedural activation stage (the filing of a lawsuit or the judicialization of a criminal investigation) to the execution of court rulings (judgments, arbitral awards), passing through the processing of cases (hearings, proceedings). We would like to emphasize that access to justice encompasses everything from the processing of cases to the timely enforcement of court rulings. That vision can be summarized with the following figure:…(More)”.

Digital Technology and Democratic Theory
Book edited by Lucy Bernholz, Helene Landemore, and Rob Reich: “One of the most far-reaching transformations in our era is the wave of digital technologies rolling over—and upending—nearly every aspect of life. Work and leisure, family and friendship, community and citizenship have all been modified by now-ubiquitous digital tools and platforms. Digital Technology and Democratic Theory looks closely at one significant facet of our rapidly evolving digital lives: how technology is radically changing our lives as citizens and participants in democratic governments.
To understand these transformations, this book brings together contributions by scholars from multiple disciplines to wrestle with the question of how digital technologies shape, reshape, and affect fundamental questions about democracy and democratic theory. As expectations have whiplashed—from Twitter optimism in the wake of the Arab Spring to Facebook pessimism in the wake of the 2016 US election—the time is ripe for a more sober and long-term assessment. How should we take stock of digital technologies and their promise and peril for reshaping democratic societies and institutions? To answer, this volume broaches the most pressing technological changes and issues facing democracy as a philosophy and an institution….(More)”.
Enabling the future of academic research with the Twitter API
Twitter Developer Blog: “When we introduced the next generation of the Twitter API in July 2020, we also shared our plans to invest in the success of the academic research community with tailored solutions that better serve their goals. Today, we’re excited to launch the Academic Research product track on the new Twitter API.
Why we’re launching this & how we got here
Since the Twitter API was first introduced in 2006, academic researchers have used data from the public conversation to study topics as diverse as the conversation on Twitter itself – from state-backed efforts to disrupt the public conversation to floods and climate change, from attitudes and perceptions about COVID-19 to efforts to promote healthy conversation online. Today, academic researchers are one of the largest groups of people using the Twitter API.
Our developer platform hasn’t always made it easy for researchers to access the data they need, and many have had to rely on their own resourcefulness to find the right information. Despite this, for over a decade, academic researchers have used Twitter data for discoveries and innovations that help make the world a better place.
Over the past couple of years, we’ve taken iterative steps to improve the experience for researchers, like when we launched a webpage dedicated to Academic Research, and updated our Twitter Developer Policy to make it easier to validate or reproduce others’ research using Twitter data.
We’ve also made improvements to help academic researchers use Twitter data to advance their disciplines, answer urgent questions during crises, and even help us improve Twitter. For example, in April 2020, we released the COVID-19 stream endpoint – the first free, topic-based stream built solely for researchers to use data from the global conversation for the public good. Researchers from around the world continue to use this endpoint for a number of projects.
Over two years ago, we started our own extensive research to better understand the needs, constraints and challenges that researchers have when studying the public conversation. In October 2020, we tested this product track in a private beta program where we gathered additional feedback. This gave us a glimpse into some of the important work that the free Academic Research product track we’re launching today can now enable….(More)”.
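For a sense of what working with the new track involves, the sketch below queries the v2 full-archive search endpoint that the Academic Research product track unlocks. It reflects a reading of the public v2 documentation; the query string, dates and token variable are placeholders rather than anything prescribed by Twitter.

```python
# Minimal sketch of a v2 full-archive search request, the endpoint opened up
# to the Academic Research product track. Query, dates and token are placeholders.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # supplied with an approved developer account

response = requests.get(
    "https://api.twitter.com/2/tweets/search/all",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "covid-19 lang:en -is:retweet",  # example query
        "start_time": "2020-04-01T00:00:00Z",
        "end_time": "2020-04-02T00:00:00Z",
        "max_results": 100,
        "tweet.fields": "created_at,public_metrics",
    },
)
response.raise_for_status()

for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```

Results are paginated, so a production script would typically keep requesting pages using the next_token value returned in the response metadata.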
Facebook will let researchers study how advertisers targeted users with political ads prior to Election Day
Nick Statt at The Verge: “Facebook is aiming to improve transparency around political advertising on its platform by opening up more data to independent researchers, including targeting information on more than 1.3 million ads that ran in the three months prior to the US election on November 3rd of last year. Researchers interested in studying the ads can apply for access to the Facebook Open Research and Transparency (FORT) platform.
The move is significant because Facebook has long resisted voluntarily allowing access to data around political advertising, often citing user privacy. The company has gone so far as to disable third-party web plugins, like ProPublica’s Facebook Political Ad Collector tool, that collect such data without Facebook’s express consent.
Numerous research groups around the globe have spent years studying Facebook’s impact on everything from democratic elections to news dissemination, but sometimes without full access to all the desired data. Only last year, after partnering with Harvard University’s Social Science One (the group overseeing applications for the new political ad targeting initiative), did Facebook better formalize the process of granting access to anonymized user data for research studies.
In the past, Facebook has made some crucial political ad information in its Ad Library available to the public, including the amount spent on certain ads and demographic information about who saw those ads. But now the company says it wants to do more to improve transparency, specifically around how advertisers target certain subsets of users with political advertising….(More)”.
The High Price of Mistrust
fs.blog: “There are costs to falling community participation. Rather than simply lamenting the loss of a past golden era (as people have done in every era), Harvard political scientist Robert D. Putnam explains these costs, as well as how we might bring community participation back.
First published twenty years ago, Bowling Alone is an exhaustive, hefty work. In its 544 pages, Putnam negotiated mountains of data to support his thesis that the previous few decades had seen Americans retreat en masse from public life. Putnam argued Americans had become disconnected from their wider communities, as evidenced by changes such as a decline in civic engagement and dwindling membership rates for groups such as bowling leagues and PTAs.
Though aspects of Bowling Alone are a little dated today (“computer-mediated communication” isn’t a phrase you’re likely to have heard recently), a quick glance at 2021’s social landscape would suggest many of the trends Putnam described have only continued and apply in other parts of the world too.
Right now, polarization and social distancing have forced us apart from any sense of community to a degree that can seem irresolvable.
Will we ever bowl in leagues alongside near strangers and turn them into friends again? Will we ever bowl again at all, even if alone, or will those gleaming lanes, too-tight shoes, and overpriced sodas fade into a distant memory we recount to our children?
The idea of going into a public space for a non-essential reason can feel incredibly out of reach for many of us right now. And who knows how spaces like bowling alleys will survive in the long run without the social scenes that fuelled them. Now is a perfect time to revisit Bowling Alone to see what it can still teach us, because many of its warnings and lessons are perhaps more relevant now than at its time of publication.
One key lesson we can derive from Bowling Alone is that the less we trust each other—something which is both a cause and consequence of declining community engagement—the more it costs us. Mistrust is expensive.…(More)”
These crowdsourced maps will show exactly where surveillance cameras are watching
Mark Sullivan at FastCompany: “Amnesty International is producing a map of all the places in New York City where surveillance cameras are scanning residents’ faces.
The project will enlist volunteers to use their smartphones to identify, photograph, and locate government-owned surveillance cameras capable of shooting video that could be matched against people’s faces in a database through AI-powered facial recognition.
The map that will eventually result is meant to give New Yorkers the power of information against an invasive technology whose use and purpose are often not fully disclosed to the public. It’s also meant to put pressure on the New York City Council to write and pass a law restricting or banning it. Other U.S. cities, such as Boston, Portland, and San Francisco, have already passed such laws.
Facial recognition technology can be developed by scraping millions of images from social media profiles and driver’s licenses without people’s consent, Amnesty says. Software from companies like Clearview AI can then use computer vision algorithms to match those images against facial images captured by closed-circuit television (CCTV) or other video surveillance cameras and stored in a database.
Starting in May, volunteers will be able to use a software tool to identify all the facial recognition cameras within their view—like at an intersection where numerous cameras can often be found. The tool, which runs on a phone’s browser, lets users place a square around any cameras they see. The software integrates Google Street View and Google Earth to help volunteers label and attach geolocation data to the cameras they spot.
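To illustrate the kind of record such a tagging tool might produce, here is a hypothetical sketch (not Amnesty’s actual schema) of a volunteer-tagged camera stored as a GeoJSON feature, a format that is easy to plot on a crowdsourced map:

```python
# Hypothetical example of a volunteer-tagged camera recorded as a GeoJSON
# Feature; the property names are illustrative, not Amnesty's data model.
import json

camera_sighting = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        # GeoJSON order is [longitude, latitude]; example: a Manhattan intersection
        "coordinates": [-73.9857, 40.7484],
    },
    "properties": {
        "camera_type": "dome, attached to building",
        "owner": "government",
        "tagged_at": "2021-05-15T14:30:00Z",
    },
}

collection = {"type": "FeatureCollection", "features": [camera_sighting]}
with open("camera_sightings.geojson", "w") as f:
    json.dump(collection, f, indent=2)
```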
The map is part of a larger campaign called “Ban the Scan” that’s meant to educate people around the world on the civil rights dangers of facial recognition. Research has shown that facial recognition systems aren’t as accurate when it comes to analyzing dark-skinned faces, putting Black people at risk of being misidentified. Even when accurate, the technology exacerbates systemic racism because it is disproportionately used to identify people of color, who are already subject to discrimination by law enforcement officials. The campaign is sponsored by Amnesty in partnership with a number of other tech advocacy, privacy, and civil liberties groups.
In the initial phase of the project, which was announced last Thursday, Amnesty and its partners launched a website that New Yorkers can use to generate public comments on the New York Police Department’s (NYPD’s) use of facial recognition….(More)”.