About: “Over the past seven months the team at the Change.org Foundation have been working from home to support campaigns created in response to COVID-19. During this unprecedented time in history, millions of people, more than ever before, used our platform to share their stories and fight for their communities.
The Pandemic Report 2020 is born out of the need to share those stories with the world. We assembled a cross-functional team within the Foundation to dig into our platform data. We spotted trends, followed patterns, and learned from the analysis contributed by country teams.
This work began with the hypothesis that the Coronavirus pandemic may have opened a new chapter in the history of digital activism.
The data points to a new era, with the pandemic acting as a catalyst for citizen engagement worldwide….(More)”.
Abeba Birhane at The Elephant: “The African equivalents of Silicon Valley’s tech start-ups can be found in every possible sphere of life, in all corners of the continent—in “Sheba Valley” in Addis Abeba, “Yabacon Valley” in Lagos, and “Silicon Savannah” in Nairobi, to name a few—all pursuing “cutting-edge innovations” in sectors like banking, finance, healthcare, and education. They are headed by technologists and financiers from both within and outside the continent who seemingly want to “solve” society’s problems, using data and AI to provide quick “solutions”. Yet the attempt to “solve” social problems with technology is exactly where problems arise. Complex cultural, moral, and political problems that are inherently embedded in history and context are reduced to problems that can be measured and quantified—matters that can be “fixed” with the latest algorithm.
As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation. Humans, however, far from being passive objects, are active meaning-seekers embedded in dynamic social, cultural, and historical backgrounds.
The discourse around “data mining”, “abundance of data”, and “data-rich continent” shows the extent to which the individual behind each data point is disregarded. This muting of the individual—a person with fears, emotions, dreams, and hopes—is symptomatic of how little attention is given to matters such as people’s well-being and consent, which should be the primary concerns if the goal is indeed to “help” those in need. Furthermore, this discourse of “mining” people for data is reminiscent of the coloniser’s attitude that declares humans as raw material free for the taking. Data is necessarily always about something and never about an abstract entity.
The collection, analysis, and manipulation of data potentially entails monitoring, tracking, and surveilling people. This necessarily impacts people directly or indirectly, whether it manifests as a change in their insurance premiums or a refusal of services. The erasure of the person behind each data point makes it easy to “manipulate behavior” or “nudge” users, often towards outcomes that are profitable for companies. Considerations around the wellbeing and welfare of the individual user, the long-term social impacts, and the unintended consequences of these systems on society’s most vulnerable are pushed aside, if they enter the equation at all. For companies that develop and deploy AI, the top of the agenda is the collection of more data to build profitable AI systems, not the welfare of individual people or communities. This is most evident in the FinTech sector, one of the most prominent digital markets in Africa. People’s digital footprints, from their interactions with others to how much they spend on mobile top-ups, are continually surveyed and monitored to generate data for loan assessments. Smartphone data from browsing history, likes, and locations is recorded, forming the basis of a borrower’s creditworthiness.
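To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of scoring pipeline the essay describes. Every field name, weight, and signal below is an illustrative assumption, not a description of any real lender’s system:

```python
# Hypothetical sketch: smartphone signals reduced to numeric features and
# fed to a scoring rule. All names and weights here are invented for
# illustration; no real lender's model is being reproduced.
from dataclasses import dataclass

@dataclass
class PhoneProfile:
    monthly_topup_usd: float        # airtime spending
    contacts_count: int             # size of social graph
    nightly_location_changes: int   # mobility proxy from location data
    loan_apps_installed: int        # proxy from browsing/app history

def credit_score(p: PhoneProfile) -> float:
    """Toy linear score; a real system would be an opaque ML model."""
    score = 500.0
    score += 2.0 * p.monthly_topup_usd         # spending read as ability to repay
    score += 0.1 * p.contacts_count            # sociality read as stability
    score -= 5.0 * p.nightly_location_changes  # mobility read as risk
    score -= 15.0 * p.loan_apps_installed      # comparison shopping read as distress
    return score

print(credit_score(PhoneProfile(12.5, 340, 3, 2)))
```

The point of the sketch is the opacity: the borrower never learns which of these proxies moved their score, or why.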
Artificial Intelligence technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector, whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society is therefore the least of their concerns, especially if such practice gets in the way of “mining” data, building predictive models, and pushing products to customers. As decision-making about social outcomes is handed over to predictive systems developed by profit-driven corporations, we are allowing not only our social concerns but also our moral questions to be dictated by corporate interest.
“Digital nudges”, behaviour modifications developed to suit commercial interests, are a prime example. As “nudging” mechanisms become the norm for “correcting” individuals’ behaviour, eating habits, or exercise routines, those developing predictive models are bestowed with the power to decide what “correct” is. In the process, individuals who do not fit our stereotypical ideas of a “fit body”, “good health”, and “good eating habits” end up being punished, outcast, and pushed further to the margins. When these models are imported as state-of-the-art technology that will save money and “leapfrog” the continent into development, Western values and ideals are enforced, whether deliberately or not….(More)”.
Daniel Avelar at Open Democracy: “As Brazil faces one of the worst COVID-19 outbreaks in the world, a smartphone app is helping residents of impoverished areas known as favelas survive the virus threat amid sudden mass unemployment.
So far, the Latin American country has recorded over 115,000 deaths caused by COVID-19. The shutdown of economic activity wiped out 7.8 million jobs, mostly affecting the low-skilled informal workers who form the bulk of the population in the favelas. Emergency income distributed by the government is limited to 60% of the minimum wage, so families are struggling to make ends meet.
Many blame president Jair Bolsonaro for the tragedy. Bolsonaro, a far-right populist, has consistently railed against science-based policies in the management of the pandemic and pushed for an end to stay-at-home orders. A premature reopening of the economy is likely to increase infection rates and cause more deaths.
In an attempt to stop the looming humanitarian catastrophe, a coalition of favela activists and corporate partners developed an app that is facilitating the distribution of food and emergency income to thousands of women heading households. The app has a facial recognition feature that helps volunteers identify and register aid recipients and prevents fraud.
So far, the Favela Mothers project has distributed the equivalent of US$26 million in food parcels and cash allowances to more than 1.1 million families in 5,000 neighborhoods across the country….(More)”.
OpenCorporates: “…there are three other aspects which are relevant when talking about access to EU company data.
Cargo-culting GDPR
The first is a tendency to take the complex and subtle legislation that is GDPR and reuse a poorly understood version of it in other legislation and regulation, even when the area is already covered by GDPR. This actually undermines the GDPR regime, prevents it from working effectively, and should be strongly resisted. In the tech world, such approaches are called ‘cargo-culting’.
Similarly, GDPR is often used as an excuse for not releasing company information as open data, even when the same data is being sold to third parties apparently without concern — if one is covered by GDPR, the other certainly should be.
Widened power asymmetries
The second issue is the unintended consequences of GDPR, specifically the way it increases asymmetries of power and agency. For example, something like the so-called Right To Be Forgotten takes very significant resources to implement, and so actually strengthens the position of the giant tech companies — for such companies, investing millions in large teams to decide who should and should not be given the Right To Be Forgotten is just a relatively small cost of doing business.
Another issue is the growth of a whole new industry dedicated to removing traces of people’s past from the internet (2), which is also increasing the asymmetries of power. The vast majority of people are not directors of companies, or beneficial owners; it is only the relatively rich and powerful (including politicians and criminals) who can afford lawyers to stifle free speech, or to remove parts of their past they would rather were not public, from business failures to associations with criminals.
OpenCorporates, for example, was threatened with a lawsuit by a member of one of the wealthiest families in Europe for reproducing a notice from the Luxembourg official gazette (a publication that contains public notices). We refused to back down, believing we had a good case in law and in the public interest, and the other side gave up. But such so-called SLAPP suits (strategic lawsuits against public participation) are becoming increasingly common, and unlike in many US states there are currently no defences against them in the EU, despite pressure from civil society to address this….
At the same time, the automatic assumption that all Personally Identifiable Information (PII), someone’s name for example, is private is highly problematic, confusing both citizens and policy makers, and further undermining democracies and fair societies. As an obvious case, it’s critical that we know the names of our elected representatives and of those in positions of power; otherwise we would have an opaque society where decisions are made by nameless individuals with hidden agendas and personal interests — such as a leader awarding a contract to their brother’s company, for example.
As a diagram in the original post illustrates, there is some personally identifiable information that it’s strongly in the public interest to know. Take the director or beneficial owner of a company, for example: of course their details are PII — clearly you need to know their name (and other information too), otherwise what do you actually know about them, or the company (only that some unnamed individual has been given special protection under law to be shielded from the company’s debts and actions, and yet can benefit from its profits)?
On the other hand, much of the data which is truly about our privacy — the profiles, inferences and scores that companies store on us — is explicitly outside GDPR, if it doesn’t contain PII.
Hopefully, as awareness of these issues increases, we will develop a deeper, more nuanced understanding of privacy, such that case law around GDPR, and the successors to this legislation, begin to rebalance these concerns and bring clarity to the GDPR’s ambiguities….(More)”.
Hye Jung Han at Politico: “…Education systems across Europe struggled this year with how to determine students’ all-important final grades. But one system, the International Baccalaureate (“IB”) — a high school program that is highly regarded by European universities, and offered by both public and private schools in 152 countries — did something unusual.
Having canceled final exams, which make up the majority of an IB student’s grade, the Geneva-based foundation of the same name hastily built an algorithm that used a student’s coursework scores, teachers’ predicted grades, and their school’s historical IB results to guess what students might have scored had they taken their exams in a hypothetical, pandemic-free year. The result of the algorithm became the student’s final grade.
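The IB has not published its exact formula, but a deliberately simplified sketch of the mechanism described above (blend coursework with teacher predictions, then adjust toward the school’s historical results) might look like the following; every weight here is an illustrative assumption:

```python
# Toy model of a school-history-adjusted grade prediction. The blend weights
# and the adjustment factor are assumptions for illustration only; this is
# not the IB's actual algorithm.
def predicted_final_grade(coursework: float,
                          teacher_prediction: float,
                          school_historical_mean: float,
                          cohort_mean: float) -> float:
    """All inputs and the output are on the IB 1-7 grade scale."""
    base = 0.4 * coursework + 0.6 * teacher_prediction  # assumed blend
    # Pull the estimate toward the school's past results; this is the step
    # that marks down strong students at historically weak schools.
    adjustment = 0.5 * (school_historical_mean - cohort_mean)
    return max(1.0, min(7.0, base + adjustment))

# A top student (7/7) at a school whose past cohorts averaged 4.2, in a
# cohort predicted to average 5.0, loses nearly half a grade:
print(predicted_final_grade(7.0, 7.0, 4.2, 5.0))  # 6.6, not 7
```

Even in this toy version, the school-history adjustment caps what a strong student at a historically weak school can receive, which is exactly the pattern of mismatch that emerged.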
The results were catastrophic. Soon after the grades were released, serious mismatches emerged between the grades expected on the basis of students’ prior performance and those awarded by the algorithm. Because IB students’ university admissions are contingent upon their final grades, the unexpectedly poor grades generated for some students resulted in scholarships and admissions offers being revoked…
The IB had alternatives. It could have used students’ actual academic performance and graded on a generous curve. It could have incorporated practice test grades, third-party moderation to minimize grading bias, and teachers’ broader evaluations of student progress.
It could have engaged with universities on flexibly factoring final grades into this year’s admissions decisions, as universities contemplate opening their now-virtual classes to more students to replace lost revenue.
It increasingly seems that the greatest potential of the power promised by predictive data lies in the realm of misuse.
For this year’s graduating class, who have already responded with grace and resilience in their final year of school, the automating away of their capacity and potential is an unfair and unwanted preview of the world they are graduating into….(More)”.
White Paper by Phoebe Higgins & Timothy Male: “Late in 2017, the United Kingdom’s energy regulator, Ofgem, gave fast approval for a new project allowing residents to buy and sell renewable energy from solar panels and batteries within their own apartment buildings. Normally, this would not be legal since UK energy rules dictate that locally generated energy can only be used by the owner or sold back to the grid at a relatively low price. However, the earlier establishment of a regulatory sandbox for such energy delivery modernizations created a path to try something new and get it approved quickly. In April 2018, only a few months after project initiation, the first peer-to-peer energy trades within apartment complexes started.
Energy policy is not the only space where rules need fast modification to accommodate all the novelty arising in the world today. The protection and restoration of our water, healthy soil, and wildlife resources are static processes, starved for creativity. A United Nations panel recently reported on the extinction risks facing more than one million species around the globe. In its 2009 National Rivers and Streams Assessment, the EPA reported that 46 percent of U.S. waterways were in ‘poor’ biological condition, and more than 40 percent were polluted with high levels of nitrogen or phosphorus.
Innovators have big ideas that could help with these problems, but ponderous regulatory systems and older generations of bureaucrats aren’t used to the fast pace of new technologies, tools and products. Often, it is a simple thing—one word or phrase in a policy or regulation—that is a barrier to a new technology or technique being widely used. However, one sentence can be just as hard and slow to change as a whole law. Rather than simply accept this regulatory status quo, we believe in the need to find, nurture and learn from new concepts even when it means deliberately breaking old rules.
Regulatory sandboxes like the one in the United Kingdom open the door to testing new approaches within a controlled environment. While they don’t ensure success, they make it possible for new technologies and tools to be explored in real-world settings, not just so that innovators can learn, but also so that government bureaucracies can catch up to the present and adapt to the future. Our planet and country need more opportunities to do this….(More)”
Book edited by Maria Tzanou: “The growth of data-collecting goods and services, such as ehealth and mhealth apps, smart watches, mobile fitness and dieting apps, electronic skin and ingestible tech, combined with recent technological developments such as increased data storage capacity, artificial intelligence and smart algorithms, has spawned a big data revolution that has reshaped how we understand and approach health data. Recently the COVID-19 pandemic has foregrounded a variety of data privacy issues. The collection, storage, sharing and analysis of health-related data raises major legal and ethical questions relating to privacy, data protection, profiling, discrimination, surveillance, personal autonomy and dignity.
This book examines health privacy questions in light of the GDPR and the EU’s general data privacy legal framework. The GDPR is a complex and evolving body of law that aims to deal with several technological and societal health data privacy problems, while safeguarding public health interests and addressing its internal gaps and uncertainties. The book answers a diverse range of questions including: What role can the GDPR play in regulating health surveillance and big (health) data analytics? Can it catch up with Internet-age developments? Are the solutions to the challenges posed by big health data to be found in the law? Does the GDPR provide adequate tools and mechanisms to ensure public health objectives and the effective protection of privacy? How does the GDPR deal with data that concern children’s health and academic research?
By analysing a number of diverse questions concerning big health data under the GDPR from a variety of perspectives, this book will appeal to those interested in privacy, data protection, big data, health sciences, information technology, the GDPR, EU and human rights law….(More)”.
Essay by Yasodara Córdova and Tiago Peixoto: “According to the World Bank’s Digital Dividends report, fewer than 20 percent of digital government projects are successes. Particularly in developing countries, these failures are often attributed to a number of challenges: limited funding, stretched implementation capacity, and political instability, to name a few. Yet, even in developing countries facing similar conditions, some projects seem to fare better than others. Why is that?
The projects we have worked on in the global south have followed a similar pattern. While there have been successes, many projects have failed. We have learned a few things along the way that we think relate directly to the success or failure of digital government projects. These are not scientific conclusions; they are personal impressions based on what we’ve seen and experienced.
1. Information first, services afterwards
A basic function of digital government is the provision of actionable information about public services, be they online or offline (e.g. opening hours, documents required for services, and so on). This is even more true in developing countries, where most public services are in-person, paper-based, and often involve multiple steps. Yet, fueled by international rankings and benchmarks, governments are often eager to skip stages in their digital journey. This leads them to attempt, and often fail, to provide transactional digital services before they have even learned how to offer basic information about those services. The first step in effective transformation should be offering information to users in a simple and accessible manner. Done well, that forms a good foundation for the next step: delivering digital services.
2. Prioritise the things that will make the biggest difference
Remember that public service delivery follows a power-law distribution: a small number of services account for the vast majority of transactions with government. Which services these are will vary by country, level of government, and model of public service delivery. When the time comes to decide where to start, don’t rely on cookie-cutter lists of services to be digitized. Instead, find out which ones are the most used and will have the greatest impact, as in the sketch below. Start with the ones that can be delivered faster, and that are most likely to make users’ lives easier.
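To illustrate the prioritisation step, here is a minimal sketch in Python; the service names and transaction counts are invented for the example:

```python
# Rank services by transaction volume and find the small head of the
# power-law distribution that covers most demand. All figures are made up.
services = {
    "ID card renewal": 1_200_000,
    "vehicle registration": 800_000,
    "birth certificate": 500_000,
    "business licence": 90_000,
    "fishing permit": 4_000,
    "archive research request": 600,
}

total = sum(services.values())
cumulative = 0
shortlist = []
for name, count in sorted(services.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    shortlist.append(name)
    if cumulative / total >= 0.9:  # head of the distribution: ~90% of demand
        break

print(shortlist)  # the few services worth digitising first
```

With these illustrative numbers, three services out of six cover more than 90 percent of all transactions, which is the pattern that makes a short, evidence-based priority list possible.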
3. Don’t digitise the mess
The fact that a process exists doesn’t mean it’s a good process. Transformation is an opportunity to radically rethink how things work. We’ve seen examples such as requiring multiple copies of a single document, or imposing more procedures on women than on men to open a business. When there is inefficiency in a service, map the bottlenecks and think about how to streamline the process. Don’t just digitise the bottlenecks; they will remain an expensive problem. Resist the temptation to digitise things that should not exist in the first place. …(More)”.
Open access book by Matthias C. Kettemann: “There is order on the internet, but how has this order emerged and what challenges will threaten and shape its future? This study shows how a legitimate order of norms has emerged online, through both national and international legal systems. It establishes the emergence of a normative order of the internet, an order which explains and justifies processes of online rule and regulation. This order integrates norms at three different levels (regional, national, international), of two types (privately and publicly authored), and of different character (from ius cogens to technical standards).
Matthias C. Kettemann assesses their internal coherence, their consonance with other order norms and their consistency with the order’s finality. The normative order of the internet is based on and produces a liquefied system characterized by self-learning normativity. In light of the importance of the socio-communicative online space, this is a book for anyone interested in understanding the contemporary development of the internet….(More)”.
Book edited by Linnet Taylor, Aaron Martin, Gargi Sharma and Shazade Jameson: “In early 2020, as the COVID-19 pandemic swept the world and states of emergency were declared by one country after another, the global technology sector—already equipped with unprecedented wealth, power, and influence—mobilised to seize the opportunity. This collection is an account of what happened next and captures the emergent conflicts and responses around the world. The essays provide a global perspective on the implications of these developments for justice: they make it possible to compare how the intersection of state and corporate power—and the way that power is targeted and exercised—confronts, and invites resistance from, civil society in countries worldwide.
This edited volume captures the technological response to the pandemic in 33 countries, accompanied by nine thematic reflections, and reflects the unfolding of the first wave of the pandemic.
This book can be read as a guide to the landscape of technologies deployed during the pandemic and can also be used to compare individual country strategies. It will prove useful as a tool for teaching and learning in various disciplines and as a reference point for activists and analysts interested in issues of data justice.
The essays interrogate these technologies and the political, legal, and regulatory structures that determine how they are applied. In doing so, the book exposes the workings of state technological power to critical assessment and contestation….(More)”