An Algorithm That Grants Freedom, or Takes It Away


Cade Metz and Adam Satariano at The New York Times: “…In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flags welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm-makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of decision-making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia.”…(More)”.
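
The opacity the critics describe is easy to make concrete. A risk score can be a simple weighted formula, yet remain a black box to the person it judges if the features and weights are never disclosed. The sketch below is purely hypothetical (the features, weights, and numbers are invented for illustration and describe no real jurisdiction's model), but it shows how a factor such as ZIP code can silently tilt a score:

```python
import math

# Hypothetical feature weights, invented for illustration only.
# In deployed systems, these are precisely what is rarely disclosed.
WEIGHTS = {
    "prior_arrests": 0.8,
    "age_under_25": 0.5,
    "zip_code_flagged": 0.6,  # a geographic proxy that can encode race and class
}
BIAS = -2.0

def risk_score(person: dict) -> float:
    """Logistic score in [0, 1] from a weighted sum of features."""
    z = BIAS + sum(w * person.get(f, 0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Two defendants identical in every respect except where they live.
a = {"prior_arrests": 1, "age_under_25": 1, "zip_code_flagged": 0}
b = {"prior_arrests": 1, "age_under_25": 1, "zip_code_flagged": 1}

print(round(risk_score(a), 2))  # 0.33
print(round(risk_score(b), 2))  # 0.48, higher purely because of ZIP code
```

If only the final number reaches the hearing, neither the defendant nor the court can see that the ZIP-code term did the work, which is exactly the transparency gap that groups like Silicon Valley De-Bug are trying to counterbalance.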

If China valued free speech, there would be no coronavirus crisis


Verna Yu in The Guardian: “…Despite the flourishing of social media, information is more tightly controlled in China than ever. In 2013, an internal Communist party edict known as Document No 9 ordered cadres to tackle seven supposedly subversive influences on society. These included western-inspired notions of press freedom, “universal values” of human rights, civil rights and civic participation. Even within the Communist party, cadres are threatened with disciplinary action for expressing opinions that differ from the leadership.

Compared with 17 years ago, Chinese citizens enjoy even fewer rights of speech and expression. A few days after Li Wenliang, a 34-year-old doctor, posted a note in his medical school alumni social media group on 30 December, stating that seven workers from a local live-animal market had been diagnosed with an illness similar to Sars and were quarantined in his hospital, he was summoned by police. He was made to sign a humiliating statement saying he understood that if he “stayed stubborn and failed to repent and continue illegal activities, (he) will be disciplined by the law”….

Unless Chinese citizens’ freedom of speech and other basic rights are respected, such crises will only happen again. In a more globalised world, the magnitude may become even greater – the death toll from the coronavirus outbreak is already comparable to the total Sars death toll.

Human rights in China may appear to have little to do with the rest of the world, but as we have seen in this crisis, disaster could occur when China thwarts the freedoms of its citizens. Surely it is time the international community took this issue more seriously….(More)”.

Why the Global South should nationalise its data


Ulises Ali Mejias at Al Jazeera: “The recent coup in Bolivia reminds us that poor countries rich in resources continue to be plagued by the legacy of colonialism. Anything that stands in the way of a foreign corporation’s ability to extract cheap resources must be removed.

Today, apart from minerals and fossil fuels, corporations are after another precious resource: personal data. As with natural resources, data too has become the target of extractive corporate practices.

As sociologist Nick Couldry and I argue in our book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, there is a new form of colonialism emerging in the world: data colonialism. By this, we mean a new resource-grab whereby human life itself has become a direct input into economic production in the form of extracted data.

We acknowledge that this term is controversial, given the extreme physical violence and structures of racism that historical colonialism employed. However, our point is not to say that data colonialism is the same as historical colonialism, but rather to suggest that it shares the same core function: extraction, exploitation, and dispossession.

Like classical colonialism, data colonialism violently reconfigures human relations to economic production. Things like land, water, and other natural resources were valued by native people in the precolonial era, but not in the same way that colonisers (and later, capitalists) came to value them: as private property. Likewise, we are experiencing a situation in which things that were once primarily outside the economic realm – things like our most intimate social interactions with friends and family, or our medical records – have now been commodified and made part of an economic cycle of data extraction that benefits a few corporations.

So what could countries in the Global South do to avoid the dangers of data colonialism?…(More)”.

Human Rights in the Age of Platforms


Book edited by Rikke Frank Jørgensen: “Today, such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.

Steering AI and Advanced ICTs for Knowledge Societies: a Rights, Openness, Access, and Multi-stakeholder Perspective


Report by UNESCO: “Artificial Intelligence (AI) is increasingly becoming the veiled decision-maker of our times. The diverse technical applications loosely associated with this label drive more and more of our lives. They scan billions of web pages, digital trails and sensor-derived data within microseconds, using algorithms to prepare and produce significant decisions.

AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of Information and Communications Technology (ICT). This is a major opportunity for Sustainable Development, although risks also need to be addressed.

It should be noted that the development of AI technology is part of the wider ecosystem of Internet and other advanced ICTs including big data, Internet of Things, blockchains, etc. To assess AI and other advanced ICTs’ benefits and challenges – particularly for communications and information – a useful approach is UNESCO’s Internet Universality ROAM principles. These principles urge that digital development be aligned with human Rights, Openness, Accessibility and Multi-stakeholder governance to guide the ensemble of values, norms, policies, regulations, codes and ethics that govern the development and use of AI….(More)”

Contract for the Web


About: “The Web was designed to bring people together and make knowledge freely available. It has changed the world for good and improved the lives of billions. Yet, many people are still unable to access its benefits and, for others, the Web comes with too many unacceptable costs.

Everyone has a role to play in safeguarding the future of the Web. The Contract for the Web was created by representatives from over 80 organizations, representing governments, companies and civil society, and sets out commitments to guide digital policy agendas. To achieve the Contract’s goals, governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract’s text…(More)”.

The Right to Be Seen


Anne-Marie Slaughter and Yuliya Panfil at Project Syndicate: “While much of the developed world is properly worried about myriad privacy outrages at the hands of Big Tech and demanding – and securing – for individuals a “right to be forgotten,” many around the world are posing a very different question: What about the right to be seen?

Just ask the billion people who are locked out of services we take for granted – things like a bank account, a deed to a house, or even a mobile phone account – because they lack identity documents and thus can’t prove who they are. They are effectively invisible as a result of poor data.

The ability to exercise many of our most basic rights and privileges – such as the right to vote, drive, own property, and travel internationally – is determined by large administrative agencies that rely on standardized information to determine who is eligible for what. For example, to obtain a passport it is typically necessary to present a birth certificate. But what if you do not have a birth certificate? To open a bank account requires proof of address. But what if your house doesn’t have an address?

The inability to provide such basic information is a barrier to stability, prosperity, and opportunity. Invisible people are locked out of the formal economy, unable to vote, travel, or access medical and education benefits. It’s not that they are undeserving or unqualified; it’s that they are data poor.

In this context, the rich digital record provided by our smartphones and other sensors could become a powerful tool for good, so long as the risks are acknowledged. These gadgets, which have become central to our social and economic lives, leave a data trail that for many of us is the raw material that fuels what Harvard’s Shoshana Zuboff calls “surveillance capitalism.” Our Google location history shows exactly where we live and work. Our email activity reveals our social networks. Even the way we hold our smartphone can give away early signs of Parkinson’s.

But what if citizens could harness the power of these data for themselves, to become visible to administrative gatekeepers and access the rights and privileges to which they are entitled? Their virtual trail could then be converted into proof of physical facts.

That is beginning to happen. In India, slum dwellers are using smartphone location data to put themselves on city maps for the first time and obtain addresses that they can then use to receive mail and register for government IDs. In Tanzania, citizens are using their mobile payment histories to build their credit scores and access more traditional financial services. And in Europe and the United States, Uber drivers are fighting for their rideshare data to advocate for employment benefits….(More)”.
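
Turning a location trail into evidence of residence, as in the Indian example above, is conceptually simple. Here is a minimal sketch, assuming timestamped GPS fixes exported from a phone; the nighttime window, grid size, and crude clustering rule are illustrative assumptions, not a description of any deployed system:

```python
from collections import Counter
from datetime import datetime

def likely_home(fixes, night=(22, 6), grid=0.001):
    """Return the most frequent nighttime location from (ISO timestamp, lat, lon) fixes.

    grid is the cell size in degrees (roughly 100 m), a deliberately crude clustering.
    """
    cells = Counter()
    for ts, lat, lon in fixes:
        hour = datetime.fromisoformat(ts).hour
        if hour >= night[0] or hour < night[1]:  # keep overnight fixes only
            cells[(round(lat / grid), round(lon / grid))] += 1
    (cell_lat, cell_lon), _ = cells.most_common(1)[0]
    return cell_lat * grid, cell_lon * grid

fixes = [
    ("2020-01-01T23:10", 12.9716, 77.5946),  # night, at home
    ("2020-01-02T02:40", 12.9717, 77.5946),  # night, at home
    ("2020-01-02T13:00", 12.9352, 77.6245),  # daytime, elsewhere: ignored
]
print(likely_home(fixes))  # approx (12.972, 77.595)
```

The design point is that the same data trail that fuels surveillance capitalism can, when the person controls it, serve as proof of a physical fact such as an address.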

Surveillance giants: how the business model of Google and Facebook threatens human rights


Report by Amnesty International: “Google and Facebook help connect the world and provide crucial services to billions. To participate meaningfully in today’s economy and society, and to realize their human rights, people rely on access to the internet—and to the tools Google and Facebook offer. But Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model is inherently incompatible with the right to privacy and poses a threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination….(More)”.

Responsible Data for Children


New Site and Report by UNICEF and The GovLab: “RD4C seeks to build awareness regarding the need for special attention to data issues affecting children—especially in this age of changing technology and data linkage; and to engage with governments, communities, and development actors to put the best interests of children and a child rights approach at the center of our data activities. The right data in the right hands at the right time can significantly improve outcomes for children. The challenge is to understand the potential risks and ensure that the collection, analysis and use of data on children does not undermine these benefits.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support best-practice data responsibility; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management….(More)”.

Digital human rights are next frontier for fund groups


Siobhan Riding at the Financial Times: “The sight of politicians publicly grilling technology chiefs such as Facebook’s Mark Zuckerberg is all too familiar for investors. “There isn’t a day that goes by where you don’t see one of the tech companies talking to Congress or being highlighted for some kind of controversy,” says Lauren Compere, director of shareholder engagement at Boston Common Asset Management, a $2.4bn fund group that invests heavily in tech stocks.

Fallout from the Cambridge Analytica scandal that engulfed Facebook was a wake-up call for investors such as Boston Common, underlining the damaging social effects of digital technology if left unchecked. “These are the red flags coming up for us again and again,” says Ms Compere.

Digital human rights are fast becoming the latest front in the debate around fund managers’ ethical investments efforts. Fund managers have come under pressure in recent years to divest from companies that can harm human rights — from gun manufacturers or retailers to operators of private prisons. The focus is now switching to the less tangible but equally serious human rights risks lurking in fund managers’ technology holdings. Attention on technology groups began with concerns around data privacy, but emerging focal points are targeted advertising and how companies deal with online extremism.

Following a terrorist attack in New Zealand this year, in which the shooter posted video footage of the incident online, investors managing assets of more than NZ$90bn (US$57bn) urged Facebook, Twitter and Alphabet, Google’s parent company, to take more action in dealing with violent or extremist content published on their platforms. The Investor Alliance for Human Rights is currently co-ordinating a global engagement effort with Alphabet over the governance of its artificial intelligence technology, data privacy and online extremism.

Investor engagement on the topic of digital human rights is in its infancy. One roadblock for investors has been the difficulty they face in detecting and measuring what the actual risks are. “Most investors do not have a very good understanding of the implications of all of the issues in the digital space and don’t have sufficient research and tools to properly assess them — and that goes for companies too,” says Ms Compere.

One rare resource available is the Ranking Digital Rights Corporate Accountability Index, established in 2015, which rates tech companies based on a range of metrics. The development of such tools gives investors more information on the risk associated with technological advancements, enabling them to hold companies to account when they identify risks and questionable ethics….(More)”.