Why we need responsible data for children


Andrew Young and Stefaan Verhulst at The Conversation: “…Without question, the increased use of data poses unique risks for and responsibilities to children. While practitioners may have well-intended purposes to leverage data for and about children, the data systems used are often designed with (consenting) adults in mind without a focus on the unique needs and vulnerabilities of children. This can lead to the collection of inaccurate and unreliable data as well as the inappropriate and potentially harmful use of data for and about children….

Research undertaken in the context of the RD4C initiative uncovered the following trends and realities. These issues make clear why we need a dedicated data responsibility approach for children.

  • Today’s children are the first generation growing up at a time of rapid datafication, where almost all aspects of their lives, both on- and offline, are turned into data points. An entire generation of young people is being datafied – often starting even before birth. Each year, the average child will have more data collected about them over their lifetime than a similar child born in any prior year. The potential uses of such large volumes of data, and their impact on children’s lives, are unpredictable, and the data could potentially be used against them.
  • Children typically do not have full agency to make decisions about their participation in programs or services which may generate and record personal data. Children may also lack the understanding to assess a decision’s purported risks and benefits. Privacy terms and conditions are often barely understood by educated adults, let alone children. As a result, there is a higher duty of care for children’s data.
  • Disaggregating data according to socio-demographic characteristics can improve service delivery and assist with policy development. However, it also creates risks for group privacy. Children can be identified, exposing them to possible harms. Disaggregated data for groups such as child-headed households and children experiencing gender-based violence can put vulnerable communities and children at risk. Data about children’s location itself can be risky, especially if they have some additional vulnerability that could expose them to harm.
  • Mishandling data can cause children to lose trust in the institutions that deliver essential services, including vaccines, medicine, and nutrition supplies. For organizations dealing with child well-being, such a loss of trust can have severe consequences: distrust can cause families and children to refuse health, education, child protection and other public services. Such privacy-protective behavior can affect children throughout the course of their lifetime, and potentially exacerbate existing inequities and vulnerabilities.
  • As volumes of collected and stored data increase, obligations and protections traditionally put in place for children may be difficult or impossible to uphold. The interests of children are not always prioritized when organizations define their legitimate interest to access or share personal information of children. The immediate benefit of a service provided does not always justify the risk or harm that might be caused by it in the future. Data analysis may be undertaken by people who do not have expertise in the area of child rights, as opposed to traditional research where practitioners are specifically educated in child subject research. Similarly, service providers collecting children’s data are not always specially trained to handle it, as international standards recommend.
  • Recent events around the world reveal the promise and pitfalls of algorithmic decision-making. While it can expedite certain processes, algorithms and their inferences can embed biases that have adverse effects on people – for example, those seeking medical care or attempting to secure jobs. The danger posed by algorithmic bias is especially pronounced for children and other vulnerable populations. These groups often lack the awareness or resources necessary to respond to instances of bias or to rectify misconceptions or inaccuracies in their data.
  • Many of the children served by child welfare organizations have suffered trauma. Whether that trauma is physical, social, or emotional in nature, repeatedly making children register for services or provide confidential personal information can amount to revictimization – re-exposing them to traumas or instigating unwarranted feelings of shame and guilt.

These trends and realities make clear the need for new approaches for maximizing the value of data to improve children’s lives, while mitigating the risks posed by our increasingly datafied society….(More)”.

The world after coronavirus


Yuval Noah Harari at the Financial Times: “Humankind is now facing a global crisis. Perhaps the biggest crisis of our generation. The decisions people and governments take in the next few weeks will probably shape the world for years to come. They will shape not just our healthcare systems but also our economy, politics and culture. We must act quickly and decisively. We should also take into account the long-term consequences of our actions.

When choosing between alternatives, we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes. Yes, the storm will pass, humankind will survive, most of us will still be alive — but we will inhabit a different world.  Many short-term emergency measures will become a fixture of life. That is the nature of emergencies. They fast-forward historical processes.

Decisions that in normal times could take years of deliberation are passed in a matter of hours. Immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger. Entire countries serve as guinea-pigs in large-scale social experiments. What happens when everybody works from home and communicates only at a distance? What happens when entire schools and universities go online? In normal times, governments, businesses and educational boards would never agree to conduct such experiments. But these aren’t normal times. 

In this time of crisis, we face two particularly important choices. The first is between totalitarian surveillance and citizen empowerment. The second is between nationalist isolation and global solidarity. 

Under-the-skin surveillance

In order to stop the epidemic, entire populations need to comply with certain guidelines. There are two main ways of achieving this. One method is for the government to monitor people, and punish those who break the rules. Today, for the first time in human history, technology makes it possible to monitor everyone all the time. Fifty years ago, the KGB couldn’t follow 240m Soviet citizens 24 hours a day, nor could the KGB hope to effectively process all the information gathered. The KGB relied on human agents and analysts, and it just couldn’t place a human agent to follow every citizen. But now governments can rely on ubiquitous sensors and powerful algorithms instead of flesh-and-blood spooks. 

In their battle against the coronavirus epidemic several governments have already deployed the new surveillance tools. The most notable case is China. By closely monitoring people’s smartphones, making use of hundreds of millions of face-recognising cameras, and obliging people to check and report their body temperature and medical condition, the Chinese authorities can not only quickly identify suspected coronavirus carriers, but also track their movements and identify anyone they came into contact with. A range of mobile apps warn citizens about their proximity to infected patients…

If I could track my own medical condition 24 hours a day, I would learn not only whether I have become a health hazard to other people, but also which habits contribute to my health. And if I could access and analyse reliable statistics on the spread of coronavirus, I would be able to judge whether the government is telling me the truth and whether it is adopting the right policies to combat the epidemic. Whenever people talk about surveillance, remember that the same surveillance technology can usually be used not only by governments to monitor individuals — but also by individuals to monitor governments. 

The coronavirus epidemic is thus a major test of citizenship….(More)”.

Beyond a Human Rights-based approach to AI Governance: Promise, Pitfalls, Plea


Paper by Nathalie A. Smuha: “This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it. Asserting that human rights can provide such compass, this paper first examines what a human rights-based approach to AI governance entails, and sets out the promise it propagates. Subsequently, it examines the pitfalls associated with human rights, particularly focusing on the criticism that these rights may be too Western, too individualistic, too narrow in scope and too abstract to form the basis of sound AI governance. After rebutting these reproaches, a plea is made to move beyond the calls for a human rights-based approach, and start taking the necessary steps to attain its realisation. It is argued that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretise those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose….(More)”.

World Justice Project (WJP) Rule of Law Index®


Interactive Overview: “The World Justice Project (WJP) Rule of Law Index® is the world’s leading source for original, independent data on the rule of law. Now covering 128 countries and jurisdictions, the Index relies on national surveys of more than 130,000 households and 4,000 legal practitioners and experts to measure how the rule of law is experienced and perceived around the world.

Effective rule of law reduces corruption, combats poverty and disease, and protects people from injustices large and small. It is the foundation for communities of justice, opportunity, and peace—underpinning development, accountable government, and respect for fundamental rights.

Learn more about the rule of law and explore the full WJP Rule of Law Index 2020 report, including PDF report download, data insights, methodology, and more at the Index report resources page….(More)”

CARE Principles for Indigenous Data Governance


The Global Indigenous Data Alliance: “The current movement toward open data and open science does not fully engage with Indigenous Peoples’ rights and interests. Existing principles within the open data movement (e.g. FAIR: findable, accessible, interoperable, reusable) primarily focus on characteristics of data that will facilitate increased data sharing among entities while ignoring power differentials and historical contexts. The emphasis on greater data sharing alone creates a tension for Indigenous Peoples, who are also asserting greater control over the application and use of Indigenous data and Indigenous Knowledge for collective benefit.

This includes the right to create value from Indigenous data in ways that are grounded in Indigenous worldviews and realise opportunities within the knowledge economy. The CARE Principles for Indigenous Data Governance are people and purpose-oriented, reflecting the crucial role of data in advancing Indigenous innovation and self-determination. These principles complement the existing FAIR principles encouraging open and other data movements to consider both people and purpose in their advocacy and pursuits….(More)”.

Policymaking in an Infomocracy


An interview with Malka Older: “…Nisa: There’s a line in your first book, “Democracy is of limited usefulness when there are no good choices, or when all the information access in the world can’t make people use it.” So imagine this world you’ve imagined has a much higher demand for free and accurate information access than we have now, in exchange for a fairly high amount of state surveillance. I’m curious what else we give up when we allow that amount of surveillance into our communities and whether that trade-off is necessary.

Malka: The amount of surveillance in the books is a very gentle extrapolation from where we are now. I don’t know if they need to be that connected but I do feel like privacy is a very relative concept. The way that we think of privacy now is very different than the way that it’s been thought of in the past, or the way it’s thought of in different places, and it’s very hard to put that back in the box. I was thinking more in terms of, since we are giving up our privacy anyway, what would I like to see done with all this information? Most of the types of surveillance that I mentioned are already very much in place. It’s hard to walk down the street without seeing surveillance cameras — they’re in private businesses, outside of apartment buildings, in lobbies, and buses and trains and pretty much everywhere. We already know that whatever we do online is recorded and tracked in some way. If we have smartphones—which I don’t, I’m trying to resist, although it’s getting harder and harder—pretty much all of our movements are being tracked that way. The difference from the book is that the current situation of surveillance is very fragmented, and a combination of private sector and public sector, as opposed to one monolithic organization. Although, it’s not clear how different it really is from our present when governments are able to subpoena information from the private sector. The other part is that we give away a lot of this information, if not all of it, whenever we accept the terms of service agreements. We’re basically saying, in exchange for having this cool phone, I will let you use my data. But we’re learning that companies are often going far beyond what we legally agreed to, and even what we legally agree to is done in such convoluted terms and there’s an imbalance of information to begin with. That’s really problematic. Rather than thinking in terms of privacy as a kind of absolute or in terms of surveillance, I tend to think more about who owns the data, who has access to the data. The real problem is not just that there are cameras everywhere, but that we don’t know who is watching those cameras or who is able to access those cameras at any given time. Similarly, the fact that all of our online data is being recorded is not necessarily a huge problem, except when we have no way of knowing what the data is contributing to when it’s amalgamated and no recourse or control over how it’s eventually used. The choice is between all the data we create in our online trails being in the hands of a corporation that does not need to share it or reveal it, and is using it to make money, and all of that data being available to everybody or held under some sort of very clear and equitable terms where we have much more choice about what it’s used for and where we could access our own data. For me, it’s very much about the power structures involved….(More)”.

How Philanthropy Can Help Lead on Data Justice


Louise Lief at Stanford Social Innovation Review: “Today, data governs almost every aspect of our lives, shaping the opportunities we have, how we perceive reality and understand problems, and even what we believe to be possible. Philanthropy is particularly data driven, relying on it to inform decision-making, define problems, and measure impact. But what happens when data design and collection methods are flawed, lack context, or contain critical omissions and misdirected questions? With bad data, data-driven strategies can misdiagnose problems and worsen inequities with interventions that don’t reflect what is needed.

Data justice begins by asking who controls the narrative. Who decides what data is collected and for which purpose? Who interprets what it means for a community? Who governs it? In recent years, affected communities, social justice philanthropists, and academics have all begun looking deeper into the relationship between data and social justice in our increasingly data-driven world. But philanthropy can play a game-changing role in developing practices of data justice to more accurately reflect the lived experience of communities being studied. Simply incorporating data justice principles into everyday foundation practice—and requiring it of grantees—would be transformative: It would not only revitalize research, strengthen communities, influence policy, and accelerate social change, it would also help address deficiencies in current government data sets.

When Data Is Flawed

Some of the most pioneering work on data justice has been done by Native American communities, who have suffered more than most from problems with bad data. A 2017 analysis of American Indian data challenges—funded by the W.K. Kellogg Foundation and the Morris K. Udall and Stewart L. Udall Foundation—documented how much data on Native American communities is of poor quality, inaccurate, inadequate, inconsistent, irrelevant, and/or inaccessible. The National Congress of American Indians even described American Native communities as “The Asterisk Nation,” because in many government data sets they are represented only by an asterisk denoting sampling errors instead of data points.

Where it concerns Native Americans, data is often not standardized and different government databases identify tribal members at least seven different ways using different criteria; federal and state statistics often misclassify race and ethnicity; and some data collection methods don’t allow tribes to count tribal citizens living off the reservation. For over a decade the Department of the Interior’s Bureau of Indian Affairs has struggled to capture the data it needs for a crucial labor force report it is legally required to produce; methodology errors and reporting problems have been so extensive that at times they prevented the report from even being published. But when the Department of the Interior changed several reporting requirements in 2014 and combined data submitted by tribes with US Census data, it only compounded the problem, making historical comparisons more difficult. Moreover, Native Americans have charged that the Census Bureau significantly undercounts both the American Indian population and key indicators like joblessness….(More)”.

Smarter government or data-driven disaster: the algorithms helping control local communities


Release by MuckRock: “What is the chance you, or your neighbor, will commit a crime? Should the government change a child’s bus route? Add more police to a neighborhood or take some away?

Everyday government decisions, from bus routes to policing, used to be based on limited information and human judgment. Governments now use the ability to collect and analyze hundreds of data points every day to automate many of their decisions.

Does handing government decisions over to algorithms save time and money? Can algorithms be fairer or less biased than human decision making? Do they make us safer? Automation and artificial intelligence could reduce the notorious inefficiencies of government, but they could also exacerbate existing errors in the data used to power them.

MuckRock and the Rutgers Institute for Information Policy & Law (RIIPL) have compiled a collection of algorithms used in communities across the country to automate government decision-making.

Go right to the database.

We have also compiled policies and other guiding documents local governments use to make room for the future use of algorithms. You can find those as a project on DocumentCloud.

View policies on smart cities and technologies

These collections are a living resource and attempt to communally collect records and known instances of automated decision making in government….(More)”.

An Algorithm That Grants Freedom, or Takes It Away


Cade Metz and Adam Satariano at The New York Times: “…In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm-makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia.”…(More)”.

If China valued free speech, there would be no coronavirus crisis


Verna Yu in The Guardian: “…Despite the flourishing of social media, information is more tightly controlled in China than ever. In 2013, an internal Communist party edict known as Document No 9 ordered cadres to tackle seven supposedly subversive influences on society. These included western-inspired notions of press freedom, “universal values” of human rights, civil rights and civic participation. Even within the Communist party, cadres are threatened with disciplinary action for expressing opinions that differ from the leadership.

Compared with 17 years ago, Chinese citizens enjoy even fewer rights of speech and expression. A few days after 34-year-old Wuhan doctor Li Wenliang posted a note in his medical school alumni social media group on 30 December, stating that seven workers from a local live-animal market had been diagnosed with an illness similar to Sars and were quarantined in his hospital, he was summoned by police. He was made to sign a humiliating statement saying he understood that if he “stayed stubborn and failed to repent and continue illegal activities, (he) will be disciplined by the law”….

Unless Chinese citizens’ freedom of speech and other basic rights are respected, such crises will only happen again. In a more globalised world, the magnitude may become even greater – the death toll from the coronavirus outbreak is already comparable to the total Sars death toll.

Human rights in China may appear to have little to do with the rest of the world but as we have seen in this crisis, disaster could occur when China thwarts the freedoms of its citizens. Surely it is time the international community takes this issue more seriously….(More)”.