New UN resolution on the right to privacy in the digital age: crucial and timely


Deborah Brown at the Internet Policy Review: “The rapid pace of technological development enables individuals all over the world to use new information and communications technologies (ICTs) to improve their lives. At the same time, technology is enhancing the capacity of governments, companies and individuals to undertake surveillance, interception and data collection, which may violate or abuse human rights, in particular the right to privacy. In this context, the UN General Assembly’s Third Committee’s adoption on 21 November of a new resolution on the right to privacy in the digital age is both timely and crucial for protecting the right to privacy in light of new challenges.

As with previous UN resolutions on this topic, the resolution adopted on 21 November 2016 recognises the importance of respecting international commitments in relation to the right to privacy. It underscores that any legitimate concerns states may have with regard to their security can and should be addressed in a manner consistent with obligations under international human rights law.

Recognising that more and more personal data is being collected, processed, and shared, this year’s resolution expresses concern about the sale or multiple re-sales of personal data, which often happens without the individual’s free, explicit and informed consent. It calls for the strengthening of prevention of and protection against such violations, and calls on states to develop preventative measures, sanctions, and remedies.

This year, the resolution more explicitly acknowledges the role of the private sector. It calls on states to put in place (or maintain) effective sanctions and remedies to prevent the private sector from committing violations and abuses of the right to privacy. This is in line with states’ obligations under the UN Guiding Principles on Business and Human Rights, which require states to protect against abuses by businesses within their territories or jurisdictions. The resolution specifically calls on states to refrain from requiring companies to take steps that interfere with the right to privacy in an arbitrary or unlawful way. With respect to companies, it recalls the responsibility of the private sector to respect human rights, and specifically calls on them to inform users about company policies that may impact their right to privacy….(More)”

New Institute Pushes the Boundaries of Big Data


Press Release: “Each year thousands of genomes are sequenced, millions of neuronal activity traces are recorded, and light from hundreds of millions of galaxies is captured by our newest telescopes, all creating datasets of staggering size. These complex datasets are then stored for analysis.

Ongoing analysis of these information streams has illuminated a problem, however: Scientists’ standard methodologies are inadequate to the task of analyzing massive quantities of data. The development of new methods and software to learn from data and to model — at sufficient resolution — the complex processes they reflect is now a pressing concern in the scientific community.

To address these challenges, the Simons Foundation has launched a substantial new internal research group called the Flatiron Institute (FI). The FI is the first multidisciplinary institute focused entirely on computation. It is also the first center of its kind to be wholly supported by private philanthropy, providing a permanent home for up to 250 scientists and collaborating expert programmers all working together to create, deploy and support new state-of-the-art computational methods. Few existing institutions support the combination of scientists and programmers, instead leaving programming to relatively impermanent graduate students and postdoctoral fellows, and none have done so at the scale of the Flatiron Institute or with such a broad scope, at a single location.

The institute will hold conferences and meetings and serve as a focal point for computational science around the world….(More)”.

Governance and Service Delivery: Practical Applications of Social Accountability Across Sectors


Book edited by Derick W. Brinkerhoff, Jana C. Hertz, and Anna Wetterberg: “…Historically, donors and academics have sought to clarify what makes sectoral projects effective and sustainable contributors to development. Among the key factors identified have been (1) the role and capabilities of the state and (2) the relationships between the state and citizens, phenomena often lumped together under the broad rubric of “governance.” Given the importance of a functioning state and positive interactions with citizens, donors have treated governance as a sector in its own right, with projects ranging from public sector management reform, to civil society strengthening, to democratization (Brinkerhoff, 2008). The link between governance and sectoral service delivery was highlighted in the World Bank’s 2004 World Development Report, which focused on accountability structures and processes (World Bank, 2004).

Since then, sectoral specialists’ awareness that governance interventions can contribute to service delivery improvements has increased substantially, and there is growing recognition that both technical and governance elements are necessary facets of strengthening public services. However, expanded awareness has not reliably translated into effective integration of governance into sectoral programs and projects in, for example, health, education, water, agriculture, or community development. The bureaucratic realities of donor programming offer a partial explanation…. Beyond bureaucratic barriers, though, lie ongoing gaps in practical knowledge of how best to combine attention to governance with sector-specific technical investments. What interventions make sense, and what results can reasonably be expected? What conditions support or limit both improved governance and better service delivery? How can citizens interact with public officials and service providers to express their needs, improve services, and increase responsiveness? Various models and compilations of best practices have been developed, but debates remain, and answers to these questions are far from settled. This volume investigates these questions and contributes to building understanding that will enhance both knowledge and practice. In this book, we examine six recent projects, funded mostly by the United States Agency for International Development and implemented by RTI International, that pursued several different paths to engaging citizens, public officials, and service providers on issues related to accountability and sectoral services…(More)”

What’s wrong with big data?


James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.

The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….

“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature entitled “Diagnosing the decline in pharmaceutical R&D efficiency” the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)
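The nine-year halving described above is simple exponential decay, which makes the scale of the decline easy to check. A minimal sketch (the arithmetic is illustrative only and not taken from the Nature paper):

```python
# Eroom's law as described above: drugs approved per billion R&D
# dollars has halved roughly every nine years since the 1950s.
# Illustrative arithmetic only; figures below are not from the paper.

def relative_efficiency(years_elapsed, halving_period=9.0):
    """Approvals per R&D dollar, relative to a 1950s baseline of 1.0."""
    return 0.5 ** (years_elapsed / halving_period)

# 63 years (e.g. 1950 to 2013) is seven halving periods:
print(relative_efficiency(63))  # 0.5**7, i.e. about 1/128th of the baseline
```

In other words, under this trend the same research dollar buys on the order of a hundred times fewer approvals than it did six decades earlier.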

We All Need Help: “Big Data” and the Mismeasure of Public Administration


Essay by Stephane Lavertu in Public Administration Review: “Rapid advances in our ability to collect, analyze, and disseminate information are transforming public administration. This “big data” revolution presents opportunities for improving the management of public programs, but it also entails some risks. In addition to potentially magnifying well-known problems with public sector performance management—particularly the problem of goal displacement—the widespread dissemination of administrative data and performance information increasingly enables external political actors to peer into and evaluate the administration of public programs. The latter trend is consequential because external actors may have little sense of the validity of performance metrics and little understanding of the policy priorities they capture. The author illustrates these potential problems using recent research on U.S. primary and secondary education and suggests that public administration scholars could help improve governance in the data-rich future by informing the development and dissemination of organizational report cards that better capture the value that public agencies deliver….(More)”.

Understanding the four types of AI, from reactive robots to self-aware beings


Arend Hintze at The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal move from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
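The mechanism described above – perceive the current state, score the moves legal right now, pick the best, remember nothing – can be sketched in a few lines. This is a toy illustration in the spirit of the article, not Deep Blue's actual method; the game, moves and scoring function are invented:

```python
# A purely reactive agent: it sees only the present state and the
# currently legal moves, and keeps no memory of anything earlier.

def reactive_choose(state, legal_moves, evaluate):
    """Pick the highest-scoring move given only the present state."""
    return max(legal_moves, key=lambda move: evaluate(state, move))

# Hypothetical example: a number game where a move adds its value
# to the state, and a higher resulting state is better.
state = 10
moves = [-3, 1, 4]
best = reactive_choose(state, moves, lambda s, m: s + m)
print(best)  # 4 -- chosen from the present position alone
```

Calling `reactive_choose` twice with the same state always yields the same move; nothing from the first call influences the second, which is exactly the limitation the article describes.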

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; rather, it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel….
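This kind of transient memory – a short window of recent observations that ages out rather than accumulating into a library of experience – can be sketched with a small rolling buffer. The class, names and numbers below are invented for illustration, not drawn from any self-driving system:

```python
from collections import deque

# "Limited memory": keep only the last few position readings of a
# tracked object to estimate its speed. Old readings fall off the
# buffer automatically -- nothing is stored permanently.

class TransientTracker:
    def __init__(self, window=3):
        self.positions = deque(maxlen=window)  # oldest entries age out

    def observe(self, position):
        self.positions.append(position)

    def estimated_speed(self):
        """Average change in position per observation interval."""
        if len(self.positions) < 2:
            return None
        readings = list(self.positions)
        deltas = [b - a for a, b in zip(readings, readings[1:])]
        return sum(deltas) / len(deltas)

tracker = TransientTracker()
for pos in [0.0, 1.5, 3.0, 4.5]:   # the first reading has already aged out
    tracker.observe(pos)
print(tracker.estimated_speed())   # 1.5
```

The `maxlen` bound is what makes the memory "limited": the tracker can answer questions about the last few moments, but it never gets better at tracking, because nothing persists beyond the window.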

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the kinds of representations machines need to form, and what those representations need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”

The crowdsourcing movement to improve African maps


Chris Stein for Quartz: “In map after map after map, many African countries appear as a void, marked with a color that signifies not a percentage or a policy but merely offers an explanation: “no data available.”

Where numbers or cartography has left African countries behind, developers are stepping in with open-source tools that allow anyone from academics to your everyday smartphone user to improve maps of the continent.

One such project is Missing Maps, which invites people to use satellite imagery on mapping platform OpenStreetMap to fill out roads, buildings and other features in various parts of Africa that lack these markers. Active projects on Missing Maps include everything from mapping houses in Malawi to marking roads in the Democratic Republic of Congo. Missing Maps co-founder Ivan Gayton said humanitarian organizations could use the refined maps for development projects or to respond to future disasters or disease outbreaks….

In July, Missing Maps launched MapSwipe, a smartphone app that helps whittle down the areas needed for mapping on OpenStreetMap by giving anyone with an iPhone or Android phone the ability to swipe through satellite images and indicate if they contain features like houses, roads or paths. These are then forwarded onto Missing Maps for precise marking of these features….Missing Maps’s approach is similar to that of Mapping Africa, a project developed at Princeton University that pays users to look at satellite images and identify croplands….People who sign up on Amazon’s Mechanical Turk service are given satellite images of random patches of land across Africa and asked to determine if the land is being used for farming.

…One outlet for Mapping Africa’s data could be AfricaMap, a Harvard University project where users can compile data on everything from ethnic groups to mother tongues to slave trade routes and layer it over a map of the continent….(More)”

Service Design Impact Report: Public Sector


SDN: “In our study we have identified five different areas that are relevant for service design: policy making, cultural and organizational change, training and capacity building, citizen engagement and digitization.

Service design is taking a role in “policy creation”. Not only does it bring in-depth insights into the needs and constraints of citizens that help to design policies that really work – it also enables and facilitates processes of co-creation with different stakeholders. Policies are perceived as pieces of design work in constant development, made by people for people.

Service design is also taking a role in the process of cultural and organizational change. It collaborates with other experts in this field in order to enable change by reframing the challenges, by engaging stakeholders in the development of scenarios of futures that do not yet exist, and by prototyping envisioned scenarios. These processes change the role of public servants from experts to partners. It is no longer the public service doing something for the citizens but doing it with them.

This new way of thinking and working demands a change not only in mindset but also in the way of doing things. Service design helps to build these new capacities. Very often this is a combination of teaching and learning by doing: in the process of capacity building, small service design projects can give participants a sense of what service design can do and how to do it.

In this sense, service design works alongside existing practices of citizen engagement and enriches them with the design approach. People are no longer victims of circumstances but creators of their environments.

Very often we find that the digitalization of public services is the entrance door for designers. This enables designers to expand their capacities and showcase how service design does not merely polish the bits and bytes but really changes the way we live and work….(Full report)”

A Practical Guide for Harnessing the Power of Data


How does it do that? In a word: data.

Using a series of surveys and evaluations, Repair learned that once people participate in two volunteer opportunities, they’re more likely to continue volunteering regularly. Repair has used that and other findings to inform its operations and strategy, and to accelerate its work to encourage individuals to make an enduring commitment to public service.

Many purpose-driven organizations like Repair the World are committing more brainpower, time, and money to gathering data, and nonprofit and foundation professionals alike are recognizing the importance of that effort.

And yet there is a difference between just having data and using it well. Recent surveys have found that 94 percent of nonprofit professionals felt they were not using data effectively, and that 75 percent of foundation professionals felt that evaluations conducted by and submitted to grant makers did not provide any meaningful insights.

To remedy this, the Charles and Lynn Schusterman Family Foundation (one of Repair the World’s donors) developed the Data Playbook, a new tool to help more organizations harness the power of data to make smarter decisions, gain insights, and accelerate progress….

In the purpose-driven sector, our work is critically important for shaping lives and strengthening communities. Now is the time for all of us to commit to using the data at our fingertips to advance the broad range of causes we work on — education, health care, leadership development, social-justice work, and much more…

We are in this together. Let’s get started. (More)”

Tackling Corruption with People-Powered Data


Sandra Prüfer at Mastercard Center for Inclusive Growth: “Informal fees plague India’s “free” maternal health services. In Nigeria, village households don’t receive the clean cookstoves their government paid for. Around the world, corruption – coupled with the inability to find and share information about it – stymies development in low-income communities.

Now, digital transparency platforms – supplemented with features illiterate and rural populations can use – make it possible for traditionally excluded groups to make their voices heard and access tools they need to grow.

Mapping Corruption Hot Spots in India

One of the problems surrounding access to information is the lack of reliable information in the first place: a popular method to create knowledge is crowdsourcing and enlisting the public to monitor and report on certain issues.

The Mera Swasthya Meri Aawaz platform, which means “Our Health, Our Voice”, is an interactive map in Uttar Pradesh launched by the Indian non-profit organization SAHAYOG. It enables women to anonymously report illicit fees charged for services at maternal health clinics using their mobile phones.

To reduce infant mortality and deaths in childbirth, the Indian government provides free prenatal care and cash incentives to use maternal health clinics, but many charge illegal fees anyway – cutting mothers off from lifesaving healthcare and inhibiting communities’ growth. An estimated 45,000 women in India died in 2015 from complications of pregnancy and childbirth – one of the highest rates of any country in the world; low-income women are disproportionately affected….“Documenting illegal payment demands in real time and aggregating the data online increased governmental willingness to listen,” Sandhya says. “Because the data is linked to technology, its authenticity is not questioned.”

Following the Money in Nigeria

In Nigeria, Connected Development (CODE) also champions open data to combat corruption in infrastructure building, health and education projects. Its mission is to improve access to information and empower local communities to share data that can expose financial irregularities. Since 2012, the Abuja-based watchdog group has investigated twelve capital projects, successfully pressuring the government to release funds including $5.3 million to treat 1,500 lead-poisoned children.

“People activate us: if they know about any project that is supposed to be in their community, but isn’t, they tell us they want us to follow the money – and we’ll take it from there,” says CODE co-founder Oludotun Babayemi.

Users alert the watchdog group directly through its webpage, which publishes open-source data about development projects that are supposed to be happening, based on reports from freedom of information requests to Nigeria’s federal minister of environment, World Bank data and government press releases.

Last year, as part of their #WomenCookstoves reporting campaign, CODE revealed an apparent scam by tracking a $49.8 million government project that was supposed to purchase 750,000 clean cookstoves for rural women. Smoke inhalation diseases disproportionately affect women who spend time cooking over wood fires; according to the World Health Organization, almost 100,000 people die yearly in Nigeria from inhaling wood smoke, the country’s third biggest killer after malaria and AIDS.

“After three months, we found out that only 15 percent of the $48 million was given to the contractor – meaning there were only 45,000 cookstoves out of 750,000 in the country,” Babayemi says….(More)”