Innovations in satellite measurements for development


Ran Goldblatt, Trevor Monroe, Sarah Elizabeth Antos, Marco Hernandez at the World Bank Data Blog: “The desire of human beings to “think spatially” to understand how people and objects are organized in space has not changed much since Eratosthenes—the Greek astronomer best known as the “father of Geography”—first used the term “Geographika” around 250 BC. Centuries later, our understanding of economic geography is being propelled forward by new data and new capabilities to rapidly process, analyze and convert these vast data flows into meaningful and near real-time information.

The increasing availability of satellite data has transformed how we use remote sensing analytics to understand, monitor and achieve the 2030 Sustainable Development Goals. As satellite data becomes ever more accessible and frequent, it is now possible not only to better understand how the Earth is changing, but also to utilize these insights to improve decision making, guide policy, deliver services, and promote better-informed governance. Satellites capture many of the physical, economic and social characteristics of Earth, providing a unique asset for developing countries, where reliable socio-economic and demographic data is often not consistently available. Analysis of satellite data was once relegated to researchers with access to costly data or to “supercomputers”. Today, the increased availability of “free” satellite data, combined with powerful cloud computing and open source analytical tools, has democratized data innovation, enabling local governments and agencies to use satellite data to improve sector diagnostics, development indicators, program monitoring and service delivery.

Drivers of innovation in satellite measurements

  • Big (geo)data – satellites are improving every day, creating new opportunities for impact in global development. They capture millions of images of Earth in different spatial, spectral and temporal resolutions, generating data in ever increasing volume, variety and velocity.
  • Open Source – open source annotated datasets, the World Bank’s Open Data, and other publicly available resources allow users to process and document the data (e.g. Cumulus, Label Maker) and perform machine learning analysis using common programming languages such as R or Python (a minimal sketch follows this list).
  • Crowd – crowdsourcing platforms like MTurk, Figure Eight and Tomnod are used to collect and enhance inputs (reference data) that train machines to automatically identify specific objects and land cover on Earth.
  • High Quality Ground Truth – robust algorithms that analyze the entire planet require diverse training data, and traditional development microdata can serve as ground truth for machine learning training, validation and calibration, for example to map urbanization processes.
  • Cloud – cloud computing and data storage capabilities within platforms like AWS, Azure and Google Earth Engine provide scalable solutions for storage, management and parallel processing of large volumes of data.
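To make these drivers concrete, here is a minimal sketch of the kind of open source analysis the post describes: computing NDVI, a standard vegetation index, from a Landsat-8 scene with Python. The file name is a placeholder and the band numbering assumes a Landsat-8 multi-band GeoTIFF; this is an illustration under those assumptions, not a World Bank workflow.

```python
# Minimal NDVI sketch (illustrative). Assumes a local multi-band
# Landsat-8 surface-reflectance GeoTIFF; the path is hypothetical.
import numpy as np
import rasterio

with rasterio.open("landsat8_scene.tif") as src:
    red = src.read(4).astype("float32")  # Landsat-8 band 4 = red
    nir = src.read(5).astype("float32")  # Landsat-8 band 5 = near-infrared

# NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense vegetation.
ndvi = np.where((nir + red) == 0.0, 0.0, (nir - red) / (nir + red))
print(f"mean NDVI: {ndvi.mean():.3f}")
```

The same computation scales to planetary archives on cloud platforms such as Google Earth Engine, where the imagery stays in the cloud and only the analysis is shipped to it.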

…As petabytes of geo data are being collected, novel methods are being developed to convert these data into meaningful information about the nature and pace of change on Earth, for example, the formation of urban landscapes and human settlements, the creation of transportation networks that connect cities, or the conversion of natural forests into productive agricultural land. New possibilities emerge for harnessing this data for a better understanding of our changing world….(More)”.

Digital rights as a security objective: New gateways for attacks


Yannic Blaschke at EDRi: “Violations of human rights online, most notably the right to data protection, can pose a real threat to electoral security and fuel societal polarisation. In this series of blogposts, we’ll explain how and why digital rights must be treated as a security objective instead. The second part of the series explains how encroaching on digital rights could create new gateways for attacks against our security.

In the first part of this series, we analysed the failure of the Council of the European Union to connect the obvious dots between ePrivacy and disinformation online, leaving open a security vulnerability through a lack of protection of citizens. However, a failure to act is not the only front on which the EU is potentially weakening our security on- and offline: on the contrary, some of the EU’s more actively pursued digital policies could have unintended, yet serious consequences in the future. Nowhere is this trend more visible than in the recent trust in filtering algorithms, which seem to be the new “censorship machine” that is proposed as a solution for almost everything, from copyright infringements to terrorist content online.

Article 13 of the Copyright Directive proposal and the Terrorist Content Regulation proposal are two examples of the attempt to regulate the online world via algorithms. While having different motivations, both share the logic of outsourcing accountability and enforcement of public rules to private entities, who will be the ones deciding about the availability of speech online. They, explicitly or implicitly, advocate for the introduction of technologies that detect and remove certain types of content: upload filters. They empower internet companies to decide which content will stay online, based on their terms of service (and not law). In a nutshell, public institutions are encouraging Google, Facebook and other platform giants to become the judge and the police of the internet. In turn, they undermine the presumption that it should be democratically legitimised states, not private entities, who are tasked with the heavy burden of balancing the right to freedom of expression.
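To see why this worries digital rights advocates, consider a deliberately naive sketch of an upload filter: a blocklist of content fingerprints checked at upload time. Real deployments use perceptual hashing or machine learning classifiers, but the structural point is the same: the filter matches patterns and has no notion of quotation, parody, or lawful use. All names and values here are illustrative.

```python
# Naive upload filter sketch (illustrative only): exact-hash blocklisting.
import hashlib

# Fingerprints of previously flagged files (values are illustrative).
BLOCKLIST = {hashlib.sha256(b"flagged-clip").hexdigest()}

def allow_upload(content: bytes) -> bool:
    """Allow the upload unless its fingerprint is on the blocklist."""
    return hashlib.sha256(content).hexdigest() not in BLOCKLIST

print(allow_upload(b"news report quoting the flagged clip"))  # True (bytes differ)
print(allow_upload(b"flagged-clip"))  # False: removed with no judicial review
```

The decision turns entirely on the fingerprint match; nothing in this pipeline can weigh freedom of expression against a blocklist entry, which is precisely the balancing act the law requires.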

Even more chilling is the prospect of upload filters creating new entry points for forces that seek to influence societal debates in their favour. If algorithms become the judges of what can or cannot be published, they could become the target of the next wave of election interference campaigns, with attackers manipulating them into taking down critical or liberal voices to influence debates on the internet. Despite continuous warnings about the misuse of personal data on Facebook, it only took us a few years to arrive at the point of Cambridge Analytica. How long will it take us to arrive at a similar point of election interference through upload filters in online platforms?

If we let this pre-emptive and extra-judicial censorship happen, it would likely result in severe detriments to the freedom of speech and right to information of European citizens, and the free flow of information would, in consequence, be stifled. The societal effects of this could be further aggravated by the introduction of a press publishers’ right (Article 11 of the Copyright Directive) that is strongly opposed by the academic world, as it will concentrate the power over what appears in the news in ever fewer hands. Especially in Member States where media plurality and independence of bigger outlets from state authorities are no longer guaranteed, a decline in societal resilience to authoritarian tendencies is unfortunately easy to imagine.

We have to be very clear about what machines are good at and what they are bad at: Algorithms are incredibly well suited to detect patterns and trends, but cannot and will not be able to perform the delicate act of balancing our rights and freedoms in accordance with the law any time soon….(More)”

The promises — and challenges — of data collaboratives for the SDGs


Paula Hidalgo-Sanchis and Stefaan G. Verhulst at Devex: “As the road to achieving the Sustainable Development Goals becomes more complex and challenging, policymakers around the world need both new solutions and new ways to become more innovative. This includes better policy and program design based on evidence to solve problems at scale. The use of big data — the vast majority of which is collected, processed, and analyzed by the private sector — is key.

In the past few months, we at UN Global Pulse and The GovLab have sought to understand pathways to make policymaking more evidence-based and data-driven with the use of big data. Working in parallel at both local and global scale, we have conducted extensive desk research, held a series of workshops, and conducted in-depth conversations and interviews with key stakeholders, including government, civil society, and private sector representatives.

Our work is driven by a recognition of the potential of using privately processed data through data collaboratives — a new form of public-private partnership in which government, private industry, and civil society work together to release previously siloed data, making it available to address the challenges of our era.

Research suggests that data collaboratives offer tremendous potential when implemented strategically under the appropriate policy and ethical frameworks. Nonetheless, this remains a nascent field, and we have summarized some of the barriers that continue to confront data collaboratives, with an eye toward ultimately proposing solutions to make them more effective, scalable, sustainable, and responsible.

Here are seven challenges…(More)”.

Democracy From Above? The Unfulfilled Promise of Nationally Mandated Participatory Reforms


Book by Stephanie L. McNulty: “People are increasingly unhappy with their governments in democracies around the world. In countries as diverse as India, Ecuador, and Uganda, governments are responding to frustrations by mandating greater citizen participation at the local and state level. Officials embrace participatory reforms, believing that citizen councils and committees lead to improved accountability and more informed communities. Yet there’s been little research on the efficacy of these efforts to improve democracy, despite an explosion in their popularity since the mid-1980s. Democracy from Above? tests the hypothesis that top-down reforms strengthen democracies and evaluates the conditions that affect their success.

Stephanie L. McNulty addresses the global context of participatory reforms in developing nations. She observes and interprets what happens after greater citizen involvement is mandated in seventeen countries, with close case studies of Guatemala, Bolivia, and Peru. The first cross-national comparison on this issue, Democracy from Above? explores whether the reforms effectively redress the persistent problems of discrimination, elite capture, clientelism, and corruption in the countries that adopt them. As officials and reformers around the world and at every level of government look to strengthen citizen involvement and confidence in the political process, McNulty provides a clear understanding of the possibilities and limitations of nationally mandated participatory reforms…(More)”.

Blockchain’s Occam problem


Report by Matt Higginson, Marie-Claude Nadeau, and Kausik Rajgopal: “Blockchain has yet to become the game-changer some expected. A key to finding the value is to apply the technology only when it is the simplest solution available.

Blockchain over recent years has been extolled as a revolution in business technology. In the nine years since its launch, companies, regulators, and financial technologists have spent countless hours exploring its potential. The resulting innovations have started to reshape business processes, particularly in accounting and transactions.

Amid intense experimentation, industries from financial services to healthcare and the arts have identified more than 100 blockchain use cases. These range from new land registries to KYC applications and smart contracts that enable actions from product processing to share trading. The most impressive results have seen blockchains used to store information, cut out intermediaries, and enable greater coordination between companies, for example in relation to data standards….
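For readers new to the technology, the following minimal sketch shows the core idea behind the use cases above: records stored in hash-linked blocks, so that any later tampering is detectable by anyone holding a copy. It is a toy illustration under simplifying assumptions (no consensus protocol, no distribution), not any production ledger.

```python
# Toy hash-linked ledger (illustrative only).
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Bundle a record with a timestamp and the previous block's hash."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    payload = json.dumps(
        {k: block[k] for k in ("time", "data", "prev")}, sort_keys=True
    ).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check the links; False if anything changed."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("time", "data", "prev")}, sort_keys=True
        ).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("parcel 17 registered to A", prev_hash="0")]
chain.append(make_block("parcel 17 transferred to B", chain[-1]["hash"]))
print(verify(chain))   # True: the ledger is internally consistent
chain[0]["data"] = "parcel 17 registered to C"   # tamper with a record
print(verify(chain))   # False: the altered block no longer matches its hash
```

The design point worth noting is that integrity comes from recomputable hashes rather than a trusted intermediary, which is what the "cut out intermediaries" claim rests on.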

There is a clear sense that blockchain is a potential game-changer. However, there are also emerging doubts. A particular concern, given the amount of money and time spent, is that little of substance has been achieved. Of the many use cases, a large number are still at the idea stage, while others are in development but with no output. The bottom line is that despite billions of dollars of investment, and nearly as many headlines, evidence for a practical scalable use for blockchain is thin on the ground.

Infant technology

From an economic theory perspective, the stuttering blockchain development path is not entirely surprising. It is an infant technology that is relatively unstable, expensive, and complex. It is also unregulated and selectively distrusted. Classic lifecycle theory suggests the evolution of any industry or product can be divided into four stages: pioneering, growth, maturity, and decline (exhibit). Stage 1 is when the industry is getting started, or a particular product is brought to market. This is ahead of proven demand and often before the technology has been fully tested. Sales tend to be low and return on investment is negative. Stage 2 is when demand begins to accelerate, the market expands and the industry or product “takes off.”

[Exhibit: Blockchain is struggling to emerge from the pioneering stage.]

Across its many applications, blockchain arguably remains stuck at stage 1 in the lifecycle (with a few exceptions). The vast majority of proofs of concept (POCs) are in pioneering mode (or being wound up) and many projects have failed to get to Series C funding rounds.

One reason for the lack of progress is the emergence of competing technologies. In payments, for example, it makes sense that a shared ledger could replace the current highly intermediated system. However, blockchains are not the only game in town. Numerous fintechs are disrupting the value chain. Of nearly $12 billion invested in US fintechs last year, 60 percent was focused on payments and lending. SWIFT’s global payments innovation initiative (GPI), meanwhile, is addressing initial pain points through higher transaction speeds and increased transparency, building on bank collaboration….(More)” (See also: Blockchange)

Crowdsourced mapping in crisis zones: collaboration, organisation and impact


Amelia Hunt and Doug Specht in the Journal of International Humanitarian Action: “Crowdsourced mapping has become an integral part of humanitarian response, with high profile deployments of platforms following the Haiti and Nepal earthquakes, and the multiple projects initiated during the Ebola outbreak in West Africa in 2014, being prominent examples. There have also been hundreds of deployments of crowdsourced mapping projects across the globe that did not have a high profile.

This paper, through an analysis of 51 mapping deployments between 2010 and 2016, complemented with expert interviews, seeks to explore the organisational structures that create the conditions for effective mapping actions, and the relationship between the commissioning body, often a non-governmental organisation (NGO), and the volunteers who regularly make up the team charged with producing the map.

The research suggests that there are three distinct areas that need to be improved in order to provide appropriate assistance through mapping in humanitarian crises: regionalise, prepare and research. Drawing on the case studies, the paper shows how each of these areas can be handled more effectively, concluding that failure to implement any one of them sufficiently can lead to overall project failure….(More)”

Smart cities could be lousy to live in if you have a disability


Elizabeth Woyke in MIT Technology Review: “People with disabilities affecting mobility, vision, hearing, and cognitive function often move to cities to take advantage of their comprehensive transit systems and social services. But US law doesn’t specify how municipalities should design and implement digital services for disabled people. As a result, cities sometimes adopt new technologies that can end up causing, rather than resolving, problems of accessibility.

Nowhere was this more evident than with New York City’s LinkNYC kiosks, which were installed on sidewalks in 2016 without including instructions in Braille or audible form. Shortly after they went in, the American Federation for the Blind sued the city. The suit was settled in 2017 and the kiosks have been updated, but Victor Pineda, co-lead of the Smart Cities for All initiative discussed below, says touch screens in general are still not fully accessible to people with disabilities.

Also problematic: the social-media-based apps that some municipal governments have started using to solicit feedback from residents. Blind and low-vision people typically can’t use the apps, and people over 65 are less likely to, says James Thurston, a vice president at the nonprofit G3ict, which promotes accessible information and communication technologies. “Cities may think they’re getting data from all their residents, but if those apps aren’t accessible, they’re leaving out the voices of large chunks of their population,” he says….

Even for city officials who have these issues on their minds, knowing where to begin can be difficult. Smart Cities for All, an initiative led by Thurston and Pineda, aims to help by providing free, downloadable tools that cities can use to analyze their technology and find more accessible options. One is a database of hundreds of pre-vetted products and services. Among the entries are Cyclomedia, which uses lidar data to determine when city sidewalks need maintenance, and ZenCity, a data analytics platform that uses AI to gauge what people are saying about a city’s level of accessibility. 

This month, the group will kick off a project working with officials in Chicago to grade the city on how well it supports people with disabilities. One key part of the project will be ensuring the accessibility of a new 311 phone system being introduced as a general portal to city services. The group has plans to expand to several other US cities this year, but its ultimate aim is to turn the work into a global movement. It’s met with governments in India and Brazil as well as Sidewalk Labs, the Alphabet subsidiary that is developing a smart neighborhood in Toronto….(More)”.

IBM aims to use crowdsourced sensor data to improve local weather forecasting globally


Larry Dignan at ZDNet: “IBM is hoping that mobile barometric sensors from individuals opting in, supercomputing, and the Internet of Things can make weather forecasting more local, globally.

Big Blue, which owns The Weather Company, will outline the IBM Global High-Resolution Atmospheric Forecasting System (GRAF). GRAF incorporates IoT data in its weather models via crowdsourcing.

While hyperlocal weather forecasts are available in the US, Japan, and some parts of Western Europe, many regions in the world lack an accurate picture of weather.

Mary Glackin, senior vice president of The Weather Company, said the company is “trying to fill in the blanks.” She added, “In a place like India, weather stations are kilometers away. We think this can be as significant as bringing satellite data into models.”

For instance, the developing world gets forecasts based on global data that are updated every six hours, at resolutions of 10 km to 15 km. By using GRAF, IBM said it can offer forecasts for the day ahead that are updated hourly on average and have a 3 km resolution….(More)”.
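A quick back-of-envelope calculation shows what the finer grid and faster refresh imply, assuming square grid cells (an illustration of the figures quoted above, not IBM's methodology):

```python
# Back-of-envelope comparison of the forecast grids quoted above.
coarse_km = 13.0  # midpoint of the 10-15 km global-model resolution
fine_km = 3.0     # GRAF's stated resolution

# Cells covering a fixed area scale with the inverse square of cell size.
print(f"~{(coarse_km / fine_km) ** 2:.0f}x more grid cells per unit area")  # ~19x

# Refresh rate: hourly vs. every six hours.
print(f"{24 // 1} vs {24 // 6} forecast updates per day")  # 24 vs 4
```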

A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework


Report by Karen Yeung: “This study was commissioned by the Council of Europe’s Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). It was prompted by concerns about the potential adverse consequences of advanced digital technologies (including artificial intelligence (‘AI’)), particularly their impact on the enjoyment of human rights and fundamental freedoms. This draft report seeks to examine the implications of these technologies for the concept of responsibility, and this includes investigating where responsibility should lie for their adverse consequences. In so doing, it seeks to understand (a) how human rights and fundamental freedoms protected under the ECHR may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated. 

Its methodological approach is interdisciplinary, drawing on concepts and academic scholarship from the humanities, the social sciences and, to a more limited extent, from computer science. It concludes that, if we are to take human rights seriously in a hyperconnected digital age, we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. Nations committed to protecting human rights must therefore ensure that those who wield and derive benefits from developing and deploying these technologies are held responsible for their risks and consequences. This includes obligations to ensure that there are effective and legitimate mechanisms that will operate to prevent and forestall violations to human rights which these technologies may threaten, and to attend to the health of the larger collective and shared socio-technical environment in which human rights and the rule of law are anchored….(More)”.

Political Selection and Bureaucratic Productivity


Paper by James P. Habyarimana et al: “Economic theory of public bureaucracies as complex organizations predicts that bureaucratic productivity can be shaped by the selection of different types of agents, beyond their incentives. This theory applies to the institutions of local government in the developing world, where nationally appointed bureaucrats and locally elected politicians together manage the implementation of public policies and the delivery of services. Yet, there is no evidence on whether (and which) selection traits of these bureaucrats and politicians matter for the productivity of local bureaucracies.

This paper addresses the empirical gap by gathering rich data in an institutional context of district governments in Uganda, which is typical of the local state in poor countries. The paper measures traits such as the integrity, altruism, personality, and public service motivation of bureaucrats and politicians. It finds robust evidence that higher integrity among locally elected politicians is associated with substantively better delivery of public health services by district bureaucracies. Together with the theory, this evidence suggests that policy makers seeking to build local state capacity in poor countries should take political selection seriously….(More)”.