How randomised trials became big in development economics


Seán Mfundza Muller, Grieve Chelwa, and Nimi Hoffmann at the Conversation: “…One view of the challenge of development is that it is fundamentally about answering causal questions. If a country adopts a particular policy, will that cause an increase in economic growth, a reduction in poverty or some other improvement in the well-being of citizens?

In recent decades economists have been concerned about the reliability of previously used methods for identifying causal relationships. In addition to those methodological concerns, some have argued that “grand theories of development” are either incorrect or at least have failed to yield meaningful improvements in many developing countries.

Two notable examples are the idea that developing countries may be caught in a poverty trap that requires a “big push” to escape and the view that institutions are key for growth and development.

These concerns about methods and policies provided fertile ground for randomised experiments in development economics. The surge of interest in experimental approaches in economics began in the early 1990s. Researchers began to use “natural experiments”, where, for example, random variation was built into a policy rather than introduced by a researcher, to examine causation.

But it really gathered momentum in the 2000s, with researchers such as the Nobel awardees designing and implementing experiments to study a wide range of microeconomic questions.

Randomised trials

Proponents of these methods argued that a focus on “small” problems was more likely to succeed. They also argued that randomised experiments would bring credibility to economic analysis by providing a simple solution to causal questions.

These experiments randomly allocate a treatment to some members of a group and compare the outcomes against the other members who did not receive treatment. For example, to test whether providing credit helps to grow small firms or increase their likelihood of success, a researcher might partner with a financial institution and randomly allocate credit to applicants who meet certain basic requirements. Then a year later the researcher would compare changes in sales or employment in small firms that received the credit to those that did not.
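The logic of such a comparison is simple enough to sketch in code. The toy simulation below uses entirely hypothetical firms and an assumed treatment effect (not data from any actual study) to show why random assignment lets a plain difference in means estimate a causal effect:

```python
import random

random.seed(42)

# Toy simulation: 500 small firms apply for credit; half are randomly
# assigned to receive it (treatment), the rest serve as the control group.
applicants = list(range(500))
random.shuffle(applicants)
treatment = set(applicants[:250])

def sales_growth(firm_id):
    # Hypothetical outcome: a 10-point baseline growth rate plus noise,
    # with an assumed +5 percentage-point effect of receiving credit.
    effect = 5.0 if firm_id in treatment else 0.0
    return 10.0 + effect + random.gauss(0, 8)

outcomes = {f: sales_growth(f) for f in range(500)}
treated = [outcomes[f] for f in range(500) if f in treatment]
control = [outcomes[f] for f in range(500) if f not in treatment]

# Because assignment was random, the two groups differ only by chance
# and by the treatment, so the difference in mean outcomes is an
# unbiased estimate of the causal effect of credit.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated effect on sales growth: {estimate:.1f} points")
```

Running this recovers an estimate close to the assumed 5-point effect; the residual gap is sampling noise, which shrinks as the number of firms grows.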

Randomised trials are not a new research method. They are best known for their use in testing new medicines. The first medical experiment to use controlled randomisation occurred in the aftermath of the second world war. The British government used it to assess the effectiveness of a drug for tuberculosis treatment.

In the early and mid-20th century, American researchers used experiments like this to examine the effects of various social policies. Examples included income protection and social housing.

The introduction of these methods into development economics also followed an increase in their use in other areas of economics. One example was the study of labour markets.

Randomised controlled trials in economics are now mostly used to evaluate the impact of social policy interventions in poor and middle-income countries. Work by the 2019 Nobel awardees – Michael Kremer, Abhijit Banerjee and Esther Duflo – includes experiments in Kenya and India on teacher attendance, textbook provision, monitoring of nurse attendance and the provision of microcredit.

The popularity, among academics and policymakers, of the approach is not only due to its seeming ability to solve methodological and policy concerns. It is also due to very deliberate, well-funded advocacy by its proponents….(More)”.

A World With a Billion Cameras Watching You Is Just Around the Corner


Liza Lin and Newley Purnell at the Wall Street Journal: “As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report.

The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.

Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.

Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

“It’s a public-safety issue,” Mr. Philippou said in an interview. “There is a big focus on crime and terrorism in recent years.”

The global security-camera industry has been energized by breakthroughs in image quality and artificial intelligence. These allow better and faster facial recognition and video analytics, which governments are using to do everything from managing traffic to predicting crimes.

China leads the world in the rollout of this kind of technology. It is home to the world’s largest camera makers, with its cameras on street corners, along busy roads and in residential neighborhoods….(More)”.

Facial recognition needs a wider policy debate


Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.

In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.

At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.

The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for things such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use, can only lead to unintended consequences. Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence. A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….

Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (which is why data is traded for free services while oil still requires cold, hard cash)….(More)”.

The Digital Roadmap


Report by the Pathway for Prosperity Commission: “The Digital Roadmap presents an overarching vision for a globally connected world that both delivers on the opportunities presented by technology, and limits downside risks. Importantly, it also sets out how this vision can be achieved.

Craft a digital compact for inclusive development

Embracing country-wide digital change will be disruptive. Navigating it requires coordinated action. Reconfiguring an economy will result in some resistance. The best way to achieve buy-in, and to balance trade-offs, is through dialogue with the private sector and civil society in its broadest sense (including community leaders, academia, trade unions, NGOs, and faith groups). The political economy of upheaval is difficult, but change can be managed through discussions that are inclusive of multiple groups. These dialogues should result in a national digital compact: a shared vision of the future to which everyone commits. The Pathways Commission has supported three countries – Ethiopia, Mongolia and South Africa – as they each developed country-wide digital strategies, using the Digital Economy Kit.

Put people at the centre of the digital future

Rapid technological change affects people’s lives. Failure to put people at the centre of social and economic change can lead to social unrest. The pace and intensity of change makes it all the more important that people are at the centre of the digital future – not the technology. This requires equipping people to benefit from opportunities, while also protecting them from the potential harms of the digital age. Governments should take responsibility for ensuring that vocational education is truly useful for workers and for business in the digital age. The private sector needs to be involved in keeping curricula up to date.

Build the digital essentials

Digital products and services cannot be created in a vacuum – essential components need to be in place: physical infrastructure, foundational digital systems (such as digital identification and mobile money), and capital to invest in innovation. These are the basic ingredients needed for existing firms to adopt more productive technologies, and for digital entrepreneurs to build and innovate. Having reliable infrastructure and interoperable systems means that firms and service providers can focus on their core business, without having to build an enabling environment from scratch.

Reach everyone with digital technologies

If technology is to be a force for development for everyone, it must reach everyone. Just over half of the world’s population is connected to a digital life; for the rest, digital opportunities don’t mean much. Without digital connections, people can’t participate in digital work platforms, benefit from new technologies in education, or engage with government services online. Women, people with lower levels of education, and people in poverty are usually those who lack digital access. Reaching everyone requires looking beyond current business models. The private sector needs to design for inclusion, ensuring the poorest and most marginalised consumers are not left even further behind.

Govern technology for the future

The unprecedented pace of change and emergence of new risks in the digital era (such as algorithmic bias, cybersecurity, and threats to privacy) are creating headaches for even the most well-resourced countries. For developing countries, the challenges are even bigger. Digital technologies fundamentally shape what people do and how they do it: freelancers may face algorithms that determine their chances of getting hired. Banks might face a financial system with heightened risk from new, non-bank deposit holders. These issues, and many others, require new and adaptive approaches to decision-making. Emerging global norms will need to consider the needs of developing countries….(More)”.

Government at a Glance 2019


OECD Report: “Government at a Glance provides reliable, internationally comparative data on government activities and their results in OECD countries. Where possible, it also reports data for Brazil, China, Colombia, Costa Rica, India, Indonesia, the Russian Federation and South Africa. In many public governance areas, it is the only available source of data. It includes input, process, output and outcome indicators as well as contextual information for each country.

The 2019 edition includes input indicators on public finance and employment; while processes include data on institutions, budgeting practices and procedures, human resources management, regulatory governance, public procurement and digital government and open data. Outcomes cover core government results (e.g. trust, inequality reduction) and indicators on access, responsiveness, quality and citizen satisfaction for the education, health and justice sectors.

Governance indicators are especially useful for monitoring and benchmarking governments’ progress in their public sector reforms. Each indicator in the publication is presented in a user-friendly format, consisting of graphs and/or charts illustrating variations across countries and over time, brief descriptive analyses highlighting the major findings conveyed by the data, and a methodological section on the definition of the indicator and any limitations in data comparability….(More)”.

Voting could be the problem with democracy


Bernd Reiter at The Conversation: “Around the globe, citizens of many democracies are worried that their governments are not doing what the people want.

When voters pick representatives to engage in democracy, they hope they are picking people who will understand and respond to constituents’ needs. U.S. representatives have, on average, more than 700,000 constituents each, making this task more and more elusive, even with the best of intentions. Less than 40% of Americans are satisfied with their federal government.

Across Europe, South America, the Middle East and China, social movements have demanded better government – but gotten few real and lasting results, even in those places where governments were forced out.

In my work as a comparative political scientist working on democracy, citizenship and race, I’ve been researching democratic innovations in the past and present. In my new book, “The Crisis of Liberal Democracy and the Path Ahead: Alternatives to Political Representation and Capitalism,” I explore the idea that the problem might actually be democratic elections themselves.

My research shows that another approach – randomly selecting citizens to take turns governing – offers the promise of reinvigorating struggling democracies. That could make them more responsive to citizen needs and preferences, and less vulnerable to outside manipulation….

For local affairs, citizens can participate directly in local decisions. In Vermont, the first Tuesday of March is Town Meeting Day, a public holiday during which residents gather at town halls to debate and discuss any issue they wish.

In some Swiss cantons, townspeople meet once a year, in what are called Landsgemeinden, to elect public officials and discuss the budget.

For more than 30 years, communities around the world have involved average citizens in decisions about how to spend public money in a process called “participatory budgeting,” which involves public meetings and the participation of neighborhood associations. As many as 7,000 towns and cities allocate at least some of their money this way.

The Governance Lab, based at New York University, has taken crowd-sourcing to cities seeking creative solutions to some of their most pressing problems in a process best called “crowd-problem solving.” Rather than leaving problems to a handful of bureaucrats and experts, all the inhabitants of a community can participate in brainstorming ideas and selecting workable possibilities.

Digital technology makes it easier for larger groups of people to inform themselves about, and participate in, potential solutions to public problems. In the Polish harbor city of Gdansk, for instance, citizens were able to help choose ways to reduce the harm caused by flooding….(More)”.

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first email was sent. It was a decidedly local affair. A man sat in front of a teleprinter connected to an early precursor of the internet known as Arpanet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weakness has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.
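The intuition behind bundling can be sketched with a toy example (all numbers invented for illustration, bearing no resemblance to any real model’s output): each model is noisy in its own way, and averaging their forecasts dampens the individual errors while preserving the signal the models agree on.

```python
# Toy ensemble: three hypothetical models each output a probability of
# conflict onset for a set of regions. Each model is biased differently;
# the ensemble simply averages their forecasts per region.
model_forecasts = {
    "model_a": {"region_1": 0.10, "region_2": 0.70, "region_3": 0.30},
    "model_b": {"region_1": 0.20, "region_2": 0.90, "region_3": 0.20},
    "model_c": {"region_1": 0.05, "region_2": 0.80, "region_3": 0.40},
}

def ensemble(forecasts):
    # Average each region's probability across all models.
    regions = next(iter(forecasts.values())).keys()
    return {
        r: sum(m[r] for m in forecasts.values()) / len(forecasts)
        for r in regions
    }

combined = ensemble(model_forecasts)
# Rank regions by combined risk, so attention and aid can be directed
# to where the models jointly see the greatest danger.
ranked = sorted(combined, key=combined.get, reverse=True)
print(ranked)  # region_2 ranks first: all three models agree it is highest risk
```

Real ensembles weight the component models by past performance rather than averaging them equally, but the principle is the same: disagreements between models wash out, agreement is amplified.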

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.