Paper by Vincent Tawiah, Barnes Evans and Abdulrasheed Zakari: “Despite the extensive empirical literature on aid effectiveness, existing studies have not directly addressed how political ideology affects the use of foreign aid in the recipient country. This study, therefore, uses a unique dataset of 12 democratic countries in Africa to investigate the impact of political ideologies on aid effectiveness. Our results indicate that each political party uses aid differently in pursuit of its political-ideological orientation. Further analyses suggest that rightist capitalist parties are likely to use aid to improve the private sector environment. Leftist socialist parties, on the other hand, use aid effectively on pro-poor projects such as short-term poverty reduction, mass education and health services. Our additional analysis along colonial lines shows that the difference in the use of aid by political parties is much stronger in French colonies than in British colonies. The study provides insight into how recipient governments are likely to use foreign aid….(More)”.
Data to the rescue
Podcast by Kenneth Cukier: “Access to the right data can be as valuable in humanitarian crises as water or medical care, but it can also be dangerous. Misused or in the wrong hands, the same information can put already vulnerable people at further risk. Kenneth Cukier hosts this special edition of Babbage examining how humanitarian organisations use data and what they can learn from the profit-making tech industry. This episode was recorded live from Wilton Park, in collaboration with the United Nations OCHA Centre for Humanitarian Data…(More)”.
Data Collaboration for the Common Good: Enabling Trust and Innovation Through Public-Private Partnerships
World Economic Forum Report: “As the digital technologies of the Fourth Industrial Revolution continue to drive change throughout all sectors of the global economy, a unique moment exists to create a more inclusive, innovative and resilient society. Central to this change is the use of data. It is abundantly available, but if improperly used it will be the source of dangerous and unwelcome results.
When data is shared, linked and combined across sectoral and institutional boundaries, a multiplier effect occurs. Connecting one bit with another unlocks new insights and understandings that often weren’t anticipated. Yet, due to commercial limits and liabilities, the full value of data is often unrealized. This is particularly true when it comes to using data for the common good. While public-private data collaborations represent an unprecedented opportunity to address some of the world’s most urgent and complex challenges, they have generally been small and limited in impact. An entangled set of legal, technical, social, ethical and commercial risks has created an environment where the incentives for innovation have stalled. Additionally, the widening lack of trust among individuals and institutions creates even more uncertainty. After nearly a decade of anticipation about the promise of public-private data collaboration – with relatively few examples of success at global scale – a pivotal moment has arrived to encourage progress and move forward….(More)”
(See also http://datacollaboratives.org/).
Pitfalls of Aiming to Empower the Bottom from the Top: The Case of Philippine Participatory Budgeting
Paper by Joy Aceron: “… explains why and how a reform program that opened up spaces for participatory budgeting was ultimately unable to result in pro-citizen power shifts that transformed governance. The study reviews the design and implementation of Bottom-Up Budgeting (BuB), the nationwide participatory budgeting (PB) program in the Philippines, which ran from 2012 to 2016 under the Benigno Aquino government. The findings underscore the importance of institutional design to participatory governance reforms. BuB’s goal was to transform local government by providing more space for civil society organizations (CSOs) to co-identify projects with the government and to take part in the budgeting process, but it did not strengthen CSO or grassroots capacity to hold their Local Government Units (LGUs) accountable.
The BuB design had features that delivered positive gains towards citizen empowerment, including: (1) providing equal seats for CSOs in the Local Poverty Reduction Action Team (LPRAT), which is formally mandated to select proposed projects (in contrast to the pre-existing Local Development Councils (LDCs), which have only 25 percent CSO representation); (2) CSOs identified their LPRAT representatives themselves (as opposed to local chief executives choosing CSO representatives, as in the LDCs); and (3) LGUs were mandated to follow participatory requirements to receive additional funding. However, several aspects of the institutional design shifted power from local governments to the central government. This had a “centralizing effect”…
This study argues that because of these design problems, BuB fell short in achieving its main political reform agenda of empowering the grassroots—particularly in enabling downward accountability that could have enabled lasting pro-citizen power shifts. It did not empower local civil society and citizens to become a countervailing force vis-à-vis local politicians in fiscal governance. BuB is a case of a reform that provided a procedural mechanism for civil society input into national agency decisions but was unable to improve government responsiveness. It provided civil society with ‘voice’, but was constrained in enabling ‘teeth’. Jonathan Fox (2014) refers to “voice” as citizen inputs, feedback and action, while “teeth” refer to the capacity of the state to respond to voice.
Finally, the paper echoes the results of other studies which find that PB programs become successful when complemented by other institutional and state democratic capacity-building reforms and when they are part of a broader progressive change agenda. The BuB experience suggests that to bolster citizen oversight, it is essential to invest sufficient support and resources in citizen empowerment and in creating an enabling environment for citizen oversight….(More)”.
Opportunities and Challenges of Emerging Technologies for the Refugee System
Research Paper by Roya Pakzad: “Efforts are being made to use information and communications technologies (ICTs) to improve accountability in providing refugee aid. However, there remains a pressing need for increased accountability and transparency when designing and deploying humanitarian technologies. This paper outlines the challenges and opportunities of emerging technologies, such as machine learning and blockchain, in the refugee system.
The paper concludes by recommending the creation of quantifiable metrics for sharing information across both public and private initiatives; the creation of the equivalent of a “Hippocratic oath” for technologists working in the humanitarian field; the development of predictive early-warning systems for human rights abuses; and greater accountability among funders and technologists to ensure the sustainability and real-world value of humanitarian apps and other digital platforms….(More)”
The State of Open Data
Open Access Book edited by Tim Davies, Stephen B. Walker, Mor Rubinstein and Fernando Perini: “It’s been ten years since open data first broke onto the global stage. Over the past decade, thousands of programmes and projects around the world have worked to open data and use it to address a myriad of social and economic challenges. Meanwhile, issues related to data rights and privacy have moved to the centre of public and political discourse. As the open data movement enters a new phase in its evolution, shifting to target real-world problems and embed open data thinking into other existing or emerging communities of practice, big questions still remain. How will open data initiatives respond to new concerns about privacy, inclusion, and artificial intelligence? And what can we learn from the last decade in order to deliver impact where it is most needed?
The State of Open Data brings together over 60 authors from around the world to address these questions and to take stock of the real progress made to date across sectors and around the world, uncovering the issues that will shape the future of open data in the years to come….(More)”.
The Third Pillar: How Markets and the State Leave the Community Behind
Book by Raghuram Rajan: “….In The Third Pillar he offers up a magnificent big-picture framework for understanding how these three forces–the state, markets, and our communities–interact, why things begin to break down, and how we can find our way back to a more secure and stable plane.
The “third pillar” of the title is the community we live in. Economists all too often understand their field as the relationship between markets and the state, and they leave squishy social issues for other people. That’s not just myopic, Rajan argues; it’s dangerous. All economics is actually socioeconomics – all markets are embedded in a web of human relations, values and norms. As he shows, throughout history, technological phase shifts have ripped the market out of those old webs and led to violent backlashes, and to what we now call populism. Eventually, a new equilibrium is reached, but it can be ugly and messy, especially if done wrong.
Right now, we’re doing it wrong. As markets scale up, the state scales up with them, concentrating economic and political power in flourishing central hubs and leaving the periphery to decompose, figuratively and even literally. Instead, Rajan offers a way to rethink the relationship between the market and civil society and argues for a return to strengthening and empowering local communities as an antidote to growing despair and unrest. Rajan is not a doctrinaire conservative, so his ultimate argument, that decision-making has to be devolved to the grassroots or our democracy will continue to wither, is sure to be provocative. But even setting aside its solutions, The Third Pillar is a masterpiece of explication, a book that will be a classic of its kind for its offering of a wise, authoritative and humane explanation of the forces that have wrought such a sea change in our lives….(More)”.
AI & Global Governance: Robots Will Not Only Wage Future Wars but also Future Peace
Daanish Masood & Martin Waehlisch at the United Nations University: “At the United Nations, we have been exploring completely different scenarios for AI: its potential to be used for the noble purposes of peace and security. This could revolutionize how we prevent and resolve conflicts globally.
Two of the most promising areas are Machine Learning and Natural Language Processing. Machine Learning involves computer algorithms detecting patterns from data to learn how to make predictions and recommendations. Natural Language Processing involves computers learning to understand human languages.
At the UN Secretariat, our chief concern is with how these emerging technologies can be deployed for the good of humanity to de-escalate violence and increase international stability.
This endeavor has admirable precedent. During the Cold War, computer scientists used multilayered simulations to predict the scale and potential outcome of the arms race between the East and the West.
Since then, governments and international agencies have increasingly used computational models and advanced Machine Learning to try to understand recurrent conflict patterns and forecast moments of state fragility.
But two things have transformed the scope for progress in this field.
The first is the sheer volume of data now available from what people say and do online. The second is the game-changing growth in computational capacity that allows us to crunch unprecedented, previously inconceivable quantities of data with relative speed and ease.
So how can this help the United Nations build peace? Three ways come to mind.
Firstly, overcoming cultural and language barriers. By teaching computers to understand human language and the nuances of dialects, not only can we better link up what people write on social media to local contexts of conflict, we can also more methodically follow what people say on radio and TV. As part of the UN’s early warning efforts, this can help us detect hate speech in a place where the potential for conflict is high. This is crucial because the UN often works in countries where internet coverage is low, and where the spoken languages may not be well understood by many of its international staff.
Natural Language Processing algorithms can help to track and improve understanding of local debates, which might well be blind spots for the international community. If we combine such methods with Machine Learning chatbots, the UN could conduct large-scale digital focus groups with thousands of participants in real time, enabling different demographic segments in a country to voice their views on, say, a proposed peace deal – instantly testing public support, and indicating the chances of sustainability.
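The broadcast-monitoring idea above can be sketched in simplified form. This is an illustrative toy, not the UN's actual system: real early-warning pipelines rely on trained, dialect-aware NLP models, and the lexicon terms and threshold below are invented placeholders.

```python
# Minimal sketch of flagging transcribed broadcasts for human review.
# The LEXICON entries and the threshold are hypothetical placeholders.
from collections import Counter

LEXICON = {"vermin", "cockroach", "traitor"}  # invented example terms

def flag_transcript(text: str, threshold: int = 2) -> bool:
    """Return True if the transcript contains enough flagged terms to warrant review."""
    tokens = text.lower().split()
    hits = Counter(t for t in tokens if t.strip(".,!?") in LEXICON)
    return sum(hits.values()) >= threshold
```

In practice a system like this would only surface candidates for human analysts; a bare keyword match cannot capture the contextual nuance the passage describes.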
Secondly, anticipating the deeper drivers of conflict. We could combine new imaging techniques – whether satellites or drones – with automation. For instance, many parts of the world are experiencing severe groundwater withdrawal and water aquifer depletion. Water scarcity, in turn, drives conflicts and undermines stability in post-conflict environments, where violence around water access becomes more likely, along with large movements of people leaving newly arid areas.
One of the best predictors of water depletion is land subsidence or sinking, which can be measured by satellite and drone imagery. By combining these imaging techniques with Machine Learning, the UN can work in partnership with governments and local communities to anticipate future water conflicts and begin working proactively to reduce their likelihood.
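A minimal sketch of the subsidence-based flagging described above, under invented assumptions: real pipelines derive subsidence from satellite or drone imagery (e.g. InSAR) rather than a ready-made list of elevation readings, and the threshold here is a placeholder.

```python
# Hypothetical sketch: flag areas whose land is sinking fast enough to
# suggest aquifer depletion. All values and the threshold are invented.

def subsidence_rate(elevations_mm: list[float]) -> float:
    """Average change between consecutive elevation readings (negative = sinking)."""
    diffs = [b - a for a, b in zip(elevations_mm, elevations_mm[1:])]
    return sum(diffs) / len(diffs)

def at_risk(elevations_mm: list[float], threshold_mm: float = -10.0) -> bool:
    """True if the average subsidence rate meets or exceeds the flag threshold."""
    return subsidence_rate(elevations_mm) <= threshold_mm
```

A flag like this would feed the partnership work the passage describes, prompting proactive engagement with governments and local communities rather than any automated action.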
Thirdly, advancing decision making. In the work of peace and security, it is surprising how many consequential decisions are still made solely on the basis of intuition.
Yet complex decisions often need to navigate conflicting goals and undiscovered options, against a landscape of limited information and political preference. This is where we can use Deep Learning – where a network can absorb huge amounts of public data and test it against the real-world examples on which it is trained, while applying probabilistic modeling. This mathematical approach can help us to generate models of our uncertain, dynamic world with limited data.
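As a toy illustration of probabilistic modeling under limited data (not the UN's actual tooling), a conjugate Beta-Binomial update shows how a belief, say, that a ceasefire provision will hold, shifts as a handful of comparable historical cases are observed. All numbers are hypothetical.

```python
# Beta-Binomial conjugate update: start from a prior Beta(alpha, beta)
# and fold in observed successes/failures from comparable past cases.

def beta_update(alpha: float, beta: float, successes: int, failures: int):
    """Posterior Beta parameters after observing binary outcomes."""
    return alpha + successes, beta + failures

def expected_probability(alpha: float, beta: float) -> float:
    """Posterior mean: the updated estimate that the provision holds."""
    return alpha / (alpha + beta)
```

Starting from a uniform Beta(1, 1) prior and observing three provisions that held and one that failed yields a posterior mean of 2/3 — an evidence-driven input to, not a replacement for, the human judgment the passage emphasizes.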
With better data, we can eventually make better predictions to guide complex decisions. Future senior peace envoys charged with mediating a conflict would benefit from such advances to stress test elements of a peace agreement. Of course, human decision-making will remain crucial, but would be informed by more evidence-driven robust analytical tools….(More)”.
Digital Data for Development
LinkedIn: “The World Bank Group and LinkedIn share a commitment to helping workers around the world access opportunities that make good use of their talents and skills. The two organizations have come together to identify new ways that data from LinkedIn can help inform policymakers who seek to boost employment and grow their economies.
This site offers data and automated visuals of industries where LinkedIn data is comprehensive enough to provide an emerging picture. The data complements a wealth of official sources and can offer a more real-time view in some areas, particularly for new, rapidly changing digital and technology industries.
The data shared in the first phase of this collaboration focuses on 100+ countries with at least 100,000 LinkedIn members each, distributed across 148 industries and 50,000 skills categories. In the near term, it will help World Bank Group teams and government partners pinpoint ways that developing countries could stimulate growth and expand opportunity, especially as disruptive technologies reshape the economic landscape. As LinkedIn’s membership and digital platforms continue to grow in developing countries, this collaboration will assess the possibility to expand the sectors and countries covered in the next annual update.
This site offers downloadable data, visualizations, and an expanding body of insights and joint research from the World Bank Group and LinkedIn. The data is being made accessible as a public good, though it will be most useful for policy analysts, economists, and researchers….(More)”.
Facebook’s AI team maps the whole population of Africa
Devin Coldewey at TechCrunch: “A new map of nearly all of Africa shows exactly where the continent’s 1.3 billion people live, down to the meter, which could help everyone from local governments to aid organizations.
It’s not exactly that there was some mystery about where people live, but the degree of precision matters. You may know that a million people live in a given region, and that about half are in the bigger city and another quarter in assorted towns. But that leaves hundreds of thousands only accounted for in the vaguest way.
Fortunately, you can always inspect satellite imagery and pick out the spots where small villages and isolated houses and communities are located. The only problem is that Africa is big. Really big. Manually labeling the satellite imagery even from a single mid-sized country like Gabon or Malawi would take a huge amount of time and effort. And for many applications of the data, such as coordinating the response to a natural disaster or distributing vaccinations, time lost is lives lost.
Better to get it all done at once then, right? That’s the idea behind Facebook’s Population Density Maps project, which had already mapped several countries over the last couple of years before the decision was made to take on the entire African continent….
“The maps from Facebook ensure we focus our volunteers’ time and resources on the places they’re most needed, improving the efficacy of our programs,” said Tyler Radford, executive director of the Humanitarian OpenStreetMap Team, one of the project’s partners.
The core idea is straightforward: Match census data (how many people live in a region) with structure data derived from satellite imagery to get a much better idea of where those people are located.
“With just the census data, the best you can do is assume that people live everywhere in the district – buildings, fields, and forests alike,” said Facebook engineer James Gill. “But once you know the building locations, you can skip the fields and forests and only allocate the population to the buildings. This gives you very detailed 30 meter by 30 meter population maps.”
That’s several times more accurate than any extant population map of this size. The analysis is done by a machine learning agent trained on OpenStreetMap data from all over the world, where people have labeled and outlined buildings and other features.
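The allocation step Gill describes can be sketched as follows. This is a simplified illustration, not Facebook's code: it spreads a district's census total uniformly over cells flagged as containing buildings, whereas the real project works at 30-meter resolution with building footprints detected by machine learning.

```python
# Sketch of census reallocation: assign a district's population only to
# grid cells that contain buildings, skipping fields and forests.
# The example grid and population figure are invented.

def allocate_population(census_total: int, building_grid: list[list[bool]]):
    """Return a grid of per-cell population estimates."""
    n_buildings = sum(cell for row in building_grid for cell in row)
    per_cell = census_total / n_buildings if n_buildings else 0.0
    return [[per_cell if cell else 0.0 for cell in row] for row in building_grid]
```

For a toy 2x2 district of 300 people with three building cells, each building cell receives 100 people and the empty cell receives none — exactly the refinement over "assume people live everywhere" that the quote describes.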