Iwa Salami at The Conversation: “At the launch of bitcoin in 2009, the potential of the underlying technology, the blockchain, was not fully appreciated.
What has not been fully exploited is the unique features of blockchain technology that can improve the lives of people and businesses. These include the fact that it is open-source software. This makes its source code legally and freely available to end users, who can use it to create new products and services. Another significant feature is that it is decentralised, which democratises the operation of the services built on it: control isn’t in the hands of an individual or a single entity but is shared by all those connected to the network.
In addition, it enables peer-to-peer interaction between those connected to the network. This is key, as it enables parties to transact directly without intermediaries or other third parties. Finally, it has built-in security: data stored on it is immutable and cannot easily be changed, and new data can be added only after it is verified by the rest of the network.
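The immutability the excerpt describes comes from hash-chaining: each block commits to the hash of the one before it, so altering any earlier record invalidates every later block. A minimal sketch (the function names and structure here are illustrative, not any specific blockchain's implementation):

```python
import hashlib
import json


def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


def build_chain(records):
    """Link records so each block commits to everything before it."""
    chain, prev = [], "0" * 64  # genesis placeholder
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain


def is_valid(chain) -> bool:
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Tampering with a single early record makes `is_valid` fail for the whole chain, which is why changes must instead be appended as new, network-verified blocks.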
Unfortunately, bitcoin, the project that introduced blockchain technology, has hogged the limelight, diverting attention from the technology’s underlying potential benefits….
But this is slowly changing.
A few companies have begun showcasing blockchain capabilities to various African countries. Unlike most other cryptocurrency blockchains, which focus on private-sector use in developed regions such as Europe and North America, their approach has been to target governments and public institutions in the developing world.
In April the Ethiopian government confirmed that it had signed a deal to create a national database of student and teacher IDs using a decentralised digital identity solution. The deal involves providing IDs for 5 million students across 3,500 schools, which will be used to store educational records.
This is the largest blockchain deal ever to be signed by a government and has been making waves in the crypto-asset industry.
I believe that the deal marks a watershed moment for the use of blockchain and the crypto-asset industry, and for African economies because it offers the promise of blockchain being used for real socio-economic change. The deal means that blockchain technology will be used to provide digital identity to millions of Ethiopians. Digital identity – missing in most African countries – is the first step to real financial inclusion, which in turn has been shown to carry a host of benefits….(More)”.
Book by Karen Wendt: “Today, it has become strikingly obvious that risk, return and volatility alone no longer describe the business environment. Businesses must deal with volatility, uncertainty, complexity and ambiguity (VUCA): that requires new qualities, competencies and frameworks, and it demands a new mindset for investment, funding and financing in a VUCA environment. This book builds on a new megatrend beyond resilience, called anti-fragility. We have had the black swan (the financial crisis) and the red swan (COVID) – the Bank for International Settlements is preparing for regenerative capitalism and blockchain-based analysis of financial streams, and is aiming to prevent the “green swan” – a climate crisis that triggers the next lockdown. In light of the UN’s 17 Sustainable Development Goals, what is required is Theories of Change.
Written by experts working in the fields of sustainable finance, impact investing, development finance, carbon divesting, innovation, scaling finance, impact entrepreneurship, social stock exchanges, alternative currencies, Initial Coin Offerings (ICOs), ledger technologies, civil action, co-creation, impact management, deep learning and transformation leadership, the book begins by analysing existing Theories of Change frameworks from various disciplines and creating a new integrated model – the meta-framework. In turn, it presents insights on creating and using Theories of Change to redirect investment capital to sustainable companies while implementing the Sustainable Development Goals and the Paris Climate Agreement. Further, it discusses the perspective of planetary boundaries as defined by the Stockholm Resilience Institute, and investigates various aspects of systems, organizations, entrepreneurship, investment and finance that are closely tied to the mission ingrained in the Theory of Change. As it demonstrates, solutions that ensure the parity of profit, people and planet through dynamic change can effectively address the needs of entrepreneurs and business. By exploring these concepts and their application, the book helps create and shape new markets and opportunities….(More)”.
Report by PARIS21 and the Mo Ibrahim Foundation (MIF): “National statistics are an essential component of policymaking: they provide the evidence required to design policies that address the needs of citizens, to monitor results and hold governments to account. Data and policy are closely linked. As Mo Ibrahim puts it: “without data, governments drive blind”. However, there is evidence that the capacity of African governments for data-driven policymaking remains limited by a wide data-policy gap.
What is the data-policy gap? On the data side, statistical capacity across the continent has improved in recent decades. However, it remains low compared to other world regions and is hindered by several challenges. African national statistical offices (NSOs) often lack adequate financial and human resources as well as the capacity to provide accessible and available data. On the policy side, data literacy as well as a culture of placing data first in policy design and monitoring are still not widespread. Thus, investing in the basic building blocks of national statistics, such as civil registration, is often not a key priority.
At the same time, international development frameworks, such as the United Nations 2030 Agenda for Sustainable Development and the African Union Agenda 2063, require that every signatory country produce and use high-quality, timely and disaggregated data in order to shape development policies that leave no one behind and to fulfil reporting commitments.
Also, the new data ecosystem linked to digital technologies is providing an explosion of data sourced from non-state providers. Within this changing data landscape, African NSOs, like those in many other parts of the world, are confronted with a new data stewardship role. This will add further pressure on the capacity of NSOs, and presents additional challenges in terms of navigating issues of governance and use…
Recommendations, as part of a six-point roadmap for bridging the data-policy gap, include:
Creating a statistical capacity strategy to raise funds
Connecting to knowledge banks to hire and retain talent
Building good narratives for better data use
Recognising the power of foundational data
Strengthening statistical laws to harness the data revolution
Encouraging data use in policy design and implementation…(More)”
Essay by Kevin Starr: “Systems change! Just saying the words aloud makes me feel like one of the cognoscenti, one of the elite who has transcended the ways of old-school philanthropy. Those two words capture our aspirations of lasting impact at scale: systems are big, and if you manage to change them, they’ll keep spinning out impact forever. Why would you want to do anything else?
There’s a problem, though. “Systems analysis” is an elegant and useful way to think about problems and get ideas for solutions, but “systems change” is accelerating toward buzzword purgatory. It’s so sexy that everyone wants to use it for everything. …
But when you rummage through the growing literature on systems change thinking, there are in fact a few recurring themes. One is the need to tackle the root causes of any problem you take on. Another is that a broad coalition must be assembled ASAP. Finally, the most salient theme is the notion that the systems involved are transformed as a result of the work (although in many of the examples I read about, it’s not articulated clearly just what system is being changed).
Taken individually or as a whole, these themes point to some of the ways in which systems change is a less-than-ideal paradigm for the work we need to get done:
1. It’s too hard to know to what degree systems change is or isn’t happening. It may be the case that “not everything that matters can be counted,” but most of the stuff that matters can, and it’s hard to get better at something if you’re unable to measure it. But these words of a so-called expert on systems change measurement are typical of what I’ve seen in the literature: “Measuring systems change is about detecting patterns in the connections between the parts. It is about qualitative changes in the structure of the system, about its adaptiveness and resilience, about synergies emerging from collective efforts—and more…”
Like I said, it’s too hard to know what is or isn’t happening.
2. “Root cause” thinking can—paradoxically—bog down progress. “Root cause” analysis is a common feature of most systems change discussions, and it’s a wonderful tool to generate ideas and avoid unintended consequences. However, broad efforts to tackle all of a problem’s root causes can turn anything into a complicated, hard-to-replicate project. It can also make things look so overwhelming as to result in a kind of paralysis. And however successful a systems change effort might be, that complication leaves you stuck with a one-off project….(More)”.
Book by Dan Breznitz: “Across the world, cities and regions have wasted trillions of dollars blindly copying the Silicon Valley model of growth creation. We have lived with this system for decades, and the result is clear: a small number of regions and cities are at the top of the high-tech industry, but many more are fighting a losing battle to retain economic dynamism. But, as this book details, there are other models for innovation-based growth that don’t rely on a flourishing high-tech industry. Breznitz argues that the purveyors of the dominant ideas on innovation have a feeble understanding of the big picture on global production and innovation.
They conflate innovation with invention and suffer from techno-fetishism. In their devotion to start-ups, they refuse to admit that the real obstacle to growth for most cities is the overwhelming power of the real hubs, which siphon up vast amounts of talent and money. Communities waste time, money, and energy pursuing this road to nowhere. Instead, Breznitz proposes that communities focus on where they fit within the four stages in the global production process. Success lies in understanding the changed structure of the global system of production and then using those insights to enable communities to recognize their own advantages, which in turn allows them to foster surprising forms of specialized innovation. All localities have certain advantages relative to at least one stage of the global production process, and the trick is in recognizing it….(More)”.
Blog By Hamed Alemohammad at Radiant Earth Foundation: “Labeling satellite imagery is the process of applying tags to scenes to provide context or confirm information. These labeled training datasets form the basis for machine learning (ML) algorithms. The labeling undertaking (in many cases) requires humans to meticulously and manually assign captions to the data, allowing the model to learn patterns and estimate them for other observations.
For a wide range of Earth observation applications, training data labels can be generated by annotating satellite imagery. Images can be classified to identify the entire image as a class (e.g., water body) or for specific objects within the satellite image. However, annotation tasks can only identify features observable in the imagery. For example, with Sentinel-2 imagery at 10-meter spatial resolution, one cannot detect more detailed features of interest, such as crop types, but one can distinguish large croplands from other land cover classes.
Human error in labeling is inevitable and results in uncertainties and errors in the final label. As a result, it’s best practice to examine images multiple times and then assign a majority or consensus label. In general, significant human resources and financial investment are needed to annotate imagery at large scales.
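The majority-label practice described above is straightforward to automate. A minimal sketch (the function names, the 0.5 agreement threshold, and the `None`-for-review convention are illustrative assumptions, not LandCoverNet's actual pipeline):

```python
from collections import Counter


def consensus_label(annotations, min_agreement=0.5):
    """Majority vote across annotators; None if no label clears the threshold."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    if votes / len(annotations) > min_agreement:
        return label
    return None  # no consensus: flag the image for expert review


def consensus_dataset(per_image_annotations):
    """Resolve each image's repeated annotations to a single training label."""
    return {img: consensus_label(labels)
            for img, labels in per_image_annotations.items()}
```

In practice one might weight votes by each annotator's historical accuracy rather than counting them equally, but a simple majority already filters out much of the one-off human error the excerpt describes.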
In 2018, we identified the need for a geographically diverse land cover classification training dataset that required human annotation and validation of labels. We proposed to Schmidt Futures a project to generate such a dataset to advance land cover classification globally. In this blog post, we discuss what we’ve learned developing LandCoverNet, including the keys to generating good quality labels in a socially responsible manner….(More)”.
Martin Brandt and Kjeld Rasmussen in The Conversation: “The possibility that vegetation cover in semi-arid and arid areas was retreating has long been an issue of international concern. In the 1930s it was first theorized that the Sahara was expanding and woody vegetation was on the retreat. In the 1970s, spurred by the “Sahel drought”, focus was on the threat of “desertification”, caused by human overuse and/or climate change. In recent decades, the potential impact of climate change on the vegetation has been the main concern, along with the feedback of vegetation on the climate, associated with the role of the vegetation in the global carbon cycle.
Using high-resolution satellite data and machine-learning techniques at supercomputing facilities, we have now been able to map billions of individual trees and shrubs in West Africa. The goal is to better understand the real state of vegetation coverage and evolution in arid and semi-arid areas.
Finding a shrub in the desert – from space
Since the 1970s, satellite data have been used extensively to map and monitor vegetation in semi-arid areas worldwide. Images are available in “high” spatial resolution (NASA’s Landsat satellites and the European Spot and Sentinel satellites) and “medium or low” spatial resolution (NOAA AVHRR and MODIS).
To accurately analyse vegetation cover at continental or global scale, it is necessary to use the highest-resolution images available – with a resolution of 1 metre or less – and up until now the costs of acquiring and analysing the data have been prohibitive. Consequently, most studies have relied on moderate- to low-resolution data. This has not allowed for the identification of individual trees, and therefore these studies only yield aggregate estimates of vegetation cover and productivity, mixing herbaceous and woody vegetation.
In a new study covering a large part of the semi-arid Sahara-Sahel-Sudanian zone of West Africa, published in Nature in October 2020, our international group of researchers was able to overcome these limitations. By combining an immense amount of high-resolution satellite data, advanced computing capacities, machine-learning techniques and extensive field data gathered over decades, we were able to identify individual trees and shrubs with a crown area of more than 3 m² with great accuracy. The result is a database of 1.8 billion trees in the region studied, available to all interested….(More)”
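The study itself used deep-learning segmentation, but its final step — keeping only detections whose crown area exceeds 3 m² — can be illustrated with a plain connected-components pass over a binary canopy mask. This is a toy sketch, assuming 0.5 m pixels (0.25 m² each) and 4-connectivity; none of it reflects the authors' actual code:

```python
from collections import deque


def crown_areas(mask, pixel_area_m2=0.25, min_area_m2=3.0):
    """Group adjacent canopy pixels into crowns and keep those above the cutoff.

    mask: 2-D list of 0/1 values, 1 = canopy pixel.
    Returns the area (m²) of each crown that passes the threshold.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    crowns = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS over 4-connected canopy pixels forming one crown
                queue, pixels = deque([(r, c)]), 0
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = pixels * pixel_area_m2
                if area >= min_area_m2:
                    crowns.append(area)
    return crowns
```

Isolated bright pixels — noise rather than shrubs — fall below the 3 m² cutoff and are discarded, which is one reason sub-metre resolution matters for counting individual trees at all.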
Paper by Keren Weitzberg et al: “Identification technologies like biometrics have long been associated with securitisation, coercion and surveillance but have also, in recent years, become constitutive of a politics of empowerment, particularly in contexts of international aid. Aid organisations tend to see digital identification technologies as tools of recognition and inclusion rather than oppressive forms of monitoring, tracking and top-down control. In addition, practices that many critical scholars describe as aiding surveillance are often experienced differently by humanitarian subjects. This commentary examines the fraught questions this raises for scholars of international aid, surveillance studies and critical data studies. We put forward a research agenda that tackles head-on how critical theories of data and society can better account for the ambivalent dynamics of ‘power over’ and ‘power to’ that digital aid interventions instantiate….(More)”.
Report by the World Bank: “Data has become ubiquitous—with global data flows increasing one thousand times over the last 20 years. What is not always appreciated is the extent to which data offers the potential to improve people’s lives, including the poor and those living in lower-income countries.
Consider this example. The Indian state of Odisha is susceptible to devastating cyclones. When disaster struck in 1999, as many as 10,000 people lost their lives. This tragedy prompted the Odisha State Disaster Management Authority to invest heavily in weather forecast data. When another, similarly powerful storm struck in 2013, the capture and broadcast of early warning data allowed nearly one million people to be evacuated to safety, slashing the death toll to just 38.
Data’s direct benefits to lives and livelihoods can come not only from government initiatives, as in Odisha, but also through a plethora of new private business models. Many of us are familiar with on-demand ride-hailing platforms that have revolutionized public transportation in major cities. In Nigeria, the platform business Hello Tractor has adapted the ride-hailing concept, allowing farmers to rent agricultural equipment on demand and increase their agricultural productivity.
Furthermore, civil society organizations across the world are using crowdsourced data collected from citizens as a way of holding governments accountable. For example, the platform ForestWatchers allows people to directly report deforestation in the Amazon. And in Egypt, the HarassMap tool allows women to report the location of sexual harassment incidents.
Despite all these innovative uses, data still remain grossly under-utilized, leaving much of the economic and social value of data untapped. Collecting and using data for a single purpose without making it available to others for reuse is a waste of resources. By reusing and combining data from both public and private sources, and applying modern analytical techniques, merged data sets can cover more people, more precisely, and more frequently. Leveraging these data synergies can bring real benefits….(More)”.
17 Rooms aims to advance problem-solving within and across all the SDGs. As a partnership between Brookings and The Rockefeller Foundation, the first version of the undertaking was convened in September 2018, as a single meeting on the eve of the U.N. General Assembly in New York. The initiative has since evolved into a two-pronged effort: an annual flagship process focused on global-scale policy issues and a community-level process in which local actors are taking 17 Rooms methods into their own hands.
In practical terms, 17 Rooms consists of participants from disparate specialist communities each meeting in their own “Rooms,” or working groups, one for each SDG. Each Room is tasked with a common assignment of identifying cooperative actions they can take over the subsequent 12-18 months. Emerging ideas are then shared across Rooms to spot opportunities for collaboration.
The initiative continues to evolve through ongoing experimentation, so methods are not overly fixed, but three design principles help define key elements of the 17 Rooms mindset:
All SDGs get a seat at the table. Insights, participants, and priorities are valued equally across all the specialist communities focused on individual dimensions of the SDGs
Take a next step, not the perfect step. The process encourages participants to identify—and collaborate on—actions that are “big enough to matter, but small enough to get done”
Conversations, not presentations. Discussions are structured around collaboration and peer-learning, aiming to focus on what’s best for an issue, not any individual organization
These principles appear to contribute to three distinct forms of value: the advancement of action, the generation of insights, and a strengthened sense of community among participants….(More)”.