How Blockchain Technology Is Helping Syrian Refugees


Siobhan Kenna at HuffPost: “Azraq Refugee Camp is a 15 kilometre-wide sea of corrugated aluminium houses in the heart of the vast Jordanian desert. The people who live there are detained by the barbed wire that surrounds the entire complex, which is located an hour and a half from the country’s capital city, Amman….

Within the strange environment of the camp and an indistinct future lies a bastion of normalcy for these people — the supermarket.

In the refugee camp, though, the supermarket is much more than a place to shop or purchase food: here it is a vital fibre in the social fabric of a makeshift community….

It’s unbelievable to think, then, that a place so remote and isolated could be home to a world-first initiative involving the emerging Blockchain technology.

The Building Blocks Project is the brainchild of Houman Haddad, Regional CBT Advisor for the United Nations World Food Programme (WFP). The project aims to make cash-based transactions between the WFP and the beneficiary faster, cheaper and more secure.

Prior to the project’s launch at the Azraq Refugee Camp in Jordan in May 2017, it was first trialled in Pakistan and in King Abdullah Park Refugee Camp as a means of testing the robustness of the technology. On May 31st, 2017, the pilot in Azraq was extended indefinitely.

Traditionally, payments are made to refugees from the WFP via a third-party financial service provider. The entity could be a bank, mobile money company or something similar, and the WFP instructs the financial service provider to credit some of the funds to the refugee so they can spend it at the supermarket or elsewhere.

On top of that, the WFP also needs to transfer the funds to the third party so they can actually pay the beneficiary. Sounds complicated, right? Well, the Building Blocks Project aims to eliminate reliance on a third party, and with this comes plenty of savings.

“So, what we have done is essentially replaced that financial service provider with the Blockchain,” Houman Haddad told HuffPost Australia.

“So instead of having someone else create virtual accounts and credit functions and so on and so forth, we create the virtual account on the Blockchain for beneficiaries, we upload entitlements to them, and currently in the supermarket where they go, the supermarket requests an authorisation code for transactions from the Blockchain as opposed to the bank….(More)”.
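The flow Haddad describes (on-chain virtual accounts, uploaded entitlements, and a per-purchase authorisation code requested by the supermarket) can be sketched in a few lines. The toy model below is only an illustration of that flow; the class and method names are assumptions, and the WFP's production system, built on a permissioned Ethereum-based blockchain, is considerably more involved.

```python
# Illustrative sketch only: an in-memory model of the voucher flow described
# above (virtual accounts, uploaded entitlements, per-purchase authorisation).
# All names are hypothetical and do not reflect the WFP's actual system.

import uuid


class VoucherLedger:
    """Toy ledger standing in for the on-chain entitlement records."""

    def __init__(self):
        self.accounts = {}        # beneficiary_id -> balance in voucher units
        self.transactions = []    # append-only log, akin to blockchain entries

    def create_account(self, beneficiary_id):
        self.accounts.setdefault(beneficiary_id, 0)

    def upload_entitlement(self, beneficiary_id, amount):
        """WFP credits a food entitlement directly to the beneficiary."""
        self.accounts[beneficiary_id] += amount
        self.transactions.append(("entitlement", beneficiary_id, amount))

    def authorise_purchase(self, beneficiary_id, amount):
        """Supermarket requests an authorisation code instead of billing a bank."""
        if self.accounts.get(beneficiary_id, 0) < amount:
            raise ValueError("insufficient entitlement")
        self.accounts[beneficiary_id] -= amount
        code = uuid.uuid4().hex[:8]
        self.transactions.append(("purchase", beneficiary_id, amount, code))
        return code


ledger = VoucherLedger()
ledger.create_account("refugee-001")
ledger.upload_entitlement("refugee-001", 50)          # e.g. 50 voucher units
print(ledger.authorise_purchase("refugee-001", 12))   # supermarket checkout
```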

India Social: How Social Media Is Leading The Charge And Changing The Country


Excerpt from Ankit Lal’s book ‘India Social’, on “How social media showed its unique power of crowdsourcing during the Chennai floods…

One ingenious resource that circulated widely during the floods was a crowdsourced map of inundated roads in the city, put together by engineer and information designer Arun Ganesh. Over 2,500 flooded roads were added to the map via social media.

The Chennai floods were a superb example of the power of collective effort. Users across social media channels came together to offer shelter, food, transport, and even a place for people to charge their phones. SOS messages asking ground teams to rescue stranded family members also went back and forth, and there were many who offered their homes and offices to those who were stranded.

Perhaps the simplest yet most effective tool during the floods was the website chennairains.org.

It began as a simple Google spreadsheet. Sowmya Rao was trying to help her uncle and aunt figure out whether it was safe to stay in their house in suburban Chennai or move to a friend’s place. When she found out that the area they lived in was under severe risk of flooding, she relayed the message to them. But she felt helpless about the countless others who were facing the same plight as her relatives. Acting on a suggestion by another Twitter user, she created the Google spreadsheet that went on to become the website chennairains.org.

The idea was simple: crowdsource details about those who could offer shelter, and pass it on to those who were tweeting about rising waters. A hastily put-together spreadsheet soon blossomed into a multi-faceted, volunteer-driven, highly energetic online movement to help Chennai, and ended up being used by the general public, police officers, government officials and celebrities alike….(More)”.

Open Data in Developing Economies: Toward Building an Evidence Base on What Works and How


New book by Stefaan Verhulst and Andrew Young: “Recent years have witnessed considerable speculation about the potential of open data to bring about wide-scale transformation. The bulk of existing evidence about the impact of open data, however, focuses on high-income countries. Much less is known about open data’s role and value in low- and middle-income countries, and more generally about its possible contributions to economic and social development.

Open Data in Developing Economies features in-depth case studies on how open data is having an impact across the developing world, from an agriculture initiative in Colombia to data-driven healthcare projects in Uganda and South Africa to crisis response in Nepal. The analysis built on these case studies aims to create actionable intelligence regarding:

(a) the conditions under which open data is most (and least) effective in development, presented in the form of a Periodic Table of Open Data;

(b) strategies to maximize the positive contributions of open data to development; and

(c) the means for limiting open data’s harms on developing countries.

Endorsements:

“An empirically grounded assessment that helps us move beyond the hype that greater access to information can improve the lives of people and outlines the enabling factors for open data to be leveraged for development.”-Ania Calderon, Executive Director, International Open Data Charter

“This book is compulsory reading for practitioners, researchers and decision-makers exploring how to harness open data for achieving development outcomes. In an intuitive and compelling way, it provides valuable recommendations and critical reflections to anyone working to share the benefits of an increasingly networked and data-driven society.”-Fernando Perini, Coordinator of the Open Data for Development (OD4D) Network, International Development Research Centre, Canada

Download full-text PDF – See also: http://odimpact.org/

Somaliland’s voting technology shows how Africa can lead the world


Calestous Juma in The Conversation: “Africa has become a testing ground for technological leapfrogging. This is a process that involves skipping stages and moving rapidly to the frontiers of innovation.

Technological leapfrogging in Africa has, so far, focused on economic transformation and the improvement of basic services. Drones are a good example: they’re used in the continent’s health services and in agriculture. In South Africa, robots play a crucial role in mining.

Now, in a remarkable extension of technological leapfrogging, Somaliland has become the first country in the world to use iris recognition in a presidential election. This means that a breakaway republic seeking international recognition will have the world’s most sophisticated voting register.

Democracy and tech in Africa

Somaliland’s shift to such advanced voting technology emerged from a lack of trust because of problems with the 2008 elections. For instance, names were duplicated in the voter register because of pressure from local elders. These fraudulent activities and other logistical issues threatened to undermine Somaliland’s good standing in the international community.

Of course, Somaliland is not the only country in Africa to experience problems with its election processes. Others, like Kenya, have also turned to technology to try and deal with their challenges. This is important. Being able to hold free, fair and credible elections is critical in democratic transitions. The lack of trust in the electoral process remains a key source of political tension and violence.

Technology can help – and Somaliland is set to become a regional powerhouse in the production and deployment of the technological know-how that underpins electronic voting.

So how did Somaliland reach this point? And what lessons do its experiences hold for other countries?…(More)”.

Crowded Cities


Crowded Cities: “In the Netherlands every year more than 6 billion cigarette filters are tossed onto the street. It’s easy to toss them, but it’s not easy to pick them up. Since each filter takes 12 years to degrade, we realised it’s time to take action.

Through observation we concluded that crows, the smartest animals around us, can reach any spot in the city. What if crows could bring cigarette filters to one of our Crowbars to exchange the filter for food? This is how our adventure started.

The Crowbar

Cigarette filters: you find them in the park next to you, in the grass, in dirty ditches and under your shoes. What if we could find a way to collect these butts from all corners of our city and precious parks? With crows, which have become perfectly adapted to city life, we can! By training crows to recognize and pick up cigarette filters we can solve this tenacious problem of city pollution. It is the Crowbar that does the training for us and gives out food as a reward….(More)”.

The UN is using ethereum’s technology to fund food for thousands of refugees


Joon Ian Wong at Quartz: “The United Nations agency in charge of food aid—often billed as the largest aid organization in the world—is betting that an ethereum-based blockchain technology could be the key to delivering aid efficiently to refugees while slashing the costs of doing so.

The agency, known as the World Food Programme (WFP), is the rare example of an organization that has delivered tangible results from its blockchain experiments—unlike the big banks that have experimented with the technology for years.

The WFP says it has transferred $1.4 million in food vouchers to 10,500 Syrian refugees in Jordan since May, and it plans to expand. “We need to bring the project from the current capacity to many, many, more,” says Houman Haddad, the WFP executive leading the project. “By that I mean 1 million transactions per day.”

Haddad, in Mexico to speak at the Ethereum Foundation’s annual developer conference, hopes to expand the UN project, called Building Blocks, from providing payment vouchers for one camp to providing vouchers for four camps, covering 100,000 people, by next January. He hopes to attract developers and partners to the UN project from his conference appearance, organized by the foundation, which acts as a steward for the technical development of the ethereum protocol….

The problem of internal bureaucratic warfare, of course, isn’t limited to the UN. Paul Currion, who co-founded Disberse, another blockchain-based aid delivery platform, lauds the speediness of the WFP effort. “It’s fantastic for proving this can work in the field,” he says. But “we’ve found that the hard work is integrating blockchain technology into existing organizational processes—we can’t just hand people a ticket and expect them to get on the high-speed blockchain train; we also need to drive with them to the station,” he says….(More)”.


Linux Foundation Debuts Community Data License Agreement


Press Release: “The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today announced the Community Data License Agreement (CDLA) family of open data agreements. In an era of expansive and often underused data, the CDLA licenses are an effort to define a licensing framework to support collaborative communities built around curating and sharing “open” data.

Inspired by the collaborative software development models of open source software, the CDLA licenses are designed to enable individuals and organizations of all types to share data as easily as they currently share open source software code. Soundly drafted licensing models can help people form communities to assemble, curate and maintain vast amounts of data, measured in petabytes and exabytes, to bring new value to communities of all types, to build new business opportunities and to power new applications that promise to enhance safety and services.

The growth of big data analytics, machine learning and artificial intelligence (AI) technologies has allowed people to extract unprecedented levels of insight from data. Now the challenge is to assemble the critical mass of data for those tools to analyze. The CDLA licenses are designed to help governments, academic institutions, businesses and other organizations open up and share data, with the goal of creating communities that curate and share data openly.

For instance, if automakers, suppliers and civil infrastructure services can share data, they may be able to improve safety, decrease energy consumption and improve predictive maintenance. Self-driving cars are heavily dependent on AI systems for navigation, and need massive volumes of data to function properly. Once on the road, they can generate nearly a gigabyte of data every second. For the average car, that means two petabytes of sensor, audio, video and other data each year.
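As a rough sanity check, the two figures quoted above (nearly a gigabyte per second and about two petabytes per year) are consistent if the average car is assumed to drive roughly an hour and a half per day. The snippet below is only a back-of-envelope calculation under that assumption; the units and driving time are not stated in the press release.

```python
# Back-of-envelope check of the figures quoted above.
# Assumptions (not from the press release): decimal units (1 PB = 1,000,000 GB),
# a steady 1 GB/s of sensor, audio and video data while driving, 365 days/year.
GB_PER_SECOND = 1.0
PETABYTES_PER_YEAR = 2.0

driving_seconds_per_year = PETABYTES_PER_YEAR * 1_000_000 / GB_PER_SECOND
hours_per_day = driving_seconds_per_year / 3600 / 365
print(f"Implied driving time: {hours_per_day:.1f} hours per day")  # roughly 1.5
```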

Similarly, climate modeling can integrate measurements captured by government agencies with simulation data from other organizations and then use machine learning systems to look for patterns in the information. It’s estimated that a single model can yield a petabyte of data, a volume that challenges standard computer algorithms, but is useful for machine learning systems. This knowledge may help improve agriculture or aid in studying extreme weather patterns.

And if government agencies share aggregated data on building permits, school enrollment figures, and sewer and water usage, their citizens benefit from the ability of commercial entities to anticipate their future needs and respond with infrastructure and facilities that arrive in anticipation of citizens’ demands.

“An open data license is essential for the frictionless sharing of the data that powers both critical technologies and societal benefits,” said Jim Zemlin, Executive Director of The Linux Foundation. “The success of open source software provides a powerful example of what can be accomplished when people come together around a resource and advance it for the common good. The CDLA licenses are a key step in that direction and will encourage the continued growth of applications and infrastructure.”…(More)”.

How “Big Data” Went Bust


The problem with “big data” is not that data is bad. It’s not even that big data is bad: Applied carefully, massive data sets can reveal important trends that would otherwise go undetected. It’s the fetishization of data, and its uncritical use, that tends to lead to disaster, as Julia Rose West recently wrote for Slate. And that’s what “big data,” as a catchphrase, came to represent.

By its nature, big data is hard to interpret. When you’re collecting billions of data points—clicks or cursor positions on a website; turns of a turnstile in a large public space; hourly wind speed observations from around the world; tweets—the provenance of any given data point is obscured. This in turn means that seemingly high-level trends might turn out to be artifacts of problems in the data or methodology at the most granular level possible. But perhaps the bigger problem is that the data you have are usually only a proxy for what you really want to know. Big data doesn’t solve that problem—it magnifies it….

Aside from swearing off data and reverting to anecdote and intuition, there are at least two viable ways to deal with the problems that arise from the imperfect relationship between a data set and the real-world outcome you’re trying to measure or predict.

One is, in short: moar data. This has long been Facebook’s approach. When it became apparent that users’ “likes” were a flawed proxy for what they actually wanted to see more of in their feeds, the company responded by adding more and more proxies to its model. It began measuring other things, like the amount of time they spent looking at a post in their feed, the amount of time they spent reading a story they had clicked on, and whether they hit “like” before or after they had read the piece. When Facebook’s engineers had gone as far as they could in weighting and optimizing those metrics, they found that users were still unsatisfied in important ways. So the company added yet more metrics to the sauce: It started running huge user-survey panels, added new reaction emojis by which users could convey more nuanced sentiments, and started using A.I. to detect clickbait-y language in posts by pages and publishers. The company knows none of these proxies are perfect. But by constantly adding more of them to the mix, it can theoretically edge ever closer to an algorithm that delivers to users the posts that they most want to see.
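The approach described above amounts to ranking feed items by a weighted blend of many imperfect proxy signals. The sketch below illustrates that idea only in the abstract; the signal names, weights and data are invented for illustration and are not Facebook's actual features or ranking model.

```python
# Hypothetical sketch of the "more proxies" idea: rank items by a weighted
# combination of several noisy engagement signals. All values are made up.
FEED_ITEMS = [
    {"id": "post-a", "liked": 1, "dwell_seconds": 4, "read_seconds": 0, "survey_score": 0.2},
    {"id": "post-b", "liked": 0, "dwell_seconds": 35, "read_seconds": 90, "survey_score": 0.9},
]

WEIGHTS = {"liked": 1.0, "dwell_seconds": 0.05, "read_seconds": 0.02, "survey_score": 2.0}


def proxy_score(item):
    """Combine several imperfect proxies into a single relevance estimate."""
    return sum(WEIGHTS[key] * item[key] for key in WEIGHTS)


for item in sorted(FEED_ITEMS, key=proxy_score, reverse=True):
    print(item["id"], round(proxy_score(item), 2))
```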

One downside of the moar data approach is that it’s hard and expensive. Another is that the more variables are added to your model, the more complex, opaque, and unintelligible its methodology becomes. This is part of the problem Pasquale articulated in The Black Box Society. Even the most sophisticated algorithm, drawing on the best data sets, can go awry—and when it does, diagnosing the problem can be nigh-impossible. There are also the perils of “overfitting” and false confidence: The more sophisticated your model becomes, the more perfectly it seems to match up with all your past observations, and the more faith you place in it, the greater the danger that it will eventually fail you in a dramatic way. (Think mortgage crisis, election prediction models, and Zynga.)
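The overfitting risk is easy to reproduce on toy data: a highly flexible model can match past observations almost perfectly while doing worse on data it has not seen. The example below uses synthetic data and NumPy's polynomial fitting purely as an illustration of that point.

```python
# Toy illustration of overfitting: compare how closely a simple and a highly
# flexible polynomial fit match the data they were trained on versus held-out
# data drawn from the same noisy linear process.
import numpy as np

rng = np.random.default_rng(0)
true = lambda x: 2 * x + 1                      # underlying relationship
x_train = np.linspace(-1, 1, 10)
x_test = np.linspace(-1, 1, 50)
y_train = true(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = true(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 9):                           # simple model vs. near-interpolation
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```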

Another possible response to the problems that arise from biases in big data sets is what some have taken to calling “small data.” Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like “slow food,” the term arose as a conscious reaction to the prevalence of its opposite….(More)”


Priceless? A new framework for estimating the cost of open government reforms


New paper by Praneetha Vissapragada and Naomi Joswiak: “The Open Government Costing initiative, seeded with funding from the World Bank, was undertaken to develop a practical and actionable approach to pinpointing the full economic costs of various open government programs. The methodology developed through this initiative represents an important step towards conducting more sophisticated cost-benefit analyses – and ultimately understanding the true value – of open government reforms intended to increase citizen engagement, promote transparency and accountability, and combat corruption, insights that have been sorely lacking in the open government community to date.

The Open Government Costing Framework and Methods section (Section 2 of this report) outlines the critical components needed to conduct cost analysis of open government programs, with the ultimate objective of putting a price tag on key open government reform programs in various countries at a particular point in time. This framework introduces a costing process that employs six essential steps for conducting a cost study, including (1) defining the scope of the program, (2) identifying types of costs to assess, (3) developing a framework for costing, (4) identifying key components, (5) conducting data collection and (6) conducting data analysis.

While the costing methods are built on related approaches used for analysis in other sectors such as health and nutrition, this framework and methodology was specifically adapted for open government programs and thus addresses the unique challenges associated with these types of initiatives. Using the methods outlined in this document, we conducted a cost analysis of two case studies: (1) ProZorro, an e-procurement program in Ukraine; and (2) Sierra Leone’s Open Data Program….(More)”

When Cartography Meets Disaster Relief


Mimi Kirk at CityLab: “Almost three weeks after Hurricane Maria hit Puerto Rico, the island is in a grim state. Fewer than 15 percent of residents have power, and much of the island has no clean drinking water. Delivery of food and other necessities, especially to remote areas, has been hampered by a variety of ills, including a lack of cellular service, washed-out roads, additional rainfall, and what analysts and Puerto Ricans say is a slow and insufficient response from the U.S. government.

Another issue slowing recovery? Maps—or lack of them. While pre-Maria maps of Puerto Rico were fairly complete, their level of detail was nowhere near that of other parts of the United States. Platforms such as Google Maps are more comprehensive on the mainland than on the island, explains Juan Saldarriaga, a research scholar at the Center for Spatial Research at Columbia University. This is because companies like Google often create maps for financial reasons, selling them to advertisers or as navigation devices, so areas that have less economic activity are given less attention.

This lack of detail impedes recovery efforts: Without basic information on the location of buildings, for instance, rescue workers don’t know how many people were living in an area before the hurricane struck—and thus how much aid is needed.

Crowdsourced mapping can help. Saldarriaga recently organized a “mapathon” at Columbia, in which volunteers examined satellite imagery of Puerto Rico and added missing buildings, roads, bridges, and other landmarks in the open-source platform OpenStreetMap. While some universities and other groups are hosting similar events, anyone with an internet connection and computer can participate.

Saldarriaga and his co-organizers collaborated with Humanitarian OpenStreetMap Team (HOT), a nonprofit that works to create crowdsourced maps for aid and development work. Volunteers like Saldarriaga largely drive HOT’s “crisis mapping” projects, the first of which occurred in 2010 after Haiti’s earthquake…(More)”.