Drones to deliver medicines to 12m people in Ghana


Neil Munshi in the Financial Times: “The world’s largest drone delivery network, ferrying 150 different medicines and vaccines, as well as blood, to 2,000 clinics in remote parts of Ghana, is set to be announced on Wednesday.

The network represents a big expansion for the Silicon Valley start-up Zipline, which began delivering blood in Rwanda in 2016 using pilotless, preprogrammed aircraft. The move, along with a new agreement in Rwanda signed in December, takes the company beyond simple blood distribution to more complicated vaccine and plasma deliveries.

“What this is going to show is that you can reach every GPS co-ordinate, you can serve everybody,” said Keller Rinaudo, Zipline chief executive. “Every human in that region or country [can be] within a 15-25 minute delivery of any essential medical product — it’s a different way of thinking about universal coverage.”

Zipline will deliver vaccines for yellow fever, polio, diphtheria and tetanus, which are provided by the World Health Organisation’s Expanded Programme on Immunisation. The WHO will also use the company’s system for future mass immunisation programmes in Ghana.

Later this year, Zipline has plans to start operations in the US, in North Carolina, and in south-east Asia. The company said it will be able to serve 100m people within a year, up from the 22m that its projects in Ghana and Rwanda will cover.

In Ghana, Zipline said health workers will receive deliveries via a parachute drop within about 30 minutes of placing their orders by text message….(More)”.

Five myths about whistleblowers


Dana Gold in the Washington Post: “When a whistleblower revealed the Trump administration’s decision to overturn 25 security clearance denials, it was the latest in a long and storied history of insiders exposing significant abuses of public trust. Whistles were blown on U.S. involvement in Vietnam, the Watergate coverup, Enron’s financial fraud, the National Security Agency’s mass surveillance of domestic electronic communications and, during the Trump administration, the corruption of former Environmental Protection Agency chief Scott Pruitt, Cambridge Analytica’s theft of Facebook users’ data to develop targeted political ads, and harm to children posed by the “zero tolerance” immigration policy. Despite the essential role whistleblowers play in illuminating the truth and protecting the public interest, several myths persist about them, some pernicious.

MYTH NO. 1 Whistleblowers are employees who report problems externally….

MYTH NO. 2 Whistleblowers are either disloyal or heroes….

MYTH NO. 3 ‘Leaker’ is another term for ‘whistleblower.’…

MYTH NO. 4 Remaining anonymous is the best strategy for whistleblowing….

MYTH NO. 5 Julian Assange is a whistleblower….(More)”.

Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism


Book by Nathan Gardels and Nicolas Berggruen: “The rise of populism in the West and the rise of China in the East have stirred a rethinking of how democratic systems work—and how they fail. The impact of globalism and digital capitalism is forcing worldwide attention to the starker divide between the “haves” and the “have-nots,” challenging how we think about the social contract.

With fierce clarity and conviction, Renovating Democracy tears down our basic structures and challenges us to conceive of an alternative framework for governance. To truly renovate our global systems, the authors argue for empowering participation without populism by integrating social networks and direct democracy into the system with new mediating institutions that complement representative government. They outline steps to reconfigure the social contract to protect workers instead of jobs, shifting from “redistribution” after wealth is created to “pre-distribution” aimed at enhancing the skills and assets of those less well-off. Lastly, they argue for harnessing globalization through “positive nationalism” at home while advocating for global cooperation—specifically a partnership with China—to create a viable rules-based world order.

Thought-provoking and persuasive, Renovating Democracy serves as a point of departure that deepens and expands the discourse for positive change in governance….(More)”.

Black Wave: How Networks and Governance Shaped Japan’s 3/11 Disasters


Book by Daniel Aldrich: “Despite the devastation caused by the magnitude 9.0 earthquake and 60-foot tsunami that struck Japan in 2011, some 96% of those living and working in the most disaster-stricken region of Tōhoku made it through. Smaller earthquakes and tsunamis have killed far more people in nearby China and India. What accounts for the exceptionally high survival rate? And why is it that some towns and cities in the Tōhoku region have built back more quickly than others?

Black Wave illuminates two critical factors that had a direct influence on why survival rates varied so much across the Tōhoku region following the 3/11 disasters and why the rebuilding process has also not moved in lockstep across the region. Individuals and communities with stronger networks and better governance, Daniel P. Aldrich shows, had higher survival rates and accelerated recoveries. Less connected communities with fewer such ties faced harder recovery processes and lower survival rates. Beyond the individual and neighborhood levels of survival and recovery, the rebuilding process has varied greatly, as some towns and cities have sought to work independently on rebuilding plans, ignoring recommendations from the national government and moving quickly to institute their own visions, while others have followed the guidelines offered by Tokyo-based bureaucrats for economic development and rebuilding….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
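To make the fusion idea concrete, here is a minimal Python sketch of the general logic: a burst of crowdsourced reports inside a short time window triggers a candidate detection, and any conventional seismic pick in the same window confirms it. The Report structure, window length, and report threshold are all invented for illustration; the actual method in the Science Advances paper is more elaborate.

```python
# Toy fusion of crowdsourced reports with seismic picks. All field
# names, thresholds and the clustering rule are assumptions for this
# sketch, not the EMSC/paper implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Report:
    t: float       # arrival time in seconds
    lat: float
    lon: float
    source: str    # "app", "tweet", "web" or "seismic"

def suspected_event(reports: List[Report],
                    window: float = 120.0,
                    min_crowd: int = 30) -> Optional[dict]:
    """Flag a candidate event when >= min_crowd crowdsourced reports
    land inside one window; confirm it if a seismic pick also does."""
    crowd = sorted((r for r in reports if r.source != "seismic"),
                   key=lambda r: r.t)
    for i, first in enumerate(crowd):
        burst = [r for r in crowd[i:] if r.t - first.t <= window]
        if len(burst) >= min_crowd:
            confirmed = any(r.source == "seismic"
                            and abs(r.t - first.t) <= window
                            for r in reports)
            # Crude location estimate: centroid of the reporting burst.
            return {
                "t": first.t,
                "lat": sum(r.lat for r in burst) / len(burst),
                "lon": sum(r.lon for r in burst) / len(burst),
                "confirmed": confirmed,
            }
    return None
```

Even this toy shows the appeal of the hybrid approach: crowdsourced reports arrive within seconds of shaking and provide speed, while the seismic channel supplies the confirmation that, in the study’s retrospective test, lifted accuracy from 85% to 97%.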

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help first responders assess the level of damage….(More)”.

Data-driven models of governance across borders


Introduction to Special Issue of FirstMonday, edited by Payal Arora and Hallam Stevens: “This special issue looks closely at contemporary data systems in diverse global contexts and through this set of papers, highlights the struggles we face as we negotiate efficiency and innovation with universal human rights and social inclusion. The studies presented in these essays are situated in diverse models of policy-making, governance, and/or activism across borders. Attention to big data governance in western contexts has tended to highlight how data increases state and corporate surveillance of citizens, affecting rights to privacy. By moving beyond Euro-American borders — to places such as Africa, India, China, and Singapore — we show here how data regimes are motivated and understood on very different terms….

To establish a kind of baseline, the special issue opens by considering attitudes toward big data in Europe. René König’s essay examines the role of “citizen conferences” in understanding the public’s view of big data in Germany. These “participatory technology assessments” demonstrated that citizens were concerned about the control of big data (should it be under the control of the government or individuals?), about the need for more education about big data technologies, and the need for more government regulation. Participants expressed, in many ways, traditional liberal democratic views and concerns about these technologies centered on individual rights, individual responsibilities, and education. Their proposed solutions too — more education and more government regulation — fit squarely within western liberal democratic traditions.

In contrast to this, Payal Arora’s essay draws us immediately into the vastly different contexts of data governance in India and China. India’s Aadhaar biometric identification system, through tracking its citizens with iris scanning and other measures, promises to root out corruption and provide social services to those most in need. Likewise, China’s emerging “social credit system,” while having immense potential for increasing citizen surveillance, offers ways of increasing social trust and fostering more responsible social behavior online and offline. Although the potential for authoritarian abuses of both systems is high, Arora focuses on how these technologies are locally understood and lived on an everyday basis, in ways that range from empowering to oppressive. From this perspective, the technologies offer modes of “disrupt[ing] systems of inequality and oppression” that should open up new conversations about what democratic participation can and should look like in China and India.

If China and India offer contrasting non-democratic and democratic cases, we turn next to a context that is neither completely western nor completely non-western, neither completely democratic nor completely liberal. Hallam Stevens’ account of government data in Singapore suggests the very different role that data can play in this unique political and social context. Although the island state’s data.gov.sg participates in global discourses of sharing, “open data,” and transparency, much of the data made available by the government is oriented towards the solution of particular economic and social problems. Ultimately, the ways in which data are presented may contribute to entrenching — rather than undermining or transforming — existing forms of governance. The account of data and its meanings that is offered here once again challenges the notion that such data systems can or should be understood in the same ways that similar systems have been understood in the western world.

If systems such as Aadhaar, “social credit,” and data.gov.sg profess to make citizens and governments more visible and legible, Rolien Hoyng examines what may remain invisible even within highly pervasive data-driven systems. In the world of e-waste, data-driven modes of surveillance and logistics are critical for recycling. But many blind spots remain. Hoyng’s account reminds us that despite the often-supposed all-seeing-ness of big data, we should remain attentive to what escapes the data’s gaze. Here, in the midst of datafication, we find “invisibility, uncertainty, and, therewith, uncontrollability.” This points also to the gap between the fantasies of how data-driven systems are supposed to work, and their realization in the world. Such interstices allow individuals — those working with e-waste in Shenzhen or Africa, for example — to find and leverage hidden opportunities. From this perspective, the “blind spots of big data” take on a very different significance.

Big data systems provide opportunities for some, but reduce those for others. Mark Graham and Mohammad Amir Anwar examine what happens when online outsourcing platforms create a “planetary labor market.” Although providing opportunities for many people to make money via their Internet connection, Graham and Anwar’s interviews with workers across sub-Saharan Africa demonstrate how “platform work” alters the balance of power between labor and capital. For many low-wage workers across the globe, the platform- and data-driven planetary labor market means downward pressure on wages, fewer opportunities to collectively organize, less worker agency, and less transparency about the nature of the work itself. Moving beyond bold pronouncements that the “world is flat” and that big data is empowering, Graham and Anwar show how data-driven systems of employment can act to reduce opportunities for those residing in the poorest parts of the world. The affordances of data and platforms create a planetary labor market for global capital but tie workers ever-more tightly to their own localities. Once again, the valences of global data systems look very different from this “bottom-up” perspective.

Philippa Metcalfe and Lina Dencik shift this conversation from the global movement of labor to that of people, as they write about the implications of European datafication systems for the governance of refugees entering the region. This work highlights how intrinsic to datafication systems is the classification, coding, and collating of people to legitimize the extent of their belonging in the society they seek to live in. The authors argue that these datafied regimes of power have substantively increased their role in regulating human mobility in the guise of national security. These means of data surveillance can foster new forms of containment and entrapment of entire groups of people, creating further divides between “us” and “them.” Through vast interoperable databases, digital registration processes, biometric data collection, and social media identity verification, refugees have become some of the most monitored groups at a global level while, at the same time, their struggles remain the most invisible in popular discourse….(More)”.

Progression of the Inevitable


Kevin Kelly at Technium: “…The procession of technological discoveries is inevitable. When the conditions are right — when the necessary web of supporting technology needed for every invention is established — then the next adjacent technological step will emerge as if on cue. If inventor X does not produce it, inventor Y will. The invention of the microphone, the laser, the transistor, the steam turbine, the waterwheel, and the discoveries of oxygen, DNA, and Boolean logic, were all inevitable in roughly the period they appeared. However, the particular form of the microphone, its exact circuit, or the specific design of the laser, or the particular materials of the transistor, or the dimensions of the steam turbine, or the peculiar notation of the formula, or the specifics of any invention are not inevitable. Rather, they will vary quite widely due to the personality of their finder, the resources at hand, the culture of society they are born into, the economics funding the discovery, and the influence of luck and chance. An incandescent light bulb based on a coil of carbonized bamboo filament heated within a vacuum bulb is not inevitable, but “the electric incandescent light bulb” is. The concept of “the electric incandescent light bulb” abstracted from all the details that can vary while still producing the result — luminance from electricity, for instance — is ordained by the technium’s trajectory. We know this because “the electric incandescent light bulb” was invented, re-invented, co-invented, or “first invented” dozens of times. In their book “Edison’s Electric Light: Biography of an Invention”, Robert Friedel and Paul Israel list 23 inventors of incandescent bulbs prior to Edison. It might be fairer to say that Edison was the very last “first” inventor of the electric light.

[Image: Three independently invented electric light bulbs: Edison’s, Swan’s, and Maxim’s.]

Any claim of inevitability is difficult to prove. Convincing proof requires re-running a progression more than once and showing that the outcome is the same each time: that no matter what perturbations are thrown at the system, it yields an identical result. To claim that the large-scale trajectory of the technium is inevitable would mean demonstrating that if we re-ran history, the same abstracted inventions would arise again, and in roughly the same relative order. Without a time machine, there’ll be no indisputable proof, but we do have three types of evidence that suggest that the paths of technologies are inevitable. They are 1) that quantifiable trajectories of progress don’t waver despite attempts to shift them (see my Moore’s Law); 2) that in ancient times when transcontinental communication was slow or null, we find independent timelines of technology in different continents converging upon a set order; and 3) the fact that most inventions and discoveries have been made independently by more than one person….(More)”.

Know-how: Big Data, AI and the peculiar dignity of tacit knowledge


Essay by Tim Rogan: “Machine learning – a kind of sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Feed a purpose-built algorithm a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries.
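The workflow Rogan describes (labelled examples in, a discriminator with a stated degree of certainty out) can be sketched in a few lines. Real melanoma classifiers are deep convolutional networks trained on raw images; the logistic regression on synthetic feature vectors below is only a stand-in to show the shape of the process.

```python
# Minimal sketch of supervised learning with calibrated confidence.
# Synthetic stand-in data; a real mole classifier would be a deep
# network trained on pixels, not logistic regression on 16 features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 "lesions", each reduced to 16 numeric features; label 1 = melanoma.
X = rng.normal(size=(1000, 16))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)  # iterative fitting

# The model returns not just a verdict but a degree of certainty.
p_melanoma = clf.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
print(f"first test case: P(melanoma) = {p_melanoma[0]:.2f}")
```

The same fit/evaluate/predict-with-probability loop, scaled up by orders of magnitude in data and model capacity, is what now runs across medicine, law, tax collection and the other domains the essay lists.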

Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.

The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Which of state, firm or market is the better means of coordinating supply and demand? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. The notion that market coordination of supply and demand encompassed more information than any single intelligence could handle would soon be proved false by 21st-century AI.

How seriously should we take such speculations? Might machine learning bring us full-circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?

A great deal turns on the status of tacit knowledge….(More)”.

Data: The Lever to Promote Innovation in the EU


Blog Post by Juan Murillo Arias: “…But in order for data to truly become a lever that fosters innovation to the benefit of society as a whole, we must understand and address the following factors:

1. Disconnected, dispersed sources. As users of digital services (transportation, finance, telecommunications, news or entertainment) we leave a different digital footprint for each service that we use. These footprints, which are different facets of the same polyhedron, can even be contradictory on occasion. For this reason, they must be seen as complementary. Analysts should be aware that they must cross-reference data sources from different origins in order to create a reliable picture of our preferences; otherwise we will be basing decisions on partial or biased information. How many times do we receive advertising for items we have already purchased, or tourist destinations where we have already been? And this is just one example of digital marketing. When scoring financial solvency or monitoring health, the more complete the digital picture of the person, the more accurate the diagnosis will be.

Furthermore, from the user’s standpoint, proper management of their entire, dispersed digital footprint is a challenge. Perhaps centralized consent would be very beneficial. In the financial world, the PSD2 regulations have already forced banks to open this information to other banks if customers so desire. Fostering competition and facilitating portability is the purpose, but this opening up has also enabled the development of new information-aggregation services that are very useful to financial services users. It would be ideal if this step of breaking down barriers and moving toward a more transparent market took place simultaneously in all sectors, in order to avoid possible distortions to competition and, by extension, consumer harm. Therefore, customer consent would open the door to building a more accurate picture of our preferences.

2. The public and private sectors’ asymmetric capacity to gather data. This is related to citizens using public services less frequently than private services in the new digital channels. However, governments could benefit from the information possessed by private companies. These anonymous, aggregated data can help to ensure more dynamic public management. Even personal data could open the door to customized education or healthcare on an individual level. In order to analyze all of this, the European Commission has created a working group of 23 experts. The purpose is to come up with a series of recommendations regarding the best legal, technical and economic framework to encourage this information transfer across sectors.

3. The lack of incentives for companies and citizens to encourage the reuse of their data. The reality today is that most companies use these sources solely internally. Only a few have decided to explore data sharing through different models (for academic research or for the development of commercial services). As a result of this and other factors, the public sector largely continues to use the survey method to gather information instead of reading the digital footprint citizens produce. Multiple studies have demonstrated that this digital footprint would be useful for describing socioeconomic dynamics and monitoring the evolution of official statistical indicators. However, these studies have rarely gone on to become pilot projects, due to the lack of incentives that would make this new activity sustainable for a private company opening up to the public sector, or to society in general.

4. Limited commitment to the diversification of services. Another barrier is the fact that information-based product development is somewhat removed from the type of services that the main data generators (telecommunications, banks, commerce, electricity, transportation, etc.) traditionally provide. Therefore, these data-based initiatives are not part of their main business and are more closely tied to companies’ innovation areas, where exploratory proofs of concept are often not consolidated into a new line of business.

5. Bidirectionality. Data should also flow from the public sector to the rest of society. The first regulatory framework was created for this purpose. Although it is still very recent (the PSI Directive on the re-use of public sector data was passed in 2013), it is currently being revised in an attempt to foster the consolidation of an open data ecosystem that emanates from the public sector as well. On the one hand this would enable greater transparency, and on the other, the development of solutions to improve multiple fields in which public actors are key, such as the environment, transportation and mobility, health, education, justice and the planning and execution of public works. Special emphasis will be placed on high-value data sets, such as statistical or geospatial data — data with tremendous potential to accelerate the emergence of a wide variety of information-based data products and services that add value. The Commission will begin working with the Member States to identify these data sets.

In its report Creating Value through Open Data, the European Data Portal estimates that government agencies making their data accessible will inject an extra €65 billion into the EU economy this year.

6. The commitment to analytical training and financial incentives for innovation. These are the key factors that have given rise to the digital unicorns that have emerged, more so in the U.S. and China than in Europe….(More)”

The global South is changing how knowledge is made, shared and used


Robert Morrell at The Conversation: “Globalisation and new technology have changed the ways that knowledge is made, disseminated and consumed. At the push of a button, one can find articles or sources from all over the world. Yet the global knowledge economy is still marked by its history.

The former colonial nations of the nineteenth and twentieth centuries – the rich countries of Europe and North America, collectively called the global North (normally considered to include the West and the first world; the North contains a quarter of the world’s population but controls 80% of income earned) – are still central in the knowledge economy. But the story is not one simply of Northern dominance. A process of making knowledge in the South is underway.

European colonisers encountered many sophisticated and complex knowledge systems among the colonised. These had their own intellectual workforces, their own environmental, geographical, historical and medical sciences. They also had their own means of developing knowledge. Sometimes the colonisers tried to obliterate these knowledges.

In other instances colonisers appropriated local knowledge, for instance in agriculture, fisheries and mining. Sometimes they recognised and even honoured other knowledge systems and intellectuals. This was the case among some of the British in India, and was the early form of “Orientalism”, the study of people and cultures from the East.

In the past few decades, there’s been more critique of global knowledge inequalities and the global North’s dominance. There have also been shifts in knowledge production patterns; some newer disciplines have stepped away from old patterns of inequality.

These issues are examined in a new book, Knowledge and Global Power: Making new sciences in the South (published by Wits University Press), which I co-authored with Fran Collyer, Raewyn Connell and Joao Maia. The focus is especially on those areas where old patterns are not being replicated, so the study chooses climate change, gender and HIV and AIDS as three new areas of knowledge production in which new voices from the South might be prominent….(More)”.