Blockchain Ethical Design Framework


Report by Cara LaPointe and Lara Fishbane: “There are dramatic predictions about the potential of blockchain to ‘revolutionize’ everything from worldwide financial markets and the distribution of humanitarian assistance to the very way that we recognize human identity for billions of people around the globe. Some dismiss these claims as excessive technology hype, citing flaws in the technology or the robustness of incumbent solutions and infrastructure.

The reality will likely fall somewhere between these two extremes across multiple sectors. Where initial applications of blockchain were focused on the financial industry, current applications have rapidly expanded to address a wide array of sectors with major implications for social impact.

This paper aims to demonstrate the capacity of blockchain to create scalable social impact and to identify the elements that need to be addressed to mitigate challenges in its application. We are at a moment when technology is enabling society to experiment with new solutions and business models. Ubiquity and global reach, increased capabilities, and affordability have made technology a critical tool for solving problems, making this an exciting time to think about achieving greater social impact. We can address issues for underserved or marginalized people in ways that were previously unimaginable.

Blockchain is a technology that holds real promise for dealing with key inefficiencies and transforming operations in the social sector and for improving lives. Because of its immutability and decentralization, blockchain has the potential to create transparency, provide distributed verification, and build trust across multiple systems. For instance, blockchain applications could provide the means for establishing identities for individuals without identification papers, improving access to finance and banking services for underserved populations, and distributing aid to refugees in a more transparent and efficient manner. Similarly, national and subnational governments are putting land registry information onto blockchains to create greater transparency and avoid corruption and manipulation by third parties.
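The immutability described here comes from hash-chaining: each block commits to the cryptographic hash of its predecessor, so any retroactive edit breaks every later link. A minimal toy sketch in Python illustrates the idea (this is an illustration only, not any production blockchain; consensus, signatures, and networking are omitted entirely):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash over the block's contents, which include the
    # link to the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    # A new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def chain_is_valid(chain):
    # Any edit to an earlier block changes its hash and breaks every
    # later block's prev_hash link.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"registry": "parcel 42", "owner": "A"})
append_block(chain, {"registry": "parcel 42", "owner": "B"})
print(chain_is_valid(chain))      # True
chain[0]["data"]["owner"] = "C"   # retroactive tampering
print(chain_is_valid(chain))      # False
```

This is why a land registry kept this way resists quiet manipulation: changing an old record is possible only by rewriting, detectably, everything that follows it.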

From increasing access to capital, to tracking health and education data across multiple generations, to improving voter records and voting systems, blockchain has countless potential applications for social impact. As developers take on building these types of solutions, the social effects of blockchain can be powerful and lasting. With the potential for such a powerful impact, the design, application, and approach to the development and implementation of blockchain technologies have long-term implications for society and individuals.

This paper outlines why intentionality of design, which is important with any technology, is particularly crucial with blockchain, and offers a framework to guide policymakers and social impact organizations. As social media, cryptocurrencies, and algorithms have shown, technology is not neutral. Values are embedded in the code. How the problem is defined and by whom, who is building the solution, how it gets programmed and implemented, who has access, and what rules are created have consequences, in intentional and unintentional ways. In the applications and implementation of blockchain, it is critical to understand that seemingly innocuous design choices have resounding ethical implications on people’s lives.

This white paper addresses why intentionality of design matters, identifies the key questions that should be asked, and provides a framework to approach use of blockchain, especially as it relates to social impact. It examines the key attributes of blockchain, its broad applicability as well as its particular potential for social impact, and the challenges in fully realizing that potential. Social impact organizations and policymakers have an obligation to understand the ethical approaches used in designing blockchain technology, especially how they affect marginalized and vulnerable populations….(More)”

My City Forecast: Urban planning communication tool for citizens with national open data


Paper by Y. Hasegawa, Y. Sekimoto, T. Seto, Y. Fukushima et al in Computers, Environment and Urban Systems: “In urban management, the importance of citizen participation is being emphasized more than ever before. This is especially true in countries where depopulation has become a major concern for urban managers and many local authorities are working on revising city master plans, often incorporating the concept of the “compact city.” In Japan, for example, the implementation of compact city plans means that each local government decides how to designate residential areas and encourages citizens to move to these areas in order to improve budget effectiveness and the vitality of the city. However, implementing a compact city is possible in various ways. Given that there can be some designated withdrawal areas for budget savings, compact city policies can include disadvantages for citizens. At this turning point for urban structures, citizen–government mutual understanding and cooperation is necessary for every step of urban management, including planning.

Concurrently, along with the recent rapid growth of big data utilization and computer technologies, a new conception of cooperation between citizens and government has emerged. With emerging technologies based on civic knowledge, citizens have started to obtain the power to engage directly in urban management by obtaining information, thinking about their city’s problems, and taking action to help shape the future of their city themselves (Knight Foundation, 2013). This development is also supported by the open government data movement, which promotes the availability of government information online (Kingston, Carver, Evans, & Turton, 2000). CityDashboard is one well-known example of real-time visualization and distribution of urban information. CityDashboard, a web tool launched in 2012 by University College London, aggregates spatial data for cities around the UK and displays the data on a dashboard and a map. These new technologies are expected to enable both citizens and government to see their urban situation in an interface presenting an overhead view based on statistical information.

However, usage of statistics and governmental data is as yet limited in the actual process of urban planning…

To help improve this situation and increase citizen participation in urban management, we have developed a web-based urban planning communication tool using open government data for enhanced citizen–government cooperation. The main aim of the present research is to evaluate the effect of our system on users’ awareness of and attitude toward the urban situation. We have designed and developed an urban simulation system, My City Forecast (http://mycityforecast.net), that enables citizens to understand how their environment and region are likely to change as a result of urban management in the future (up to 2040)….(More)”.

Can Smart Cities Be Equitable?


Homi Kharas and Jaana Remes at Project Syndicate: “Around the world, governments are making cities “smarter” by using data and digital technology to build more efficient and livable urban environments. This makes sense: with urban populations growing and infrastructure under strain, smart cities will be better positioned to manage rapid change.

But as digital systems become more pervasive, there is a danger that inequality will deepen unless local governments recognize that tech-driven solutions are as important to the poor as they are to the affluent.

While offline populations can benefit from applications running in the background of daily life – such as intelligent signals that help with traffic flows – they will not have access to the full range of smart-city programs. With smartphones serving as the primary interface in the modern city, closing the digital divide by extending access to networks and devices is a critical first step.

City planners can also deploy technology in ways that make cities more inclusive for the poor, the disabled, the elderly, and other vulnerable people. Examples are already abundant.

In New York City, the Mayor’s Public Engagement Unit uses interagency data platforms to coordinate door-to-door outreach to residents in need of assistance. In California’s Santa Clara County, predictive analytics help prioritize shelter space for the homeless. On the London Underground, an app called Wayfindr uses Bluetooth to help visually impaired travelers navigate the Tube’s twisting pathways and escalators.

And in Kolkata, India, a Dublin-based startup called Addressing the Unaddressed has used GPS to provide postal addresses for more than 120,000 slum dwellers in 14 informal communities. The goal is to give residents a legal means of obtaining biometric identification cards, essential documentation needed to access government services and register to vote.

But while these innovations are certainly significant, they are only a fraction of what is possible.

Public health is one area where small investments in technology can bring big benefits to marginalized groups. In the developing world, preventable illnesses comprise a disproportionate share of the disease burden. When data are used to identify demographic groups with elevated risk profiles, low-cost mobile-messaging campaigns can transmit vital prevention information. So-called “m-health” interventions on issues like vaccinations, safe sex, and pre- and post-natal care have been shown to improve health outcomes and lower health-care costs.

Another area ripe for innovation is the development of technologies that directly aid the elderly….(More)”.

Essentials of the Right of Access to Public Information: An Introduction


Introduction by Blanke, Hermann-Josef and Perlingeiro, Ricardo in the book “The Right of Access to Public Information: An International Comparative Legal Survey”: “The first freedom of information law was enacted in Sweden back in 1766 as the “Freedom of the Press and the Right of Access to Public Records Act”. It sets an example even today. However, the “triumph” of the freedom of information did not take place until much later. Many western legal systems arose from the American Freedom of Information Act, which was signed into law by President L.B. Johnson in 1966. This Act obliges all administrative authorities to provide information to citizens and imposes any necessary limitations. In an exemplary manner, it standardizes the objective of administrative control to protect citizens from government interference with their fundamental rights. Over 100 countries around the world have meanwhile implemented some form of freedom of information legislation. The importance of the right of access to information as an aspect of transparency and a condition for the rule of law and democracy is now also becoming apparent in international treaties at a regional level. This article provides an overview on the crucial elements and the guiding legal principles of transparency legislation, also by tracing back the lines of development of national and international case-law….(More)”.

Research Shows Political Acumen, Not Just Analytical Skills, is Key to Evidence-Informed Policymaking


Press Release: “Results for Development (R4D) has released a new study unpacking how evidence translators play a key and somewhat surprising role in ensuring policymakers have the evidence they need to make informed decisions. Translators — who can be evidence producers, policymakers, or intermediaries such as journalists, advocates and expert advisors — identify, filter, interpret, adapt, contextualize and communicate data and evidence for the purposes of policymaking.

The study, Translators’ Role in Evidence-Informed Policymaking, provides a better understanding of who translators are and how different factors influence translators’ ability to promote the use of evidence in policymaking. This research shows translation is an essential function and that, absent individuals or organizations taking up the translator role, evidence translation and evidence-informed policymaking often do not take place.

“We began this research assuming that translators’ technical skills and analytical prowess would prove to be among the most important factors in predicting when and how evidence made its way into public sector decision making,” Nathaniel Heller, executive vice president for integrated strategies at Results for Development, said. “Surprisingly, that turned out not to be the case, and other ‘soft’ skills play a far larger role in translators’ efficacy than we had imagined.”

Key findings include:

  • Translator credibility and reputation are crucial to the ability to gain access to policymakers and to promote the uptake of evidence.
  • Political savvy and stakeholder engagement are among the most critical skills for effective translators.
  • Conversely, analytical skills and the ability to adapt, transform and communicate evidence were identified as being less important stand-alone translator skills.
  • Evidence translation is most effective when initiated by those in power or when translators place those in power at the center of their efforts.

The study includes a definitional and theoretical framework as well as a set of research questions about key enabling and constraining factors that might affect evidence translators’ influence. It also focuses on two cases in Ghana and Argentina to validate and debunk some of the intellectual frameworks around policy translators that R4D and others in the field have already developed. The first case focuses on Ghana’s blue-ribbon commission formed by the country’s president in 2015, which was tasked with reviewing Ghana’s national health insurance scheme. The second case looks at Buenos Aires’ 2016 government-led review of the city’s right to information regime….(More)”.

Ontario is trying a wild experiment: Opening access to its residents’ health data


Dave Gershgorn at Quartz: “The world’s most powerful technology companies have a vision for the future of healthcare. You’ll still go to your doctor’s office, sit in a waiting room, and explain your problem to someone in a white coat. But instead of relying solely on their own experience and knowledge, your doctor will consult an algorithm that’s been trained on the symptoms, diagnoses, and outcomes of millions of other patients. Instead of a radiologist reading your x-ray, a computer will be able to detect minute differences and instantly identify a tumor or lesion. Or at least that’s the goal.

AI systems like these, currently under development by companies including Google and IBM, can’t read textbooks and journals, attend lectures, and do rounds—they need millions of real life examples to understand all the different variations between one patient and another. In general, AI is only as good as the data it’s trained on, but medical data is exceedingly private—most developed countries have strict health data protection laws, such as HIPAA in the United States….

These approaches, which favor companies with considerable resources, are pretty much the only way to get large troves of health data in the US because the American health system is so fragmented. Healthcare providers keep personal files on each of their patients, and can only transmit them to other accredited healthcare workers at the patient’s request. There’s no single place where all health data exists. It’s more secure, but less efficient for analysis and research.

Ontario, Canada, might have a solution, thanks to its single-payer healthcare system. All of Ontario’s health data exists in a few enormous caches under government control. (After all, the government needs to keep track of all the bills it is paying.) Similar structures exist elsewhere in Canada, such as Quebec, but Toronto, which has become a major hub for AI research, wants to lead the charge in providing this data to businesses.

Until now, the only people allowed to study this data were government organizations or researchers who partnered with the government to study disease. But Ontario has now entrusted the MaRS Discovery District—a cross between a tech incubator and WeWork—to build a platform for approved companies and researchers to access this data, dubbed Project Spark. The project, initiated by MaRS and Canada’s University Health Network, began exploring how to share this data after both organizations expressed interest to the government about giving broader health data access to researchers and companies looking to build healthcare-related tools.

Project Spark’s goal is to create an API, or a way for developers to request information from the government’s data cache. This could be used to create an app for doctors to access the full medical history of a new patient. Ontarians could access their health records at any time through similar software, and catalog health issues as they occur. Or researchers, like the ones trying to build AI to assist doctors, could request a different level of access that provides anonymized data on Ontarians who meet certain criteria. If you wanted to study every Ontarian who had Alzheimer’s disease over the last 40 years, that data would be only an authorization and a few lines of code away.
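Project Spark has not published its API, so purely as an illustration, a researcher-facing request for an anonymized cohort might be built like the sketch below. The endpoint URL, parameter names, and access levels are all assumptions invented for this example, not the real interface:

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- Project Spark's actual API is not public,
# so this URL and every field name below are illustrative only.
BASE_URL = "https://api.example-spark.ca/v1/cohorts"

def build_cohort_query(condition, start_year, end_year, access_level="anonymized"):
    """Build the URL for a hypothetical anonymized-cohort request."""
    params = {
        "condition": condition,
        "from": start_year,
        "to": end_year,
        # Researchers would receive de-identified records only.
        "access": access_level,
    }
    return BASE_URL + "?" + urlencode(params)

# A 40-year Alzheimer's cohort, as in the example above.
url = build_cohort_query("alzheimers", 1978, 2018)
print(url)
```

The point of the sketch is the shape of the idea: the sensitive data stays in the government cache, and what a developer actually handles is a scoped, access-controlled query.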

There are currently 100 companies lined up to get access to the data, which comprises health records from Ontario’s 14 million residents. (MaRS won’t say who the companies are.) …(More)”

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms, and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Activating Agency or Nudging?


Article by Michael Walton: “Two ideas in development – activating agency of citizens and using “nudges” to change their behavior – seem diametrically opposed in spirit: activating latent agency at the ground level versus top-down designs that exploit people’s behavioral responses. Yet both start from a psychological focus and a belief that changes in people’s behavior can lead to “better” outcomes, for the individuals involved and for society. So how should we think of these contrasting sets of ideas? When should each approach be used?…

Let’s compare the two approaches with respect to diagnostic frame, practice and ethics.

Diagnostic frame.  

The common ground is recognition that people use short-cuts for decision-making, in ways that can hurt their own interests.  In both approaches, there is an emphasis that decision-making is particularly tough for poor people, given the sheer weight of daily problem-solving.  In behavioral economics one core idea is that we have limited mental “bandwidth” and this form of scarcity hampers decision-making. However, in the “agency” tradition, there is much more emphasis on unearthing and working with the origins of the prevailing mental models, with respect to social exclusion, stigmatization, and the typically unequal economic and cultural relations with respect to more powerful groups in a society.  One approach works more with symptoms, the other with root causes.

Implications for practice.  

The two approaches on display in Cerrito both concern social gains, and both involve a role for an external actor. But here the contrast is sharp. In the “nudge” approach the external actor is a beneficent technocrat, trying out alternative offers to poor (or non-poor) people to improve outcomes. A vivid example is alternative messages to taxpayers in Guatemala, which induce varying improvements in tax payments. In the “agency” approach the essence of the interaction is between a front-line worker and an individual or family, with a co-created diagnosis and plan, designed around goals and specific actions that the poor person chooses. This is akin to what anthropologist Arjun Appadurai termed increasing the “capacity to aspire,” and can extend to greater engagement in civic and political life.

Ethics.

In both approaches, ethics is central. As the distinction between nudging for social good and nudging for electoral gain implies, some form of ethical regulation is surely needed. In “action to activate agency,” the central ethical issue is maintaining equality in design between activist and citizen, and explicit ownership of any decisions.

What does this imply?

To some degree this is a question of domain of action.  Nudging is most appropriate in a program for which there is a fully supported political and social program, and the issue is how to make it work (as in paying taxes).  The agency approach has a broader ambition, but starts from domains that are potentially within an individual’s control once the sources of “ineffective” or inhibited behavior are tackled, including via front-line interactions with public or private actors….(More)”.

Do Delivery Units Deliver?: Assessing Government Innovations


Technical note by Lafuente, Mariano and González, Sebastián prepared as part of the Inter-American Development Bank’s (IDB) agenda on Center of Government: “… analyzes how delivery units (DU) have been adapted by Latin American and Caribbean governments, the degree to which they have contributed to meeting governments’ priority goals between 2007 and 2018, and the lessons learned along the way. The analysis, which draws lessons from 14 governments in the region, shows that the implementation of the DU model has varied as it has been tailored to each country’s context and that, under certain preconditions, has contributed to: (i) improved management using specific tools in contexts where institutional development is low; and (ii) attaining results that have a direct impact on citizens. The objective of this document is to serve as a guide for governments interested in applying similar management models as well as to set out an agenda for the future of DU in the region….(More)”.

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts primarily driven by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e., a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories are giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can either be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies, such as 3D printing and CNC machines or traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local” following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. Design global, manufacture local (DGML) demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Local manufacturing also makes maintenance easier and encourages manufacturers to design products to last as long as possible. Finally, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay for….(More)”