Blockchain and distributed ledger technologies in the humanitarian sector


Report by Giulio Coppi and Larissa Fast at ODI (Overseas Development Institute): “Blockchain and the wider category of distributed ledger technologies (DLTs) promise a more transparent, accountable, efficient and secure way of exchanging decentralised stores of information that are independently updated, automatically replicated and immutable. The key components of DLTs include shared recordkeeping, multi-party consensus, independent validation, tamper evidence and tamper resistance.

Building on these claims, proponents suggest DLTs can address common problems of non-profit organisations and NGOs, such as transparency, efficiency, scale and sustainability. Current humanitarian uses of DLT, illustrated in this report, include financial inclusion, land titling, remittances, improving the transparency of donations, reducing fraud, tracking support to beneficiaries from multiple sources, transforming governance systems, micro-insurance, cross-border transfers, cash programming, grant management and organisational governance.

This report, commissioned by the Global Alliance for Humanitarian Innovation (GAHI), examines current DLT uses by the humanitarian sector to outline lessons for the project, policy and system levels. It offers recommendations to address the challenges that must be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts….(More)”.

Evolving Measurement for an Evolving Economy: Thoughts on 21st Century US Economic Statistics


Ron S. Jarmin at the Journal of Economic Perspectives: “The system of federal economic statistics developed in the 20th century has served the country well, but the current methods for collecting and disseminating these data products are unsustainable. These statistics are heavily reliant on sample surveys. Recently, however, response rates for both household and business surveys have declined, increasing costs and threatening quality. Existing statistical measures, many developed decades ago, may also miss important aspects of our rapidly evolving economy; moreover, they may not be sufficiently accurate, timely, or granular to meet the increasingly complex needs of data users. Meanwhile, the rapid proliferation of online data and more powerful computation make privacy and confidentiality protections more challenging. There is broad agreement on the need to transform government statistical agencies from the 20th century survey-centric model to a 21st century model that blends structured survey data with administrative and unstructured alternative digital data sources. In this essay, I describe some work underway that hints at what 21st century official economic measurement will look like and offer some preliminary comments on what is needed to get there….(More)”.

Privacy and Smart Cities: A Canadian Survey


Report by Sara Bannerman and Angela Orasch: “This report presents the findings of a national survey of Canadians about smart-city privacy conducted in October and November 2018. Our research questions were: How concerned are Canadians about smart-city privacy? How do these concerns intersect with age, gender, ethnicity, and location? Moreover, what are the expectations of Canadians with regard to their ability to control, use, or opt out of data collection in smart-city contexts? What rights and privileges do Canadians feel are appropriate with regard to data self-determination, and what types of data are considered more sensitive than others?

What is a smart city?
A ‘smart city’ adopts digital and data-driven technologies in the planning, management and delivery of municipal services. Information and communications technologies (ICTs), data analytics, and the internet of things (IoT) are some of the main components of these technologies, joined by web design, online marketing campaigns and digital services. Such technologies can include smart utility and transportation infrastructure, smart cards, smart transit, camera and sensor networks, or data collection by businesses to provide customized advertisements or other services. Smart-city technologies “monitor, manage and regulate city flows and processes, often in real-time” (Kitchin 2014, 2).

In 2017, a framework agreement was established between Waterfront Toronto, the organization charged with revitalizing Toronto’s waterfront, and Sidewalk Labs, a sister company of Google under Alphabet, to develop a smart city on Toronto’s Eastern waterfront (Sidewalk Toronto 2018). This news was met with questions and concerns from experts in data privacy and the public at large regarding what was to be included in Sidewalk Labs’ smart-city vision. How would the overall governance structure function? How were the privacy rights of residents going to be protected, and what mechanisms, if any, would ensure that protection? The Toronto waterfront is just one of numerous examples of smart-city developments….(More)”.

Consumers kinda, sorta care about their data


Kim Hart at Axios: “A full 81% of consumers say that in the past year they’ve become more concerned with how companies are using their data, and 87% say they’ve come to believe companies that manage personal data should be more regulated, according to a survey out Monday by IBM’s Institute for Business Value.

Yes, but: They aren’t totally convinced they should care about how their data is being used, and many aren’t taking meaningful action after privacy breaches, according to the survey. Despite increasing data risks, 71% say it’s worth sacrificing privacy given the benefits of technology.

By the numbers:

  • 89% say technology companies need to be more transparent about their products
  • 75% say that in the past year they’ve become less likely to trust companies with their personal data
  • 88% say the emergence of technologies like AI increases the need for clear policies about the use of personal data.

The other side: Despite increasing awareness of privacy and security breaches, most consumers aren’t taking consequential action to protect their personal data.

  • Fewer than half (45%) report that they’ve updated privacy settings, and only 16% stopped doing business with an entity due to data misuse….(More)”.

The Stanford Open Policing Project


About: “On a typical day in the United States, police officers make more than 50,000 traffic stops. Our team is gathering, analyzing, and releasing records from millions of traffic stops by law enforcement agencies across the country. Our goal is to help researchers, journalists, and policymakers investigate and improve interactions between police and the public.

Currently, a comprehensive, national repository detailing interactions between police and the public doesn’t exist. That’s why the Stanford Open Policing Project is collecting and standardizing data on vehicle and pedestrian stops from law enforcement departments across the country — and we’re making that information freely available. We’ve already gathered 130 million records from 31 state police agencies and have begun collecting data on stops from law enforcement agencies in major cities, as well.
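
As an illustration of how the standardized files can be used, the sketch below computes stop counts and search rates by driver race with pandas. It is a hypothetical example, not code from the project: the file name and the column names (subject_race, search_conducted) are assumptions that should be checked against the actual downloads.

```python
# Hypothetical sketch: search rates by driver race from a standardized
# Open Policing Project CSV. Column and file names are assumptions.
import pandas as pd

def search_rates(csv_path: str) -> pd.DataFrame:
    """Return per-group stop counts and the fraction of stops with a search."""
    stops = pd.read_csv(csv_path, usecols=["subject_race", "search_conducted"])
    # Coerce TRUE/FALSE strings (or booleans) into a clean boolean column.
    stops["searched"] = stops["search_conducted"].astype(str).str.upper().eq("TRUE")
    grouped = stops.groupby("subject_race")["searched"]
    summary = pd.DataFrame({
        "stops": grouped.size(),
        "search_rate": grouped.mean(),
    })
    return summary.sort_values("search_rate", ascending=False)

if __name__ == "__main__":
    # Example usage with a made-up state file name.
    print(search_rates("wa_statewide_stops.csv"))
```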

We, the Stanford Open Policing Project, are an interdisciplinary team of researchers and journalists at Stanford University. We are committed to combining the academic rigor of statistical analysis with the explanatory power of data journalism….(More)”.

Algorithmic fairness: A code-based primer for public-sector data scientists


Paper by Ken Steif and Sydney Goldstein: “As the number of government algorithms grows, so does the need to evaluate algorithmic fairness. This paper has three goals. First, we ground the notion of algorithmic fairness in the context of disparate impact, arguing that for an algorithm to be fair, its predictions must generalize across different protected groups. Next, two algorithmic use cases are presented with code examples for how to evaluate fairness. Finally, we promote the concept of an open source repository of government algorithmic “scorecards,” allowing stakeholders to compare across algorithms and use cases….(More)”.
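
The paper’s own worked examples are not reproduced in this excerpt; as a generic illustration of the idea that a fair algorithm’s predictions should generalize across protected groups, the sketch below compares false-positive and false-negative rates group by group. The column names (race, reoffended, predicted) are hypothetical.

```python
# Generic sketch (not the paper's code): compare error rates across protected
# groups. Large gaps in false-positive or false-negative rates are one common
# signal of disparate impact.
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str,
                      label_col: str, pred_col: str) -> pd.DataFrame:
    """Return false-positive and false-negative rates for each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]  # true label: no event
        positives = sub[sub[label_col] == 1]  # true label: event occurred
        rows.append({
            group_col: group,
            "n": len(sub),
            "false_positive_rate": (negatives[pred_col] == 1).mean(),
            "false_negative_rate": (positives[pred_col] == 0).mean(),
        })
    return pd.DataFrame(rows)

# Example usage with a hypothetical scored dataset:
# scores = pd.read_csv("risk_predictions.csv")
# print(group_error_rates(scores, "race", "reoffended", "predicted"))
```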

State Capability, Policymaking and the Fourth Industrial Revolution


Demos Helsinki: “The world as we know it is built on the structures of the industrial era – and these structures are falling apart. Yet the vision of a new, sustainable and fair post-industrial society remains unclear. This discussion paper is the result of a collaboration between a group of organisations interested in the implications of rapid technological development for policymaking processes and for the knowledge systems that inform policy decisions.

In the discussion paper, we set out to explore the main opportunities and concerns that the Fourth Industrial Revolution brings for policymaking and knowledge systems, particularly in middle-income countries. Overall, middle-income countries are home to five billion of the world’s seven billion people and 73 per cent of the world’s poor people; they represent about one-third of the global Gross Domestic Product (GDP) and are major engines of global growth (World Bank 2018).

The paper is co-produced with Capability (Finland), Demos Helsinki (Finland), HELVETAS Swiss Intercooperation (Switzerland), Politics & Ideas (global), Southern Voice (global), UNESCO Montevideo (Uruguay) and Using Evidence (Canada).

The guiding questions for this paper are:

– What are the critical elements of the Fourth Industrial Revolution?

– What does the literature say about the impact of this revolution on societies and economies, and in particular on middle-income countries?

– What are the implications of the Fourth Industrial Revolution for the achievement of the Sustainable Development Goals (SDGs) in middle-income countries?

– What does the literature say about the challenges for governance and the ways knowledge can inform policy during the Fourth Industrial Revolution?…(More)”.

Full discussion paper: “State Capability, Policymaking and the Fourth Industrial Revolution: Do Knowledge Systems Matter?”

Using digital technologies to improve the design and enforcement of public policies


OECD Digital Economy Paper: “Digitalisation is having a profound impact on social and economic activity. While often benefiting from a very long history of public investment in R&D, digitalisation has been largely driven by the private sector. However, the combined adoption of new digital technologies, increased reliance upon new data sources, and use of advanced analytic methods hold significant potential to: i) improve the effectiveness and enforcement of public policies; ii) enable innovative policy design and impact evaluation; and iii) expand citizen and stakeholder engagement in policy making and implementation. These benefits are likely to be greatest in policy domains where outcomes are only observable at significant cost and/or where there is significant heterogeneity in responses across different agents. In this paper we provide a review of initiatives across a number of fields including: competition, education, environment, innovation, and taxation….(More)”.

Claudette: an automated detector of potentially unfair clauses in online terms of service


Marco Lippi et al. in Artificial Intelligence and Law: “Terms of service of on-line platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike….(More)”.
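
The excerpt does not include the authors’ models, which combine several machine-learning approaches. As a toy illustration of the general idea (treating each terms-of-service sentence as an example labelled fair or potentially unfair and training a text classifier), here is a minimal sketch with invented sentences and labels.

```python
# Toy sketch (not the authors' system): a bag-of-words linear classifier over
# terms-of-service sentences. The sentences and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "We may terminate your account at any time without notice.",
    "You can export your data at any time from the settings page.",
    "Any dispute shall be resolved exclusively by binding arbitration.",
    "We will notify you by email before any changes take effect.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

new_clause = ["We reserve the right to change these terms at our sole discretion."]
print(clf.predict(new_clause))  # a 1 would flag the clause for human review
```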

Achieving Digital Permanence


Raymond Blum with Betsy Beyer at ACM Queue: “Digital permanence has become a prevalent issue in society. This article focuses on the forces behind it and some of the techniques to achieve a desired state in which “what you read is what was written.” While techniques that can be imposed as layers above basic data stores—blockchains, for example—are valid approaches to achieving a system’s information assurance guarantees, this article won’t discuss them.

First, let’s define digital permanence and the more basic concept of data integrity.

Data integrity is the maintenance of the accuracy and consistency of stored information. Accuracy means that the data is stored as the set of values that were intended. Consistency means that these stored values remain the same over time—they do not unintentionally waver or morph as time passes.

Digital permanence refers to the techniques used to anticipate and then meet the expected lifetime of data stored in digital media. Digital permanence not only considers data integrity, but also targets guarantees of relevance and accessibility: the ability to recall stored data and to recall it with predicted latency and at a rate acceptable to the applications that require that information.
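
One simple building block behind these guarantees (not a technique prescribed by the article) is to record a cryptographic digest when data is written and re-verify it on every read; a mismatch signals that the stored values are no longer what was intended. A minimal sketch:

```python
# Minimal integrity-check sketch: record a SHA-256 digest at write time and
# verify it at read time. A mismatch means the stored bytes have changed.
import hashlib
from pathlib import Path

def write_with_digest(path: Path, data: bytes) -> str:
    """Store data and return its SHA-256 digest, to be kept separately."""
    path.write_bytes(data)
    return hashlib.sha256(data).hexdigest()

def read_verified(path: Path, expected_digest: str) -> bytes:
    """Read data back, raising if it no longer matches the recorded digest."""
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() != expected_digest:
        raise ValueError(f"integrity check failed for {path}")
    return data

# Example usage:
# digest = write_with_digest(Path("ledger.dat"), b"2019-02-01,receipt,42.00")
# record = read_verified(Path("ledger.dat"), digest)
```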

To illustrate the aspects of relevance and accessibility, consider two counterexamples: journals that were safely stored redundantly on Zip drives or punch cards may as well not exist if the hardware required to read the media into a current computing system isn’t available. Nor is it very useful to have receipts and ledgers stored on a tape medium that will take eight days to read in when you need the information for an audit on Thursday.

The Multiple Facets of Digital Permanence

Human memory is the most subjective record imaginable. Common adages and clichés such as “He said, she said,” “IIRC (If I remember correctly),” and “You might recall” recognize the truth of memories—that they are based only on fragments of the one-time subjective perception of any objective state of affairs. What’s more, research indicates that people alter their memories over time. Over the years, as the need to provide a common ground for actions based on past transactions arises, so does the need for an objective record of fact—an independent “true” past. These records must be both immutable to a reasonable degree and durable. Media such as clay tablets, parchment, photographic prints, and microfiche became popular because they satisfied the “write once, read many” requirement of society’s record keepers.

Information storage in the digital age has evolved to fit the scale of access (frequent) and volume (high) by moving to storage media that record and deliver information in an almost intangible state. Such media have distinct advantages: electrical impulses and the polarity of magnetized ferric compounds can be moved around at great speed and density. These media, unfortunately, also score higher in another measure: fragility. Paper and clay can survive large amounts of neglect and punishment, but a stray electromagnetic discharge or microscopic rupture can render a digital library inaccessible or unrecognizable.

It stands to reason that storing permanent records in some immutable and indestructible medium would be ideal—something that, once altered to encode information, could never be altered again, either by an overwrite or destruction. Experience shows that such ideals are rarely realized; with enough force and will, the hardest stone can be broken and the most permanent markings defaced.

In considering and ensuring digital permanence, you want to guard against two different failures: the destruction of the storage medium, and a loss of the integrity or “truthfulness” of the records….(More)”.