Paper by Jason Potts, Ellie Rennie and Jake Goldenfein: “The Smart City agenda of integrating ICT and Internet of Things (IoT) informatic infrastructure to improve the efficiency and adaptability of city governance has been shaping urban development policy for more than a decade now. A smart city has more data, gathered through new and better technology, delivering higher quality city services. In this paper, we explore how blockchain technology could shift the Smart City agenda by altering transaction costs, with implications for the coordination of infrastructures and resources. Like the Smart City, the Crypto-City utilizes data informatics, but can be coordinated through distributed rather than centralized systems. The data infrastructure of the Crypto-City can enable civil society to run local public goods and facilitate economic and social entrepreneurship. Drawing on the economic theory of transaction costs, the paper sets out an explanatory framework for understanding the kinds of new governance mechanisms that may emerge in conjunction with automated systems, including the challenges that blockchain poses for cities….(More)”.
Data and the City
Book edited by Rob Kitchin, Tracey P. Lauriault, and Gavin McArdle: “There is a long history of governments, businesses, science and citizens producing and utilizing data in order to monitor, regulate, profit from and make sense of the urban world. Recently, we have entered the age of big data, and now many aspects of everyday urban life are being captured as data and city management is mediated through data-driven technologies.
Data and the City is the first edited collection to provide an interdisciplinary analysis of how this new era of urban big data is reshaping how we come to know and govern cities, and the implications of such a transformation. This book looks at the creation of real-time cities and data-driven urbanism and considers the relationships at play. By taking a philosophical, political, practical and technical approach to urban data, the authors analyse the ways in which data is produced and framed within socio-technical systems. They then examine the constellation of existing and emerging urban data technologies. The volume concludes by considering the social and political ramifications of data-driven urbanism, questioning whom it serves and for what ends.
This book, the companion volume to 2016’s Code and the City, offers the first critical reflection on the relationship between data, data practices and the city, and how we come to know and understand cities through data. It will be crucial reading for those who wish to understand and conceptualize urban big data, data-driven urbanism and the development of smart cities….(More)”
Data for Development: The Case for Information, Not Just Data
Daniela Ligiero at the Council on Foreign Relations: “When it comes to development, more data is often better—but in the quest for more data, we can forget about ensuring we have information, which is even more valuable. Information is data that have been recorded, classified, organized, analyzed, interpreted, and translated within a framework so that meaning emerges. At the end of the day, information is what guides action and change.
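To make that distinction concrete, here is a minimal sketch (the records, fields, and 10% target are hypothetical illustrations, not figures from the article) of how raw data becomes information only once it is classified, analyzed, and interpreted within a framework:

```python
# Illustrative only: hypothetical survey records, not real data.
records = [
    {"age": 9,  "region": "North", "attended_school": False},
    {"age": 11, "region": "North", "attended_school": True},
    {"age": 10, "region": "South", "attended_school": True},
    {"age": 8,  "region": "South", "attended_school": False},
    {"age": 12, "region": "South", "attended_school": False},
]

# Classify and organize: group school attendance by region.
by_region = {}
for r in records:
    by_region.setdefault(r["region"], []).append(r["attended_school"])

# Analyze and interpret within a framework: an out-of-school rate per
# region, judged against a hypothetical 10% national target.
TARGET = 0.10
for region, attendance in sorted(by_region.items()):
    rate = 1 - sum(attendance) / len(attendance)
    status = "above target, act" if rate > TARGET else "on track"
    print(f"{region}: out-of-school rate {rate:.0%} ({status})")
```

The raw rows answer no question on their own; it is the classified, interpreted output that can guide action.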
The need for more data
In 2015, world leaders came together to adopt a new global agenda to guide efforts over the next fifteen years, the Sustainable Development Goals. The High-level Political Forum (HLPF), to be held this year at the United Nations on July 10-19, is an opportunity for review of the 2030 Agenda, and will include an in-depth analysis of seven of the seventeen goals—including those focused on poverty, health, and gender equality. As part of the HLPF, member states are encouraged to undergo voluntary national reviews of progress across goals to facilitate the sharing of experiences, including successes, challenges, and lessons learned; to strengthen policies and institutions; and to mobilize multi-stakeholder support and partnerships for the implementation of the agenda.
A significant challenge that countries continue to face in this process, and one that becomes painfully evident during the HLPF, is the lack of data to establish baselines and track progress. Fortunately, new initiatives aligned with the 2030 Agenda are working to focus on data, such as the Global Partnership for Sustainable Development Data. There are also initiatives focused on collecting more and better data in particular areas, like gender data (e.g., Data2X; UN Women’s Making Every Woman and Girl Count). This work is important and urgently needed.
Data to monitor global progress on the goals is critical to keeping countries accountable to their commitments, and allows countries to examine how they are doing across multiple, ambitious goals. However, equally important is the rich, granular national and sub-national data that can guide the development and implementation of evidence-based, effective programs and policies. These kinds of data are also often lacking or of poor quality, in which case more and better data are essential. But a frequently ignored piece of the puzzle at the national level is improved use of the data we already have.
Making the most of the data we have
To illustrate this point, consider the Together for Girls partnership, which was built on obtaining new data where it was lacking and effectively translating it into information to change policies and programs. We are a partnership between national governments, UN agencies and private sector organizations working to break cycles of violence, with special attention to sexual violence against girls. …The first pillar of our work is focused on understanding violence against children within a country, always at the request of the national government. We do this through a national household survey – the Violence Against Children Survey (VACS), led by national governments, CDC, and UNICEF as part of the Together for Girls Partnership….
The truth is there is a plethora of data at the country level, generated by surveys, special studies, administrative systems, the private sector, and citizens, that can provide meaningful insights across all the development goals.
Connecting the dots
But data—like our programs’—often remain in silos. For example, data focused on violence against children is typically not top of mind for those working on women’s empowerment or adolescent health. Yet the VACS can offer valuable information about how sexual violence against girls, as young as 13, is connected to adolescent pregnancy—or how one of the most common perpetrators of sexual violence against girls is a partner, a pattern that starts early and is a predictor for victimization and perpetration later in life. However, these data are not consistently used across actors working on programs related to adolescent pregnancy and violence against women….(More)”.
Open Government: Concepts and Challenges for Public Administration’s Management in the Digital Era
Tippawan Lorsuwannarat in the Journal of Public and Private Management: “This paper has four main objectives. First, to disseminate a study on the meaning and development of open government. Second, to describe the components of an open government. Third, to examine the international movement situation involved with open government. And last, to analyze the challenges related to the application of open government in Thailand’s current digital era. The paper suggests four periods of open government by linking them to the concepts of public administration in accordance with the use of information technology in the public sector. The components of open government are consistent with the meaning of open government, including open data, open access, and open engagement. The current international situation of open government considers the ranking of open government and the Open Government Partnership. The challenges of adopting open government in Thailand include a clear policy regarding open government, the digital gap, public organizational culture, and laws supporting privacy and data infrastructure….(More)”.
Research data infrastructures in the UK
The Open Research Data Task Force: “This report is intended to inform the work of the Open Research Data Task Force, which has been established with the aim of building on the principles set out in the Open Research Data Concordat (published in July 2016) to co-ordinate the creation of a roadmap to develop the infrastructure for open research data across the UK. As an initial contribution to that work, the report provides an outline of the policy and service infrastructure in the UK as it stands in the first half of 2017, including some comparisons with other countries; and it points to some key areas and issues which require attention. It does not seek to identify possible courses of action, nor even to suggest priorities the Task Force might consider in creating its final report, to be published in 2018. That will be the focus of work for the Task Force over the next few months.
Why is this important?
The digital revolution continues to bring fundamental changes to all aspects of research: how it is conducted, the findings that are produced, and how they are interrogated and transmitted not only within the research community but more widely. We are as yet still in the early stages of a transformation in which progress is patchy across the research community, but which has already posed significant challenges for research funders and institutions, as well as for researchers themselves. Research data is at the heart of those challenges: not simply the datasets that provide the core of the evidence analysed in scholarly publications, but all the data created and collected throughout the research process. Such data represents a potentially valuable resource for people and organisations in the commercial, public and voluntary sectors, as well as for researchers. Access to such data, and more general moves towards open science, are also critically important in ensuring that research is reproducible, and thus in sustaining public confidence in the work of the research community. But effective use of research data depends on an infrastructure – of hardware, software and services, but also of policies, organisations and individuals operating at various levels – that is as yet far from fully formed. The exponential increases in volumes of data being generated by researchers create in themselves new demands for storage and computing power. But since the data is characterised more by heterogeneity than by uniformity, development of the infrastructure to manage it involves a complex set of requirements for preparing, collecting, selecting, analysing, processing, storing and preserving that data throughout its life cycle.
Over the past decade and more, there have been many initiatives on the part of research institutions, funders, and members of the research community at local, national and international levels to address some of these issues. Diversity is a key feature of the landscape, in terms of institutional types and locations, funding regimes, and the nature and scope of partnerships, as well as differences between disciplines and subject areas. Hence decision-makers at various levels have fostered, via their policies and strategies, many community-organised developments, as well as their own initiatives and services. Significant progress has been achieved as a result, through the enthusiasm and commitment of key organisations and individuals. The less positive features have been a relative lack of harmonisation or consolidation, and an increasing awareness of patchiness in provision, with gaps, overlaps and inconsistencies. This is not surprising, since policies, strategies and services relating to research data necessarily affect all aspects of support for the diverse processes of research itself. Developing new policies and infrastructure for research data implies significant re-thinking of structures and regimes for supporting, fostering and promoting research itself. That in turn implies taking full account of the widely varying characteristics and needs of research of different kinds, while also keeping in clear view the benefits to be gained from better management of research data, and from greater openness in making data accessible for others to re-use for a wide range of different purposes….(More)”.
The State of Mobile Data for Social Good
UN Global Pulse: “This report outlines the value of harnessing mobile data for social good and provides an analysis of the gaps. Its aim is to survey the landscape today, assess the current barriers to scale, and make recommendations for a way forward.
The report reviews the challenges the field is currently facing and discusses a range of issues preventing mobile data from being used for social good. These challenges come from both the demand and supply sides of mobile data and from the lack of coordination among stakeholders. It continues by providing a set of recommendations intended to move beyond short-term and ad hoc projects to more systematic and institutionalized implementations that are scalable, replicable, sustainable and focused on impact.
Finally, the report proposes a roadmap for 2018 calling all stakeholders to work on developing a scalable and impactful demonstration project that will help to establish the value of mobile data for social good. The report includes examples of innovation projects and ways in which mobile data is already being used to inform development and humanitarian work. It is intended to inspire social impact organizations and mobile network operators (MNOs) to collaborate in the exploration and application of new data sources, methods and technologies….(More)”
AI and the Law: Setting the Stage
Urs Gasser: “Lawmakers and regulators need to look at AI not as a homogenous technology, but a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogenous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc. ….
Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues not only at the intersections of data and algorithms, but also of infrastructures and humans. …
When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. …
Information asymmetries and high degrees of uncertainty pose particular difficulties for the design of appropriate legal and regulatory responses to AI innovations — and require learning systems. AI-based applications — which are typically perceived as “black boxes” — affect a significant number of people, yet relatively few people develop and understand AI-based technologies. ….Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.
The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms, which are likely to vary across jurisdictions….
Law and regulation may constrain behavior yet also act as enablers and levelers — and are powerful tools as we aim for the development of AI for social good. …
Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. ….
In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. …
As AI applies to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run….(More)”.
Index: Collective Intelligence
By Hannah Pierce and Audrie Pirkl
The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on collective intelligence and was originally published in 2017.
The Collective Intelligence Universe
- Amount of money that Reykjavik’s Better Neighbourhoods program has provided each year to crowdsourced citizen projects since 2012: €2 million (Citizens Foundation)
- Number of U.S. government challenges that people are currently participating in to submit their community solutions: 778 (Challenge.gov).
- Percent of U.S. arts organizations that used social media to crowdsource ideas in 2013, from programming decisions to seminar scheduling details: 52% (Pew Research)
- Number of Wikipedia members who have contributed to a page in the last 30 days: over 120,000 (Wikipedia Page Statistics)
- Number of languages that the multinational crowdsourced Letters for Black Lives has been translated into: 23 (Letters for Black Lives)
- Number of comments in a Reddit thread that established a more comprehensive timeline of the theater shooting in Aurora than the media: 1,272 (Reddit)
- Number of physicians that are members of SERMO, a platform to crowdsource medical research: 800,000 (SERMO)
- Number of citizen scientist projects registered on SciStarter: over 1,500 (Collective Intelligence 2017 Plenary Talk: Darlene Cavalier)
- Entrants to NASA’s 2009 TopCoder Challenge: over 1,800 (NASA)
Infrastructure
- Number of submissions for Block Holm (a digital platform that allows citizens to build “Minecraft” ideas on vacant city lots) within the first six months: over 10,000 (OpenLearn)
- Number of people engaged by the Participatory Budgeting Project in the U.S.: over 300,000 (Participatory Budgeting Project)
- Amount of money allocated to community projects through this initiative: $238,000,000
Health
- Percentage of U.S. Internet-using adults with chronic health conditions that have gone online to connect with others suffering from similar conditions: 23% (Pew Research)
- Number of posts to Patient Opinion, a UK based platform for patients to provide anonymous feedback to healthcare providers: over 120,000 (Nesta)
- Percent of NHS health trusts utilizing the posts to improve services in 2015: 90%
- Stories posted per month: nearly 1,000 (The Guardian)
- Number of tumors reported to the English National Cancer Registration each year: over 300,000 (Gov.UK)
- Number of users of an open source artificial pancreas system: 310 (Collective Intelligence 2017 Plenary Talk: Dana Lewis)
Government
- Number of submissions from 40 countries to the 2017 Open (Government) Contracting Innovation Challenge: 88 (The Open Data Institute)
- Public-service complaints received each day via Indonesian digital platform Lapor!: over 500 (McKinsey & Company)
- Number of registered users of Unicef Uganda’s weekly, SMS poll U-Report: 356,468 (U-Report)
- Number of reports regarding government corruption in India submitted to IPaidaBribe since 2011: over 140,000 (IPaidaBribe)
Business
- Reviews posted since Yelp’s creation in 2004: 121 million (Statista)
- Percent of Americans in 2016 who trust online customer reviews as much as personal recommendations: 84% (BrightLocal)
- Number of companies and their subsidiaries mapped through the OpenCorporates platform: 60 million (Omidyar Network)
Crisis Response
- Number of diverse stakeholders digitally connected to solve climate change problems through the Climate CoLab: over 75,000 (MIT ILP Institute Insider)
- Number of project submissions to USAID’s 2014 Fighting Ebola Grand Challenge: over 1,500 (Fighting Ebola: A Grand Challenge for Development)
- Reports submitted to open source flood mapping platform Peta Jakarta in 2016: 5,000 (The Open Data Institute)
Public Safety
- Number of sexual harassment reports submitted from 50 cities in India and Nepal to SafeCity, a crowdsourcing site and mobile app: over 4,000 (SafeCity)
- Number of people that used Facebook’s Safety Check, a feature that is being used in a new disaster mapping project, in the first 24 hours after the terror attacks in Paris: 4.1 million (Facebook)
What Bhutanese hazelnuts tell us about using data for good
Bruno Sánchez-Andrade Nuño at WEForum: “How are we going to close the $2.5 trillion/year finance gap to achieve the Sustainable Development Goals (SDGs)? Whose money? What business model? How to scale it that much? If you read the recent development economics scholarly literature, or Jim Kim’s new financing approach at the World Bank, you might hear about the benefits of “blended finance” or “triple bottom lines.” I want to tell you instead about a real case that makes a dent. I want to tell you about Sonam.
Sonam is a 60-year-old farmer in rural Bhutan. His children left for the capital, Thimphu, like many are doing nowadays. Four years ago, he decided to plant 2 acres of hazelnuts on an unused rocky piece of his land. Hazelnut saplings, training, and regular supervision all come from “Mountain Hazelnuts”, Bhutan’s only 100% foreign-invested company. It funds the cost of the trees and helps him manage his orchard. In return, when the nuts come, he will sell his harvest to the company above the guaranteed floor price, which will double his income at a time when he will be too old to work his rice field.
You could find similar impact stories for the roughly 10,000 farmers participating in this operation across the country. Farmers are carefully selected to ensure productivity and to maximize social and environmental benefits, such as supporting vulnerable households or reducing land erosion.
But Sonam also gets a visit from Kinzang every month. This is Kinzang’s first job; otherwise, he would have moved to the city in hopes of finding a low-paying job, more likely joining the many unemployed youth from the countryside. Kinzang carefully records data on his smartphone, talks to Sonam, and digitally transmits the data back to the company HQ. There, if a problem with irrigation or pests is recorded, or any data anomaly appears, a team of experts (locally trained agronomists) will visit his orchard to figure out a solution.
The whole system of support, monitoring, and optimization lives on a carefully crafted data platform that feeds information to and from the farmers, the monitors, the agronomist experts, and local government authorities. It ensures that all 10 million trees are healthy and productive, minimizes extra costs, and tests and tracks the effectiveness of new treatments….
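The company's actual platform is proprietary, so as a purely hypothetical sketch, the anomaly-flagging step described above might look something like this (field names, thresholds, and rules are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class OrchardReport:
    """One monthly field report, as a monitor might record on a phone."""
    orchard_id: str
    trees_alive: int
    trees_expected: int
    soil_moisture_pct: float
    pest_sighting: bool

def visit_reasons(report: OrchardReport) -> list:
    """Return the reasons (if any) this report should trigger an expert visit."""
    reasons = []
    if report.pest_sighting:
        reasons.append("pest sighting reported")
    if report.trees_alive < 0.9 * report.trees_expected:
        reasons.append("tree survival below 90% of expected")
    if not 20.0 <= report.soil_moisture_pct <= 60.0:
        reasons.append("soil moisture outside assumed healthy range")
    return reasons

# Example: an unusually dry reading triggers a dispatch.
report = OrchardReport("orchard-042", trees_alive=195, trees_expected=200,
                       soil_moisture_pct=12.0, pest_sighting=False)
for reason in visit_reasons(report):
    print(f"{report.orchard_id}: dispatch agronomist ({reason})")
```

The point of such a design is that routine reports flow through untouched, and scarce expert time is spent only where the data says something is off.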
This is also a story that demonstrates why “Data is the new oil” is not the right approach. If Data is the new oil, you extract value from the data without much regard to feeding back value to the source of the data. However, in this system, “Data is the new soil.” Data creates a higher ground in which value flows back and forth. It lifts the source of the data—the farmers—into new income generation, it enables optimized operations; and it also helps the whole country: Much of the data (such as road quality used by the monitors) is made open for the benefit of the Bhutanese people, without contradiction or friction with the business model….(More)”.
A Road-Map to Transform the Secure and Accessible Use of Data for High Impact Program Management, Policy Development, and Scholarship
Preface and Roadmap by Andrew Reamer and Julia Lane: “Throughout the United States, there is broadly emerging support to significantly enhance the nation’s capacity for evidence-based policymaking. This support is shared across the public and private sectors and all levels of geography. In recent years, efforts to enable evidence-based analysis have been authorized by the U.S. Congress and funded by state and local governments and philanthropic foundations.
The potential exists for substantial change. There has been dramatic growth in technological capabilities to organize, link, and analyze massive volumes of data from multiple, disparate sources. A major resource is administrative data, which offer both advantages and challenges in comparison to data gathered through the surveys that have been the basis for much policymaking. To date, however, capability-building efforts have been largely “artisanal” in nature. As a result, the ecosystem of evidence-based policymaking capacity-building efforts is thin and weakly connected.
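As a toy illustration of the linking capability mentioned above (the records, keys, and fields are invented, not drawn from the papers), deterministic record linkage joins an administrative file to survey responses on a shared de-identified key:

```python
# Invented example records; real linkage operates at far larger scale and
# must also handle probabilistic matching, consent, and privacy rules.
admin_records = {
    "h-001": {"benefit_enrolled": True,  "annual_wages": 18_500},
    "h-002": {"benefit_enrolled": False, "annual_wages": 52_000},
}
survey_records = [
    {"key": "h-001", "self_reported_employed": True},
    {"key": "h-003", "self_reported_employed": False},  # no admin match
]

linked, unmatched = [], []
for row in survey_records:
    admin = admin_records.get(row["key"])
    if admin is None:
        unmatched.append(row["key"])  # must be reported, not dropped silently
    else:
        linked.append({**row, **admin})

print(f"linked {len(linked)} record(s); unmatched keys: {unmatched}")
```

Even this toy case shows why infrastructure matters: every unmatched key is a decision about coverage and bias that artisanal, one-off projects tend to resolve inconsistently.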
Each attempt to add a node to the system faces multiple barriers that require substantial time, effort, and luck to address. Those barriers are systemic. Too much attention is paid to the interests of researchers, rather than to the engagement of data producers. Individual projects serve focused needs and operate at a relative distance from one another. Researchers, policymakers, and funding agencies thus need to move from these artisanal efforts to new, generalized solutions that will catalyze the creation of a robust, large-scale data infrastructure for evidence-based policymaking.
This infrastructure will have to be a “complex, adaptive ecosystem” that expands, regenerates, and replicates as needed while allowing customization and local control. To create a path for achieving this goal, the U.S. Partnership on Mobility from Poverty commissioned 12 papers and then hosted a day-long gathering (January 23, 2017) of over 60 experts to discuss findings and implications for action. Funded by the Gates Foundation, the papers and workshop panels were organized around three topics: privacy and confidentiality, data providers, and comprehensive strategies.
This issue of the Annals showcases those 12 papers, which jointly propose solutions for catalyzing the development of a data infrastructure for evidence-based policymaking.
This preface:
- places current evidence-based policymaking efforts in historical context,
- briefly describes the nature of multiple current efforts,
- provides a conceptual framework for catalyzing the growth of any large institutional ecosystem,
- identifies the major dimensions of the data infrastructure ecosystem,
- describes key barriers to the expansion of that ecosystem, and
- suggests a roadmap for catalyzing that expansion….(More)
(All 12 papers can be accessed here).