Searching for the Smart City’s Democratic Future


Article by Bianca Wylie at the Center for International Governance Innovation: “There is a striking blue building on Toronto’s eastern waterfront. Wrapped top to bottom in bright, beautiful artwork by Montreal illustrator Cecile Gariepy, the building — a former fish-processing plant — stands out alongside the neighbouring parking lots and a congested highway. It’s been given a second life as an office for Sidewalk Labs — a sister company to Google that is proposing a smart city development in Toronto. Perhaps ironically, the office is like the smart city itself: something old repackaged to be light, fresh and novel.

“Our mission is really to use technology to redefine urban life in the twenty-first century.”

Dan Doctoroff, CEO of Sidewalk Labs, shared this mission in an interview with Freakonomics Radio. The phrase is a variant of the marketing language used by the smart city industry at large. Put more simply, the term “smart city” is usually used to describe the use of technology and data in cities.

No matter the words chosen to describe it, the smart city model has a flaw at its core: corporations are seeking to exert influence on urban spaces and democratic governance. And because most governments don’t have the policy in place to regulate smart city development — in particular, projects driven by the fast-paced technology sector — this presents a growing global governance concern.

This is where the story usually descends into warnings of smart city dystopia or failure. Loads of recent articles have detailed the science fiction-style city-of-the-future and speculated about the perils of mass data collection, and for good reason — these are important concepts that warrant discussion. It’s time, however, to push past dystopian narratives and explore solutions for the challenges that smart cities present in Toronto and globally…(More)”.

Data Publics: Urban Protest, Analytics and the Courts


Article by Anthony McCosker and Timothy Graham in MC Journal: “There are many examples globally of the use of social media to engage publics in battles over urban development or similar issues (e.g. Fredericks and Foth). Some have asked how social media might be better used by neighbourhood organisations to mobilise protest and save historic buildings, cultural landmarks or urban sites (Johnson and Halegoua). And we can only note here the wealth of research literature on social movements, protest and social media. To emphasise Gerbaudo’s point, drawing on Mattoni, we “need to account for how exactly the use of these media reshapes the ‘repertoire of communication’ of contemporary movements and affects the experience of participants” (2). For us, this also means better understanding the role that social data plays in both aiding and reshaping urban protest and in arming third sector groups with evidence useful in social institutions such as the courts.

New modes of digital engagement enable forms of distributed digital citizenship, which Meikle sees as the creative political relationships that form through exercising rights and responsibilities. Associated with these practices is the transition from sanctioned, simple discursive forms of social protest in petitions, to new indicators of social engagement in more nuanced social media data and the more interactive forms of online petition platforms like change.org or GetUp (Halpin et al.). These technical forms code publics in specific ways that have implications for contemporary protest action. That is, they provide the operational systems and instructions that shape social actions and relationships for protest purposes (McCosker and Milne).

All protest and social movements are underwritten by explicit or implicit concepts of participatory publics as these are shaped, enhanced, or threatened by communication technologies. But participatory protest publics are uneven, and as Kelty asks: “What about all the people who are neither protesters nor Twitter users? In the broadest possible sense this ‘General Public’ cannot be said to exist as an actual entity, but only as a kind of virtual entity” (27). Kelty is pointing to the porous boundary between a general public and an organised public, or formal enterprise, as a reminder that we cannot take for granted representations of a public, or the public as a given, in relation to Like or follower data for instance.

If carefully gauged, the concept of data publics can be useful. To start with, the notions of publics and publicness are notoriously slippery. Baym and boyd explore the differences between these two terms, and the way social media reconfigures what “public” is. Does a Comment or a Like on a Facebook Page connect an individual sufficiently to an issues-public? As far back as the 1930s, John Dewey was seeking a pragmatic approach to similar questions regarding human association and the pluralistic space of “the public”. For Dewey, “the machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences [of human association] that the resultant public cannot identify itself” (157). To what extent, then, can we use data to constitute a public in relation to social protest in the age of data analytics?

There are numerous well-formulated approaches to studying publics in relation to social media and social networks. Social network analysis (SNA) determines publics, or communities, through links, ties and clustering, by measuring and mapping those connections and to an extent assuming that they constitute some form of sociality. Networked publics (Ito, 6) are understood as an outcome of social media platforms and practices in the use of new digital media authoring and distribution tools or platforms and the particular actions, relationships or modes of communication they afford, to use James Gibson’s sense of that term. “Publics can be reactors, (re)makers and (re)distributors, engaging in shared culture and knowledge through discourse and social exchange as well as through acts of media reception” (Ito 6). Hashtags, for example, facilitate connectivity and visibility and aid in the formation and “coordination of ad hoc issue publics” (Bruns and Burgess 3). Gray et al., following Ruppert, argue that “data publics are constituted by dynamic, heterogeneous arrangements of actors mobilised around data infrastructures, sometimes figuring as part of them, sometimes emerging as their effect”. The individuals of data publics are neither subjugated by the logics and metrics of digital platforms and data structures, nor simply sovereign agents empowered by the expressive potential of aggregated data (Gray et al.).
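The SNA logic sketched here — delineating candidate publics from ties — can be illustrated minimally. The code below is a toy stand-in: it treats a “public” as a connected component of an interaction graph, whereas real analyses would weight ties and apply community-detection algorithms; the interaction data is invented.

```python
from collections import defaultdict

def connected_publics(edges):
    """Group accounts into candidate 'publics' by their interaction ties.

    A toy stand-in for social network analysis: a 'public' here is simply
    a connected component of the undirected interaction graph.
    """
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, publics = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        publics.append(component)
    return publics

# Hypothetical interaction data: two disjoint clusters of accounts
edges = [("a", "b"), ("b", "c"), ("x", "y")]
print(connected_publics(edges))  # two publics: {a, b, c} and {x, y}
```

Even this crude grouping makes Kelty’s caveat concrete: the components recovered are artefacts of which interactions were logged, not a given “general public”.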

Data publics are more than just aggregates of individual data points or connections. They are inherently unstable, dynamic (despite static analysis and visualisations), or vibrant, and ephemeral. We emphasise three key elements of active data publics. First, to be more than an aggregate of individual items, a data public needs to be consequential (in Dewey’s sense of issues or problem-oriented). Second, sufficient connection is visible over time. Third, affective or emotional activity is apparent in relation to events that lend coherence to the public and its prevailing sentiment. To these, we add critical attention to the affordising processes – or the deliberate and incidental effects of datafication and analysis, in the capacities for data collection and processing in order to produce particular analytical outcomes, and the data literacies these require. We return to the latter after elaborating on the Save the Palace case….(More)”.

To the smart city and beyond? Developing a typology of smart urban innovation


Maja Nilssen in Technological Forecasting and Social Change: “The smart city is an increasingly popular topic in urban development, arousing both excitement and skepticism. However, despite increasing enthusiasm regarding the smartness of cities, the concept is still regarded as somewhat evasive. Encouraged by the multifaceted character of the concept, this article examines how we can categorize the different dimensions often included in the smart city concept, and how these dimensions are coupled to innovation. Furthermore, the article examines the implications of the different understandings of the smart city concept for cities’ abilities to be innovative.

Building on existing scholarly contributions on the smartness of cities and on the innovation literature, the article develops a typology of smart city initiatives based on the extent and types of innovation they involve. The typology is structured as a smart city continuum comprising four dimensions of innovation: (1) technological, (2) organizational, (3) collaborative, and (4) experimental.

The smart city continuum is then utilized to analyze empirical data from a Norwegian urban development project triggered by a critical juncture. The empirical data shows that the case holds elements of different dimensions of the continuum, supporting the need for a typology of smart cities as multifaceted urban innovation. The continuum can be used as an analytical model for different types of smart city initiatives, and thus shed light on what types of innovation are central in the smart city. Consequently, the article offers useful insights for both practitioners and scholars interested in smart city initiatives….(More)”

How Taiwan’s online democracy may show future of humans and machines


Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years prototyping future democracy, since the lifting of martial law in 1987. Public participation in Taiwan has developed through several formats, from face-to-face meetings to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.

The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for the civil society, showed that by using technology creatively humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.

It is a prototype that helps us envision what future democracy could look like….

Decision-making is not an easy task, especially when it involves a larger group of people. Group decision-making can follow several protocols: mandate (decide, then take questions); advise (listen to input before deciding); consent (decide if no one objects); and consensus (decide only if everyone agrees). So there is a pressing need for us to be able to collaborate in large-scale decision-making processes to update outdated standards and regulations.
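The difference between the last two protocols — consent versus consensus — is easy to make concrete. A minimal sketch, where the vote labels ("agree", "abstain", "object") are assumptions for illustration:

```python
def consent(votes):
    """Consent protocol: the proposal passes unless someone explicitly
    objects; abstentions do not block."""
    return all(v != "object" for v in votes)

def consensus(votes):
    """Consensus protocol: the proposal passes only if every participant
    explicitly agrees."""
    return all(v == "agree" for v in votes)

votes = ["agree", "abstain", "agree"]
consent(votes)    # True: nobody objected
consensus(votes)  # False: one participant did not agree
```

The gap between the two rules widens with group size, which is one reason consent-based protocols scale better for the large-group decision-making described above.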

The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions, faster and at a larger scale. The internet could be the facilitator and AI the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction for decision-making technology is to let humans engage in the process at any time, with a standing invitation to request and submit changes.

Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.

Where shall we go as a society? What do we want from technology? As Audrey Tang, Digital Minister without Portfolio of Taiwan, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.

Introducing the (World’s First) Ethical Operating System


Article by Paula Goldman and Raina Kumra: “Is it possible for tech developers to anticipate future risks? Or are these future risks so unknowable to us here in the present that, try as we might to make our tech safe, continued exposure to risks is simply the cost of engagement?

Today, in collaboration with Institute for the Future (IFTF), a leading non-profit strategic futures organization, Omidyar Network is excited to introduce the Ethical Operating System (or Ethical OS for short), a toolkit for helping developers and designers anticipate the future impact of technologies they’re working on today. We designed the Ethical OS to facilitate better product development, faster deployment, and more impactful innovation — all while striving to minimize technical and reputational risks. The hope is that, with the Ethical OS in hand, technologists can begin to build responsibility into core business and product decisions, and contribute to a thriving tech industry.

The Ethical OS is already being piloted by nearly 20 tech companies, schools, and startups, including Mozilla and Techstars. We believe it can better equip technologists to grapple with three of the most pressing issues facing our community today:

    • If the technology you’re building right now will someday be used in unexpected ways, how can you hope to be prepared?

    • What new categories of risk should you pay special attention to right now?

    • Which design, team, or business model choices can actively safeguard users, communities, society, and your company from future risk?

As large sections of the public grow weary of a seemingly constant stream of data safety and security issues, and with growing calls for heightened government intervention and oversight, the time is now for the tech community to get this right.

We created the Ethical OS as a pilot to help make ethical thinking and future risk mitigation integral components of all design and development processes. It’s not going to be easy. The industry has far more work to do, both inside individual companies and collectively. But with our toolkit as a guide, developers will have a practical means of beginning to ensure their tech is as good as their intentions…(More)”.

Mapping the Privacy-Utility Tradeoff in Mobile Phone Data for Development


Paper by Alejandro Noriega-Campero, Alex Rutherford, Oren Lederman, Yves A. de Montjoye, and Alex Pentland: “Today’s age of data holds high potential to enhance the way we pursue and monitor progress in the fields of development and humanitarian action. We study the relation between data utility and privacy risk in large-scale behavioral data, focusing on mobile phone metadata as a paradigmatic domain. To measure utility, we survey experts about the value of mobile phone metadata at various spatial and temporal granularity levels. To measure privacy, we propose a formal and intuitive measure of reidentification risk — the information ratio — and compute it at each granularity level. Our results confirm the existence of a stark tradeoff between data utility and reidentifiability, where the most valuable datasets are also most prone to reidentification. When data is specified at ZIP-code and hourly levels, outside knowledge of only 7% of a person’s data suffices for reidentification and retrieval of the remaining 93%. In contrast, in the least valuable dataset, specified at municipality and daily levels, reidentification requires on average outside knowledge of 51%, or 31 data points, of a person’s data to retrieve the remaining 49%. Overall, our findings show that coarsening data directly erodes its value, and highlight the need for using data-coarsening, not as a stand-alone mechanism, but in combination with data-sharing models that provide adjustable degrees of accountability and security….(More)”.
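The general idea behind such reidentification results — that a handful of a person’s records often single them out in a dataset — can be illustrated with a toy sketch. This is an illustration of the unicity concept, not the authors’ information-ratio metric, and the traces below are invented:

```python
import random

def unique_given_points(traces, n_points, seed=0):
    """Fraction of users re-identified from n_points of their own records.

    Toy illustration: a user counts as 're-identified' if no other user's
    trace also contains the n_points sampled records. Coarser traces
    (e.g. municipality/day instead of ZIP/hour) collide more often, so
    this fraction drops as data is coarsened.
    """
    rng = random.Random(seed)
    hits = 0
    for user, trace in traces.items():
        known = set(rng.sample(sorted(trace), min(n_points, len(trace))))
        matches = [u for u, t in traces.items() if known <= t]
        hits += matches == [user]
    return hits / len(traces)

# Hypothetical (cell, hour) records at a fine spatial/temporal granularity
traces = {
    "u1": {("zip_a", 8), ("zip_b", 12), ("zip_c", 18)},
    "u2": {("zip_a", 8), ("zip_b", 13), ("zip_d", 19)},
    "u3": {("zip_a", 9), ("zip_b", 12), ("zip_c", 18)},
}
print(unique_given_points(traces, 2))
```

Re-running the sketch on traces coarsened to (municipality, day) keys would show the same mechanism the paper quantifies: coarsening lowers reidentifiability, at the cost of the detail that made the data valuable.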

Buzzwords and tortuous impact studies won’t fix a broken aid system


The Guardian: “Fifteen leading economists, including three Nobel winners, argue that the many billions of dollars spent on aid can do little to alleviate poverty while we fail to tackle its root causes….Donors increasingly want to see more impact for their money, practitioners are searching for ways to make their projects more effective, and politicians want more financial accountability behind aid budgets. One popular option has been to audit projects for results. The argument is that assessing “aid effectiveness” – a buzzword now ubiquitous in the UK’s Department for International Development – will help decide what to focus on.

Some go so far as to insist that development interventions should be subjected to the same kind of randomised control trials used in medicine, with “treatment” groups assessed against control groups. Such trials are being rolled out to evaluate the impact of a wide variety of projects – everything from water purification tablets to microcredit schemes, financial literacy classes to teachers’ performance bonuses.
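In its simplest form, such a trial compares outcomes between the treatment and control groups, and a permutation test is one standard way to judge whether the observed difference could be chance. A minimal sketch, with made-up outcome numbers (e.g. test scores from a hypothetical teachers’-bonus trial):

```python
import random

def permutation_test(treated, control, n_perm=10_000, seed=0):
    """Two-sample permutation test for a treatment effect.

    Returns the observed difference in group means and a two-sided
    p-value: the share of random relabelings of the pooled outcomes
    whose difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = treated + control
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        t, c = pooled[: len(treated)], pooled[len(treated):]
        diff = sum(t) / len(t) - sum(c) / len(c)
        extreme += abs(diff) >= abs(observed)
    return observed, extreme / n_perm

# Illustrative outcomes only; real trials need far larger samples
treated = [72, 75, 78, 74, 80, 77]
control = [70, 69, 73, 71, 68, 72]
effect, p = permutation_test(treated, control)
```

The statistical machinery is simple; as the article goes on to argue, the hard part is everything the arithmetic cannot capture — randomizing fairly, blinding, and generalizing beyond the trial site.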

Economist Esther Duflo at MIT’s Poverty Action Lab recently argued in Le Monde that France should adopt clinical trials as a guiding principle for its aid budget, which has grown significantly under the Macron administration.

But truly random sampling with blinded subjects is almost impossible in human communities without creating scenarios so abstract as to tell us little about the real world. And trials are expensive to carry out, and fraught with ethical challenges – especially when it comes to health-related interventions. (Who gets the treatment and who doesn’t?)

But the real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change….(More)”.

E-Participation in Smart Cities: Technologies and Models of Governance for Citizen Engagement


Book by Manuel Pedro Rodríguez Bolívar and Laura Alcaide Muñoz: “This book analyzes e-participation in smart cities. In recent decades, information and communication technologies (ICT) have played a key role in the democratic political and governance process by allowing easier interaction between governments and citizens, and by increasing citizens’ ability to participate in the production chain of public services. E-participation plays an important role in the development of smart cities and smart communities, but it has not yet been extensively studied. This book fills that gap by combining empirical and theoretical research to analyze actual practices of citizen involvement in smart cities and to build a solid framework for successful e-participation in smart cities.

The book is divided into three parts. Part I discusses smart technologies and their role in improving e-participation in smart cities. Part II deals with models of e-participation in smart cities and the organizational issues affecting the implementation of e-participation; these chapters analyze the efficiency of governance models in relation to the establishment of smart cities. Part III proposes incentives to motivate increased participation by governments and citizenry within the smart cities context. Written by an international panel of experts and practitioners, this book will be a convenient source of information on e-participation in smart cities and will be valuable to academics, researchers, policy-makers, public managers, citizens, international organizations and anyone who has a stake in enhancing citizen engagement in smart cities….(More)”.

Satellites can advance sustainable development by highlighting poverty


Cordis: “Estimating poverty is crucial for improving policymaking and advancing the sustainability of a society. Traditional poverty estimation methods such as household surveys and census data incur huge costs, however, creating a need for more efficient approaches.

With this in mind, the EU-funded USES project examined how satellite images could be used to estimate household-level poverty in rural regions of developing countries. “This promises to be a radically more cost-effective way of monitoring and evaluating the Sustainable Development Goals,” says Dr Gary Watmough, USES collaborator and Interdisciplinary Lecturer in Land Use and Socioecological Systems at the University of Edinburgh, United Kingdom.

Land use and land cover reveal poverty clues

To achieve its aims, the project investigated how land use and land cover information from satellite data could be linked with household survey data. “We looked particularly at how households use the landscape in the local area for agriculture and other purposes such as collecting firewood and using open areas for grazing cattle,” explains Dr Watmough.

The work also involved examining satellite images to determine which types of land use were related to household wealth or poverty using statistical analysis. “By trying to predict household poverty using the land use data we could see which land use variables were most related to the household wealth in the area,” adds Dr Watmough.

Overall, the USES project found that satellite data could predict poverty, particularly for the poorest households in the area. Dr Watmough comments: “This is quite remarkable given that we are trying to predict complicated household-level poverty from a simple land use map derived from high-resolution satellite data.”

A study conducted by USES in Kenya found that the most important remotely sensed variable was building size within the homestead. Buildings smaller than 140 m2 were mostly associated with poorer households, whereas those over 140 m2 tended to belong to wealthier ones. The amount of bare ground in agricultural fields and within the homestead region was also important. “We also found that poorer households were associated with fewer agricultural growing days,” says Dr Watmough….(More)”.
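The kind of decision rule such findings suggest can be sketched as follows. The 140 m2 building-size threshold comes from the study as reported; the weights and the bare-ground and growing-day cut-offs are purely illustrative, not the authors’ fitted model:

```python
def wealth_score(building_m2, bare_ground_frac, growing_days):
    """Toy scoring rule echoing the reported Kenya findings.

    The 140 m2 threshold is from the study; the 0.5 bare-ground and
    180-day cut-offs, and the equal +/-1 weights, are illustrative only.
    """
    score = 0
    score += 1 if building_m2 > 140 else -1       # larger homestead buildings
    score += -1 if bare_ground_frac > 0.5 else 1  # more bare ground -> poorer
    score += 1 if growing_days > 180 else -1      # longer growing season
    return score  # > 0 suggests a wealthier household

wealth_score(90, 0.6, 150)   # -3: all three indicators point to poverty
wealth_score(200, 0.2, 220)  # +3: all three indicators point to wealth
```

A real pipeline would fit such weights statistically against household survey data, which is exactly the linkage the USES project describes.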

What’s Wrong with Public Policy Education


Francis Fukuyama at the American Interest: “Most programs train students to become capable policy analysts, but with no understanding of how to implement those policies in the real world…Public policy education is ripe for an overhaul…

Public policy education in most American universities today reflects a broader problem in the social sciences, which is the dominance of economics. Most programs center on teaching students a battery of quantitative methods that are useful in policy analysis: applied econometrics, cost-benefit analysis, decision analysis, and, most recently, use of randomized experiments for program evaluation. Many schools build their curricula around these methods rather than the substantive areas of policy such as health, education, defense, criminal justice, or foreign policy. Students come out of these programs qualified to be policy analysts: They know how to gather data, analyze it rigorously, and evaluate the effectiveness of different public policy interventions. Historically, this approach started with the Rand Graduate School in the 1970s (which has subsequently undergone a major re-thinking of its approach).

There is no question that these skills are valuable and should be part of a public policy education. The world has undergone a revolution in recent decades in terms of the role of evidence-based policy analysis, where policymakers can rely not just on anecdotes and seat-of-the-pants assessments, but on statistically valid inferences that intervention X is likely to result in outcome Y, or that the millions of dollars spent on policy Z have actually had no measurable impact. Evidence-based policymaking is particularly necessary in the age of Donald Trump, amid the broad denigration of inconvenient facts that do not suit politicians’ prior preferences.

But being skilled in policy analysis is woefully inadequate to bring about policy change in the real world. Policy analysis will tell you what the optimal policy should be, but it does not tell you how to achieve that outcome.

The world is littered with optimal policies that don’t have a snowball’s chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies to truckers and cabbies and ordinary drivers who do not want to pay more for the gas they use to commute to work, or as inputs to their industrial processes. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

It is entirely possible that an analysis of the implementation strategy, rather than analysis of the underlying policy, will tell you that the goal is unachievable absent an external shock, which might then mean changing the scope of the policy, rethinking its objectives, or even deciding that you are pursuing the wrong objective.

Public policy education that sought to produce change-makers rather than policy analysts would therefore have to be different. It would continue to teach policy analysis, but the latter would be a small component embedded in a broader set of skills.

The first set of skills would involve problem definition. A change-maker needs to query stakeholders about what they see as the policy problem, understand the local history, culture, and political system, and define a problem that is sufficiently narrow in scope that it can plausibly be solved.

At times reformers start with a favored solution without defining the right problem. A student I know spent a summer working at an NGO in India advocating use of electric cars in the interest of carbon abatement. It turns out, however, that India’s reliance on coal for marginal electricity generation means that more carbon would be put in the air if the country were to switch to electric vehicles, not less, so the group was actually contributing to the problem they were trying to solve….

The second set of skills concerns solutions development. This is where traditional policy analysis comes in: It is important to generate data, come up with a theory of change, and posit plausible options by which reformers can solve the problem they have set for themselves. This is where some ideas from product design, like rapid prototyping and testing, may be relevant.

The third and perhaps most important set of skills has to do with implementation. This begins necessarily with stakeholder analysis: that is, mapping of actors who are concerned with the particular policy problem, either as supporters of a solution, or opponents who want to maintain the status quo. From an analysis of the power and interests of the different stakeholders, one can begin to build coalitions of proponents, and think about strategies for expanding the coalition and neutralizing those who are opposed.  A reformer needs to think about where resources can be obtained, and, very critically, how to communicate one’s goals to the stakeholder audiences involved. Finally comes testing and evaluation—in the expectation that there will be a continuous and rapid iterative process by which solutions are tried, evaluated, and modified. Randomized experiments have become the gold standard for program evaluation in recent years, but their cost and length of time to completion are often the enemies of rapid iteration and experimentation….(More) (see also http://canvas.govlabacademy.org/).