Searching for the Smart City’s Democratic Future


Article by Bianca Wylie at the Center for International Governance Innovation: “There is a striking blue building on Toronto’s eastern waterfront. Wrapped top to bottom in bright, beautiful artwork by Montreal illustrator Cecile Gariepy, the building — a former fish-processing plant — stands out alongside the neighbouring parking lots and a congested highway. It’s been given a second life as an office for Sidewalk Labs — a sister company to Google that is proposing a smart city development in Toronto. Perhaps ironically, the office is like the smart city itself: something old repackaged to be light, fresh and novel.

“Our mission is really to use technology to redefine urban life in the twenty-first century.”

Dan Doctoroff, CEO of Sidewalk Labs, shared this mission in an interview with Freakonomics Radio. The phrase is a variant of the marketing language used by the smart city industry at large. Put more simply, the term “smart city” is usually used to describe the use of technology and data in cities.

No matter the words chosen to describe it, the smart city model has a flaw at its core: corporations are seeking to exert influence on urban spaces and democratic governance. And because most governments don’t have the policy in place to regulate smart city development — in particular, projects driven by the fast-paced technology sector — this presents a growing global governance concern.

This is where the story usually descends into warnings of smart city dystopia or failure. Loads of recent articles have detailed the science fiction-style city-of-the-future and speculated about the perils of mass data collection, and for good reason — these are important concepts that warrant discussion. It’s time, however, to push past dystopian narratives and explore solutions for the challenges that smart cities present in Toronto and globally…(More)”.

Data-Driven Law: Data Analytics and the New Legal Services


Book by Edward J. Walters: “For increasingly data-savvy clients, lawyers can no longer give “it depends” answers rooted in anecdata. Clients insist that their lawyers justify their reasoning, and with more than a limited set of war stories. The considered judgment of an experienced lawyer is unquestionably valuable. However, on balance, clients would rather have the considered judgment of an experienced lawyer informed by the most relevant information required to answer their questions.

Data-Driven Law: Data Analytics and the New Legal Services helps legal professionals meet the challenges posed by a data-driven approach to delivering legal services. Its chapters are written by leading experts who cover such topics as:

  • Mining legal data
  • Computational law
  • Uncovering bias through the use of Big Data
  • Quantifying the quality of legal services
  • Data mining and decision-making
  • Contract analytics and contract standards

In addition to providing clients with data-based insight, legal firms can track a matter with data from beginning to end, from the marketing spend through to the type of matter, hours spent, billed, and collected, including metrics on profitability and success. Firms can organize and collect documents after a matter and even automate them for reuse. Data on marketing related to a matter can be an amazing source of insight about which practice areas are most profitable….(More)”.
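To make the book's point concrete, here is a minimal sketch of the kind of matter-level metrics such tracking makes possible. The records, field names and figures below are invented for illustration; a real firm would pull them from its practice-management and billing systems.

```python
# Toy matter records; values are illustrative only.
matters = [
    {"matter": "M-1001", "practice_area": "IP",
     "hours": 120, "billed": 54000, "collected": 51000, "cost": 30000},
    {"matter": "M-1002", "practice_area": "Employment",
     "hours": 80, "billed": 28000, "collected": 21000, "cost": 18000},
]

def matter_metrics(m):
    """Compute simple realization and profitability metrics for one matter."""
    realization = m["collected"] / m["billed"] if m["billed"] else 0.0
    profit = m["collected"] - m["cost"]
    margin = profit / m["collected"] if m["collected"] else 0.0
    effective_rate = m["collected"] / m["hours"] if m["hours"] else 0.0
    return {
        "matter": m["matter"],
        "practice_area": m["practice_area"],
        "realization": round(realization, 2),        # share of billed fees collected
        "margin": round(margin, 2),                  # profit as a share of collections
        "effective_rate": round(effective_rate, 2),  # dollars collected per hour worked
    }

for m in matters:
    print(matter_metrics(m))
```

Aggregated by practice area, metrics of this kind are what the book has in mind when it points to identifying the most profitable areas of practice.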

Data Publics: Urban Protest, Analytics and the Courts


Article by Anthony McCosker and Timothy Graham in M/C Journal: “There are many examples globally of the use of social media to engage publics in battles over urban development or similar issues (e.g. Fredericks and Foth). Some have asked how social media might be better used by neighbourhood organisations to mobilise protest and save historic buildings, cultural landmarks or urban sites (Johnson and Halegoua). And we can only note here the wealth of research literature on social movements, protest and social media. To emphasise Gerbaudo’s point, drawing on Mattoni, we “need to account for how exactly the use of these media reshapes the ‘repertoire of communication’ of contemporary movements and affects the experience of participants” (2). For us, this also means better understanding the role that social data plays in both aiding and reshaping urban protest or arming third sector groups with evidence useful in social institutions such as the courts.

New modes of digital engagement enable forms of distributed digital citizenship, which Meikle sees as the creative political relationships that form through exercising rights and responsibilities. Associated with these practices is the transition from sanctioned, simple discursive forms of social protest in petitions, to new indicators of social engagement in more nuanced social media data and the more interactive forms of online petition platforms like change.org or GetUp (Halpin et al.). These technical forms code publics in specific ways that have implications for contemporary protest action. That is, they provide the operational systems and instructions that shape social actions and relationships for protest purposes (McCosker and Milne).

All protest and social movements are underwritten by explicit or implicit concepts of participatory publics as these are shaped, enhanced, or threatened by communication technologies. But participatory protest publics are uneven, and as Kelty asks: “What about all the people who are neither protesters nor Twitter users? In the broadest possible sense this ‘General Public’ cannot be said to exist as an actual entity, but only as a kind of virtual entity” (27). Kelty is pointing to the porous boundary between a general public and an organised public, or formal enterprise, as a reminder that we cannot take for granted representations of a public, or the public as a given, in relation to Like or follower data for instance.

If carefully gauged, the concept of data publics can be useful. To start with, the notions of publics and publicness are notoriously slippery. Baym and boyd explore the differences between these two terms, and the way social media reconfigures what “public” is. Does a Comment or a Like on a Facebook Page connect an individual sufficiently to an issues-public? As far back as the 1930s, John Dewey was seeking a pragmatic approach to similar questions regarding human association and the pluralistic space of “the public”. For Dewey, “the machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences [of human association] that the resultant public cannot identify itself” (157). To what extent, then, can we use data to constitute a public in relation to social protest in the age of data analytics?

There are numerous well formulated approaches to studying publics in relation to social media and social networks. Social network analysis (SNA) determines publics, or communities, through links, ties and clustering, by measuring and mapping those connections and to an extent assuming that they constitute some form of sociality. Networked publics (Ito, 6) are understood as an outcome of social media platforms and practices in the use of new digital media authoring and distribution tools or platforms and the particular actions, relationships or modes of communication they afford, to use James Gibson’s sense of that term. “Publics can be reactors, (re)makers and (re)distributors, engaging in shared culture and knowledge through discourse and social exchange as well as through acts of media reception” (Ito 6). Hashtags, for example, facilitate connectivity and visibility and aid in the formation and “coordination of ad hoc issue publics” (Bruns and Burgess 3). Gray et al., following Ruppert, argue that “data publics are constituted by dynamic, heterogeneous arrangements of actors mobilised around data infrastructures, sometimes figuring as part of them, sometimes emerging as their effect”. The individuals of data publics are neither subjugated by the logics and metrics of digital platforms and data structures, nor simply sovereign agents empowered by the expressive potential of aggregated data (Gray et al.).
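As a rough illustration of the SNA approach sketched above, the snippet below builds a graph from user-to-user interactions and detects clusters that an analyst might treat, cautiously, as candidate publics. It uses the networkx library; the interaction pairs are invented, and greedy modularity clustering is one common choice of method rather than the one used in the article.

```python
# A minimal sketch, assuming interaction data (e.g. replies or shares)
# has already been extracted as (source_user, target_user) pairs.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [
    ("ana", "ben"), ("ana", "cai"), ("ben", "cai"),  # one dense cluster
    ("dee", "eli"), ("eli", "fay"), ("dee", "fay"),  # another cluster
    ("cai", "dee"),                                  # a weak tie between them
]

G = nx.Graph()
G.add_edges_from(interactions)

# Modularity-based clustering: groups of accounts that interact more with
# each other than with the rest of the network.
for i, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Candidate public {i}: {sorted(community)}")

# Degree centrality gives a partial view of who holds a cluster together.
print(nx.degree_centrality(G))
```

As the passage above cautions, such clusters record connection, not necessarily the shared concern or sociality the analyst is after.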

Data publics are more than just aggregates of individual data points or connections. They are inherently unstable, dynamic (despite static analysis and visualisations), or vibrant, and ephemeral. We emphasise three key elements of active data publics. First, to be more than an aggregate of individual items, a data public needs to be consequential (in Dewey’s sense of issues or problem-oriented). Second, sufficient connection is visible over time. Third, affective or emotional activity is apparent in relation to events that lend coherence to the public and its prevailing sentiment. To these, we add critical attention to the affordising processes – or the deliberate and incidental effects of datafication and analysis, in the capacities for data collection and processing in order to produce particular analytical outcomes, and the data literacies these require. We return to the latter after elaborating on the Save the Palace case….(More)”.
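To show how these three elements might be operationalised against platform data, here is a deliberately simplified sketch. The thresholds, field names and sentiment scores are assumptions for illustration, not the authors' method.

```python
# Hypothetical check of the three elements of an "active" data public:
# (1) consequential (oriented to a named issue), (2) connection visible over
# time, (3) affective activity apparent. All thresholds are placeholders.
from datetime import date

posts = [
    {"user": "ana", "day": date(2018, 7, 1),  "issue": "save-the-palace", "sentiment": -0.6},
    {"user": "ben", "day": date(2018, 7, 3),  "issue": "save-the-palace", "sentiment": -0.8},
    {"user": "cai", "day": date(2018, 7, 20), "issue": "save-the-palace", "sentiment": 0.4},
]

def looks_like_active_public(posts, issue, min_users=3, min_span_days=14,
                             min_mean_intensity=0.3):
    relevant = [p for p in posts if p["issue"] == issue]
    if not relevant:
        return False
    users = {p["user"] for p in relevant}
    span_days = (max(p["day"] for p in relevant) - min(p["day"] for p in relevant)).days
    mean_intensity = sum(abs(p["sentiment"]) for p in relevant) / len(relevant)
    consequential = True  # here, trivially: every post names the shared issue
    connected_over_time = len(users) >= min_users and span_days >= min_span_days
    affective = mean_intensity >= min_mean_intensity
    return consequential and connected_over_time and affective

print(looks_like_active_public(posts, "save-the-palace"))  # True in this toy case
```

Any real analysis would be far less tidy; the point is only that each element implies something measurable, which is exactly where the affordising effects of data collection and processing come into play.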

Countries Can Learn from France’s Plan for Public Interest Data and AI


Nick Wallace at the Center for Data Innovation: “French President Emmanuel Macron recently endorsed a national AI strategy that includes plans for the French state to make public and private sector datasets available for reuse by others in applications of artificial intelligence (AI) that serve the public interest, such as for healthcare or environmental protection. Although this strategy fails to set out how the French government should promote widespread use of AI throughout the economy, it will nevertheless give a boost to AI in some areas, particularly public services. Furthermore, the plan for promoting the wider reuse of datasets, particularly in areas where the government already calls most of the shots, is a practical idea that other countries should consider as they develop their own comprehensive AI strategies.

The French strategy, drafted by mathematician and Member of Parliament Cédric Villani, calls for legislation to mandate repurposing both public and private sector data, including personal data, to enable public-interest uses of AI by government or others, depending on the sensitivity of the data. For example, public health services could use data generated by Internet of Things (IoT) devices to help doctors better treat and diagnose patients. Researchers could use data captured by motorway CCTV to train driverless cars. Energy distributors could manage peaks and troughs in demand using data from smart meters.

Repurposed data held by private companies could be made publicly available, shared with other companies, or processed securely by the public sector, depending on the extent to which sharing the data presents privacy risks or undermines competition. The report suggests that the government would not require companies to share data publicly when doing so would impact legitimate business interests, nor would it require that any personal data be made public. Instead, Dr. Villani argues that, if wider data sharing would do unreasonable damage to a company’s commercial interests, it may be appropriate to only give public authorities access to the data. But where the stakes are lower, companies could be required to share the data more widely, to maximize reuse. Villani rightly argues that it is virtually impossible to come up with generalizable rules for how data should be shared that would work across all sectors. Instead, he argues for a sector-specific approach to determining how and when data should be shared.

After making the case for state-mandated repurposing of data, the report goes on to highlight four key sectors as priorities: health, transport, the environment, and defense. Since these all have clear implications for the public interest, France can create national laws authorizing extensive repurposing of personal data without violating the General Data Protection Regulation (GDPR), which allows national laws that permit the repurposing of personal data where it serves the public interest. The French strategy is the first clear effort by an EU member state to proactively use this clause in aid of national efforts to bolster AI….(More)”.

Knowledge, Policymaking and Learning for European Cities and Regions: From Research to Practice


Book edited by Nicola Francesco Dotti: “This book provides theories, experiences, reflections and future directions for social scientists who wish to engage with policy-oriented research in, and for, cities and regions. The ‘policy learning’ perspective is comprehensively discussed, focusing on actors promoting ‘policy knowledge’ and interaction among different stakeholders.

Theoretical frameworks and practical experiences of policy-orientated research for European regions and cities are comprehensively explored in this timely book. The authors review current theories and present novel case studies of policy-orientated research. By combining policy analysis with urban and regional studies, the book highlights how researchers can be agents of policy learning, helping policymakers to learn how to learn.

This book offers unique, real-world insights for researchers, practitioners and stakeholders interested in research-based approaches to cities and regions….(More)”

Most Public Engagement is Worthless


Charles Marohn at Strong Towns: “…Our thinking is a byproduct of the questions we ask. …I’m a planner and I’m a policy nerd. I had all the training in how to hold a public meeting and solicit feedback through SWOT (strengths, weaknesses, opportunities, threats) questions. I’ve been taught how to reach out to marginalized groups and make sure they too have a voice in the process. That is, so long as that voice fit into the paradigm of a planner and a policy nerd. Or so long as I could make it fit.

Modern Planner: What percentage of the city budget should we spend on parks?

Steve Jobs: Do you use the park?

Our planning efforts should absolutely be guided by the experiences of real people. But their actions are the data we should be collecting, not their stated preferences. To do the latter is to get comfortable trying to build a better Walkman. We should be designing the city equivalent of the iPod: something that responds to how real people actually live. It’s a messier and less affirming undertaking.

I’ve come to the point in my life where I think municipal comprehensive planning is worthless. More often than not, it is a mechanism to wrap a veneer of legitimacy around the large policy objectives of influential people. Most cities would be better off putting together a good vision statement and a set of guiding principles for making decisions, then getting on with it.

That is, get on with the hard work of iteratively building a successful city. That work is a simple, four-step process:

  1. Humbly observe where people in the community struggle.
  2. Ask the question: What is the next smallest thing we can do right now to address that struggle?
  3. Do that thing. Do it right now.
  4. Repeat.

It’s challenging to be humble, especially when you are in a position, or are part of a profession, whose internal narrative tells you that you already know what to do. It’s painful to observe, especially when that means confronting messy realities that do not fit with your view of the world. It’s unsatisfying, at times, to try many small things when the “obvious” fix is right there. If only those around you just shared your “courage” to undertake it (of course, with no downside to you if you’re wrong). If only people had the patience to see it through (while they, not you, continue to struggle in the interim).

Yet what if we humbly observe where people in our community struggle—if we use the experiences of others as our data—and we continually take the actions we are capable of taking, right now, to alleviate those struggles? And what if we do this in neighborhood after neighborhood across the entire city, month after month and year after year? If we do that, not only will we make the lowest-risk, highest-returning public investments it is possible to make, we can’t help but improve people’s lives in the process….(More)”.

To the smart city and beyond? Developing a typology of smart urban innovation


Maja Nilssen in Technological Forecasting and Social Change: “The smart city is an increasingly popular topic in urban development, arousing both excitement and skepticism. However, despite increasing enthusiasm regarding the smartness of cities, the concept is still regarded as somewhat elusive. Encouraged by the multifaceted character of the concept, this article examines how we can categorize the different dimensions often included in the smart city concept, and how these dimensions are coupled to innovation. Furthermore, the article examines the implications of the different understandings of the smart city concept for cities’ abilities to be innovative.

Building on existing scholarly contributions on the smartness of cities and innovation literature, the article develops a typology of smart city initiatives based on the extent and types of innovations they involve. The typology is structured as a smart city continuum, comprising four dimensions of innovation: (1) technological, (2) organizational, (3) collaborative, (4) experimental.

The smart city continuum is then utilized to analyze empirical data from a Norwegian urban development project triggered by a critical juncture. The empirical data shows that the case holds elements of different dimensions of the continuum, supporting the need for a typology of smart cities as multifaceted urban innovation. The continuum can be used as an analytical model for different types of smart city initiatives, and thus shed light on what types of innovation are central in the smart city. Consequently, the article offers useful insights for both practitioners and scholars interested in smart city initiatives….(More)”

Programmers need ethics when designing the technologies that influence people’s lives


Cherri M. Pancake at The Conversation: “Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers steal the personal information of hundreds of thousands of people from a large corporation. Their work can protect – or jeopardize – critical infrastructure like electrical grids and transportation lines. And the algorithms they write may determine who gets a job, who is approved for a bank loan or who gets released on bail.

Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure the connections between one component and another are appropriate, safe and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.

That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions….

ACM’s new ethics code has several important differences from the 1992 version. One has to do with unintended consequences. In the 1970s and 1980s, technologists built software or systems whose effects were limited to specific locations or circumstances. But over the past two decades, it has become clear that as technologies evolve, they can be applied in contexts very different from the original intent.

For example, computer vision research has led to ways of creating 3D models of objects – and people – based on 2D images, but it was never intended to be used in conjunction with machine learning in surveillance or drone applications. The old ethics code asked software developers to be sure a program would actually do what they said it would. The new version also exhorts developers to explicitly evaluate their work to identify potentially harmful side effects or potential for misuse.

Another example has to do with human interaction. In 1992, most software was being developed by trained programmers to run operating systems, databases and other basic computing functions. Today, many applications rely on user interfaces to interact directly with a potentially vast number of people. The updated code of ethics includes more detailed considerations about the needs and sensitivities of very diverse potential users – including discussing discrimination, exclusion and harassment….(More)”.

How Taiwan’s online democracy may show future of humans and machines


Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years prototyping future democracy since the lifting of martial law in 1987. Public participation in Taiwan has been developed in several formats, from face-to-face to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.

The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for civil society, showed that by using technology creatively humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.

It is a prototype that helps us envision what future democracy could look like….

Decision-making is not an easy task, especially when it has to do with a larger group of people. Group decision-making could take several protocols, such as mandate, to decide and take questions; advise, to listen before decisions; consent, to decide if no one objects; and consensus, to decide if everyone agrees. So there is a pressing need for us to be able to collaborate in a large-scale decision-making process to update outdated standards and regulations.
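For readers unfamiliar with these protocols, the sketch below shows how two of them differ when tallying positions. The data model is a hypothetical simplification, not vTaiwan's.

```python
# Minimal illustration of two of the protocols named above.
# Positions are "agree", "object" or "abstain"; the data model is hypothetical.

def consent_reached(positions):
    """Consent: the decision passes if no one objects."""
    return all(p != "object" for p in positions)

def consensus_reached(positions):
    """Consensus: the decision passes only if everyone agrees."""
    return all(p == "agree" for p in positions)

votes = ["agree", "agree", "abstain", "agree"]
print(consent_reached(votes))    # True  -> there are no objections
print(consensus_reached(votes))  # False -> an abstention blocks full consensus
```

The gap between the two is where large-scale tools have to do their work: surfacing objections early enough that something everyone can live with remains reachable.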

The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions faster and at larger scale. The internet could be the facilitator and AI could be the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction for decision-making technology is to allow humans to be engaged in the process at any time, and to invite them to request and submit changes.

Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.

Where shall we go as a society? What do we want from technology? As Audrey Tang, Digital Minister without Portfolio of Taiwan, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.

Introducing the (World’s First) Ethical Operating System


Article by Paula Goldman and Raina Kumra: “Is it possible for tech developers to anticipate future risks? Or are these future risks so unknowable to us here in the present that, try as we might to make our tech safe, continued exposure to risks is simply the cost of engagement?

Today, in collaboration with the Institute for the Future (IFTF), a leading non-profit strategic futures organization, Omidyar Network is excited to introduce the Ethical Operating System (or Ethical OS for short), a toolkit for helping developers and designers anticipate the future impact of technologies they’re working on today. We designed the Ethical OS to facilitate better product development, faster deployment, and more impactful innovation — all while striving to minimize technical and reputational risks. The hope is that, with the Ethical OS in hand, technologists can begin to build responsibility into core business and product decisions, and contribute to a thriving tech industry.

The Ethical OS is already being piloted by nearly 20 tech companies, schools, and startups, including Mozilla and Techstars. We believe it can better equip technologists to grapple with three of the most pressing issues facing our community today:

    • If the technology you’re building right now will someday be used in unexpected ways, how can you hope to be prepared?
    • What new categories of risk should you pay special attention to right now?
    • Which design, team, or business model choices can actively safeguard users, communities, society, and your company from future risk?

As large sections of the public grow weary of a seemingly constant stream of data safety and security issues, and with growing calls for heightened government intervention and oversight, the time is now for the tech community to get this right.

We created the Ethical OS as a pilot to help make ethical thinking and future risk mitigation integral components of all design and development processes. It’s not going to be easy. The industry has far more work to do, both inside individual companies and collectively. But with our toolkit as a guide, developers will have a practical means of helping to begin working to ensure their tech is as good as their intentions…(More)”.