Selling Smartness: Corporate Narratives and the Smart City as a Sociotechnical Imaginary


Jathan Sadowski and Roy Bendor in Science, Technology, & Human Values: “This article argues for engaging with the smart city as a sociotechnical imaginary. By conducting a close reading of primary source material produced by the companies IBM and Cisco over a decade of work on smart urbanism, we argue that the smart city imaginary is premised on a particular narrative about urban crises and technological salvation. This narrative serves three main purposes: (1) it fits different ideas and initiatives into a coherent view of smart urbanism, (2) it sells and disseminates this version of smartness, and (3) it crowds out alternative visions and corresponding arrangements of smart urbanism.

Furthermore, we argue that IBM and Cisco construct smart urbanism as both a reactionary and visionary force, plotting a model of the near future, but one that largely reflects and reinforces existing sociopolitical systems. We conclude by suggesting that breaking IBM’s and Cisco’s discursive dominance over the smart city imaginary requires us to reimagine what smart urbanism means and create counter-narratives that open up space for alternative values, designs, and models….(More)”.

It’s time for a Bill of Data Rights


Article by Martin Tisne: “…The proliferation of data in recent decades has led some reformers to a rallying cry: “You own your data!” Eric Posner of the University of Chicago, Eric Weyl of Microsoft Research, and virtual-reality guru Jaron Lanier, among others, argue that data should be treated as a possession. Mark Zuckerberg, the founder and head of Facebook, says so as well. Facebook now says that you “own all of the content and information you post on Facebook” and “can control how it is shared.” The Financial Times argues that “a key part of the answer lies in giving consumers ownership of their own personal data.” In a recent speech, Tim Cook, Apple’s CEO, agreed, saying, “Companies should recognize that data belongs to users.”

This essay argues that “data ownership” is a flawed, counterproductive way of thinking about data. Not only does it fail to fix existing problems; it creates new ones. Instead, we need a framework that gives people rights to stipulate how their data is used without requiring them to take ownership of it themselves….

The notion of “ownership” is appealing because it suggests giving you power and control over your data. But owning and “renting” out data is a bad analogy. Control over how particular bits of data are used is only one problem among many. The real questions are questions about how data shapes society and individuals. Rachel’s story will show us why data rights are important and how they might work to protect not just Rachel as an individual, but society as a whole.

Tomorrow never knows

To see why data ownership is a flawed concept, first think about this article you’re reading. The very act of opening it on an electronic device created data—an entry in your browser’s history, cookies the website sent to your browser, an entry in the website’s server log to record a visit from your IP address. It’s virtually impossible to do anything online—reading, shopping, or even just going somewhere with an internet-connected phone in your pocket—without leaving a “digital shadow” behind. These shadows cannot be owned—the way you own, say, a bicycle—any more than can the ephemeral patches of shade that follow you around on sunny days.

Your data on its own is not very useful to a marketer or an insurer. Analyzed in conjunction with similar data from thousands of other people, however, it feeds algorithms and bucketizes you (e.g., “heavy smoker with a drink habit” or “healthy runner, always on time”). If an algorithm is unfair—if, for example, it wrongly classifies you as a health risk because it was trained on a skewed data set or simply because you’re an outlier—then letting you “own” your data won’t make it fair. The only way to avoid being affected by the algorithm would be to never, ever give anyone access to your data. But even if you tried to hoard data that pertains to you, corporations and governments with access to large amounts of data about other people could use that data to make inferences about you. Data is not a neutral impression of reality. The creation and consumption of data reflects how power is distributed in society. …(More)”.
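The inference problem the excerpt describes can be made concrete with a minimal sketch, assuming a toy 1-nearest-neighbour classifier and entirely invented records and risk labels: even if one person never discloses a sensitive label, a model trained on other people’s data can assign one to them from the attributes they do reveal.

```python
# Toy illustration (all data invented): a model trained on other people's
# records infers a sensitive label for someone who never shared one.

def nearest_neighbor_label(others, query):
    """Return the label of the labelled record closest to `query` (1-NN)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(others, key=lambda rec: sq_dist(rec[0], query))
    return label

# Hypothetical features: (cigarettes per day, gym visits per week),
# paired with the bucket an imagined insurer's algorithm assigned.
others = [
    ((20, 0), "heavy smoker with a drink habit"),
    ((15, 1), "heavy smoker with a drink habit"),
    ((0, 5), "healthy runner, always on time"),
    ((1, 4), "healthy runner, always on time"),
]

# "You" only ever revealed these two features, never a risk label...
you = (18, 0)
print(nearest_neighbor_label(others, you))
# → heavy smoker with a drink habit
```

Hoarding your own data offers no protection here: the bucket assigned to “you” is computed entirely from everyone else’s records.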

Seven design principles for using blockchain for social impact


Stefaan Verhulst at Apolitical: “2018 will probably be remembered as the year the blockchain hype went bust. Yet even as cryptocurrencies continue to sink in value and popular interest, the potential of using blockchain technologies to achieve social ends remains important to consider but poorly understood.

In 2019, businesses will continue to explore blockchain for sectors as disparate as finance, agriculture, logistics and healthcare. Policymakers and social innovators should also use 2019 to become more sophisticated about blockchain’s real promise, limitations and current practice.

In a recent report I prepared with Andrew Young, with the support of the Rockefeller Foundation, we looked at the potential risks and challenges of using blockchain for social change — or “Blockchan.ge.” A number of implementations and platforms are already demonstrating potential social impact.

The technology is now being used to address issues as varied as homelessness in New York City, the Rohingya crisis in Myanmar and government corruption around the world.

In an illustration of the breadth of current experimentation, Stanford’s Center for Social Innovation recently analysed and mapped nearly 200 organisations and projects trying to create positive social change using blockchain. Likewise, the GovLab is developing a mapping of blockchange implementations across regions and topic areas; it currently contains 60 entries.

All these examples provide impressive — and hopeful — proof of concept. Yet despite the very clear potential of blockchain, there has been little systematic analysis. For what types of social impact is it best suited? Under what conditions is it most likely to lead to real social change? What challenges does blockchain face, what risks does it pose and how should these be confronted and mitigated?

These are just some of the questions our report, which builds its analysis on 10 case studies assembled through original research, seeks to address.

While the report is focused on identity management, it contains a number of lessons and insights that are applicable more generally to the subject of blockchange.

In particular, it contains seven design principles that can guide individuals or organisations considering the use of blockchain for social impact. We call these the Genesis principles, and they are outlined at the end of this article…(More)”.

Too Many Secrets? When Should the Intelligence Community Be Allowed to Keep Secrets?


Ross W. Bellaby in Polity: “In recent years, revelations regarding reports of torture by the U.S. Central Intelligence Agency and the quiet growth of the National Security Agency’s pervasive cyber-surveillance system have brought into doubt the level of trust afforded to the intelligence community. The question of its trustworthiness requires determining how much secrecy it should enjoy and what mechanisms should be employed to detect and prevent future abuse. My argument is not a call for complete transparency, however, as secret intelligence does play an important and ethical role in society. Rather, I argue that existing systems built on a prioritization of democratic assumptions are fundamentally ill-equipped for dealing with the particular challenge of intelligence secrecy. As the necessary circle of secrecy is extended, political actors are insulated from the very public gaze that ensures they are working in line with the political community’s best interests. Therefore, a new framework needs to be developed, one that this article argues should be based on the just war tradition, where the principles of just cause, legitimate authority, last resort, proportionality, and discrimination are able to balance the secrecy that the intelligence community needs in order to detect and prevent threats with the harm that too much or incorrect secrecy can cause to people….(More)”.

Data scores


Data-scores.org: “Data scores that combine data from a variety of both online and offline activities are becoming a way to categorize citizens, allocate services, and predict future behavior. Yet little is known about the implementation of data-driven systems and algorithmic processes in public services and how citizens are increasingly ‘scored’ based on the collection and combination of data.

As part of our project ‘Data Scores as Governance’ we have developed a tool to map and investigate the uses of data analytics and algorithms in public services in the UK. This tool is designed to facilitate further research and investigation into this topic and to advance public knowledge and understanding.

The tool is made up of a collection of documents from different sources that can be searched and mapped according to different categories. The database consists of more than 5,300 unverified documents that have been scraped based on a number of search terms relating to data systems in government. The dataset is incomplete and continues to grow. You can read more in our Methodology section….(More)”.
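As a rough sketch of how such a tool might work (the document fields, categories, and contents below are invented, and the actual data-scores.org schema may well differ), keyword search with category facets over a scraped document collection can be expressed in a few lines:

```python
# Hypothetical sketch of faceted keyword search over scraped documents.
# All titles, texts, and category names are invented for illustration.

documents = [
    {"title": "Council adopts predictive risk model",
     "text": "The council uses data analytics to score families for early help.",
     "categories": {"local government", "predictive analytics"}},
    {"title": "Police trial harm assessment tool",
     "text": "An algorithm ranks offenders by predicted harm.",
     "categories": {"policing", "predictive analytics"}},
]

def search(docs, term, category=None):
    """Return docs whose text contains `term`, optionally within `category`."""
    term = term.lower()
    return [d for d in docs
            if term in d["text"].lower()
            and (category is None or category in d["categories"])]

hits = search(documents, "score", category="local government")
print([d["title"] for d in hits])
# → ['Council adopts predictive risk model']
```

A real system would add full-text indexing and mapping by region, but the filter-by-term-and-facet pattern is the core of what the description above implies.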

A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science


Report by Micah Altman and Chris Bourg: “…The overarching question these problems pose is how to create a global scholarly knowledge ecosystem that supports participation; ensures agency, equitable access, trustworthiness, and integrity; and is legally, economically, institutionally, technically, and socially sustainable. The aim of the Grand Challenges Summit and this report is to identify broad research areas and questions to be explored in order to provide an evidence base from which to answer specific aspects of that broad question.

Reaching this future state requires exploring a set of interrelated anthropological, behavioral, computational, economic, legal, policy, organizational, sociological, and technological areas. The extent of these areas of research is illustrated by the following exemplars:

What is necessary to develop coherent, comprehensive, and empirically testable theories of the value of scholarly knowledge to society? What is the best current evidence of this value, and what does it elide? How should the measures of use and utility of scholarly outputs be adapted for different communities of use, disciplines, theories, and cultures? What methods will improve our predictions of the future value of collections of information, or enable the selection and construction of collections that will be likely to be of value in the future?…

What parts of the scholarly knowledge ecosystem promote the values of transparency, individual agency, participation, accountability, and fairness? How can these values be reflected in the algorithms, information architecture, and technological systems supporting the scholarly knowledge ecosystem? What principles of design and governance would be effective for embedding these values?…

The list above provides a partial outline of research areas that will need to be addressed in order to overcome the major barriers to a better future for scholarly communication and information science. As the field progresses in exploring these areas and attempting to address the barriers, new areas are likely to be identified. Even within this initial list of research areas, there are many pressing questions ripe for exploration….

Research on open scholarship solutions is needed to assess the scale and breadth of access,[68] the costs to actors and stakeholders at all levels, and the effects of openness on perceptions of trust and confidence in research and research organizations. Research is also needed in the intersection between open scholarship and participation, new forms of scholarship, information integrity, information durability, and information agency (see section 3.1.). This will require an assessment of the costs and returns of open scholarship at a systemic level, rather than at the level of individual institutions or actors. We also need to assess whether and under what conditions interventions directed at removing reputation and institutional barriers to collaboration promote open scholarship. Research is likewise required to document the conditions under which open scholarship reduces duplication and inefficiency, and promotes equity in the creation and use of knowledge. In addition, research should address the permeability of open scholarship systems to researchers across multiple scientific fields, and whether—and under what conditions—open scholarship enhances interdisciplinary collaboration….(More)”.

Draft Ethics guidelines for trustworthy AI


Working document by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG): “…Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as an end in itself, but as a means to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

These Guidelines therefore set out a framework for Trustworthy AI:

  • Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
  • From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
  • Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases. …(More)”

A People’s Guide to AI


Booklet by Mimi Onuoha and Diana Nucera: “…this booklet aims to fill the gaps in information about AI by creating accessible materials that inform communities and allow them to identify what their ideal futures with AI can look like. Although the contents of this booklet focus on demystifying AI, we find it important to state that the benefits of any technology should be felt by all of us. Too often, the challenges presented by new technology spell out yet another tale of racism, sexism, gender inequality, ableism, and lack of consent within digital culture.

The path to a fair future starts with the humans behind the machines, not the machines themselves. Self-reflection and a radical transformation of our relationships to our environment and each other are at the heart of combating structural inequality. But understanding what it takes to create a fair and just society is the first step. In creating this booklet, we start from the belief that equity begins with education…For those who wish to learn more about specific topics, we recommend looking at the table of contents and choosing sections to read. For more hands-on learners, we have also included a number of workbook activities that allow the material to be explored in a more active fashion.

We hope that this booklet inspires and informs those who are developing emerging technologies to reflect on how these technologies can impact our societies. We also hope that this booklet inspires and informs black, brown, indigenous, and immigrant communities to reclaim technology as a tool of liberation…(More)”.

Advancing Sustainability Together: Launching new report on citizen-generated data and its relevance for the SDGs


Danny Lämmerhirt at Open Knowledge Foundation: “Citizen-generated data (CGD) expands what gets measured, how, and for what purpose. As the collection and engagement with CGD increases in relevance and visibility, public institutions can learn from existing initiatives about what CGD initiatives do, how they enable different forms of sense-making and how this may further progress around the Sustainable Development Goals.

Our report, as well as a guide for governments (find the laid-out version here, as well as a living document here), should help start conversations around the different approaches to doing and organising CGD. When CGD becomes ‘good enough’ depends on the purpose it is used for, but also on how CGD is situated in relation to other data.

As our work aims to be illustrative rather than comprehensive, we started with a list of over 230 projects associated with the term “citizen-generated data” on Google Search, using an approach known as “search as research” (Rogers, 2013). Starting from this list, we developed case studies on a range of prominent CGD examples.

The report identifies several benefits CGD can bring for implementing and monitoring the SDGs, underlining the importance for public institutions to further support these initiatives.

Figure 1: Illustration of tasks underpinning CGD initiatives and their workflows

Key findings:

  • Dealing with data is usually much more than ‘just producing’ data. CGD initiatives open up new types of relationships between individuals, civil society and public institutions. This includes local development and educational programmes, community outreach, and collaborative strategies for monitoring, auditing, planning and decision-making.
  • Generating data takes many shapes, from collecting new data in the field, to compiling, annotating, and structuring existing data to enable new ways of seeing things through data. Accessing and working with existing (government) data is often an important enabling condition for CGD initiatives to start in the first place.
  • CGD initiatives can help gather data in regions that are otherwise not reachable. Some CGD approaches may provide updated and detailed data at lower costs and faster than official data collections.
  • Beyond filling data gaps, official measurements can be expanded, complemented, or cross-verified. This includes pattern and trend identification and the creation of baseline indicators for further research. CGD can help governments detect anomalies, test the accuracy of existing monitoring processes, understand the context around phenomena, and initiate their own follow-up data collections.
  • CGD can inform several actions to achieve the SDGs. Beyond education, community engagement and community-based problem solving, this includes baseline research, planning and strategy development, allocation and coordination of public and private programs, as well as improvement to public services.
  • CGD must be ‘good enough’ for different (and varying) purposes. Governments already develop pragmatic ways to negotiate and assess the usefulness of data for a specific task. CGD may be particularly useful when agencies have a clear remit or responsibility to manage a problem.  
  • Data quality can be comparable to official data collections, provided tasks are sufficiently easy to conduct, tools are of high quality, and sufficient training, resources and quality assurance are in place….(More)”.
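The cross-verification finding above can be illustrated with a small, hypothetical sketch (the site names, readings, and tolerance are invented): citizen readings are compared against official ones, and sites where the two sources diverge are flagged for follow-up data collection.

```python
# Illustrative sketch (invented readings): cross-verifying official
# measurements against citizen-generated data by flagging sites where
# the two sources diverge beyond a chosen tolerance.

official = {"site_a": 42.0, "site_b": 35.0, "site_c": 51.0}
citizen = {"site_a": 43.1, "site_b": 58.2, "site_c": 50.4}

def flag_anomalies(official, citizen, tolerance=5.0):
    """Return sites where citizen readings diverge from official ones."""
    return sorted(
        site for site, value in official.items()
        if site in citizen and abs(citizen[site] - value) > tolerance
    )

print(flag_anomalies(official, citizen))
# → ['site_b']
```

The tolerance stands in for the pragmatic ‘good enough’ negotiation the report describes: how much divergence warrants a government’s own follow-up depends on the task at hand.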

Creating value through data collaboratives


Paper by Bram Klievink, Haiko van der Voort and Wijnand Veeneman: “Driven by the technological capabilities that ICTs offer, data enable new ways to generate value for both society and the parties that own or offer the data. This article looks at the idea of data collaboratives as a form of cross-sector partnership to exchange and integrate data and data use to generate public value. The concept thereby bridges data-driven value creation and collaboration, both current themes in the field.

To understand how data collaboratives can add value in a public governance context, we exploratively studied the qualitative longitudinal case of an infomobility platform. We investigated the ability of a data collaborative to produce results while facing significant challenges and tensions between the goals of parties, each having the conflicting objectives of simultaneously retaining control whilst allowing for generativity. Taken together, the literature and case study findings help us to understand the emergence and viability of data collaboratives. Although limited by this study’s explorative nature, we find that conditions such as prior history of collaboration and supportive rules of the game are key to the emergence of collaboration. Positive feedback between trust and the collaboration process can institutionalise the collaborative, which helps it survive if conditions change for the worse….(More)”.