Understanding the Smart City Domain: A Literature Review


Paper by Leonidas G. Anthopoulos: “Smart Cities appeared in the literature in the late ’90s and various approaches have been developed since. To date, the term smart city does not describe a city with particular attributes; it is used to describe quite different cases in urban spaces: web portals that virtualize cities or city guides; knowledge bases that address local needs; agglomerations with Information and Communication Technology (ICT) infrastructure that attract business relocation; metropolitan-wide ICT infrastructures that deliver e-services to citizens; ubiquitous environments; and, more recently, ICT infrastructure for ecological use. Researchers, practitioners, business people and policy makers consider the smart city from different perspectives, and most of them agree on a model that measures urban economy, mobility, environment, living, people and governance. At the same time, the ICT and construction industries are pressing to capitalize on the smart city, and a new market appears to be emerging in this domain. This chapter performs a literature review, discovers and classifies the particular schools of thought, universities and research centres as well as companies that deal with the smart city domain, and identifies alternative approaches, models, architectures and frameworks in this regard….(More)

How does collaborative governance scale?


Paper by Chris Ansell and Jacob Torfing in Policy & Politics: “Scale is an overlooked issue in the literature on interactive governance. This special issue investigates the challenges posed by the scale and scaling of network and collaborative forms of governance. Our original motivation arose from a concern about whether collaborative governance can scale up. As we learned more, our inquiry expanded to include the tensions inherent in collaboration across scales or at multiple scales, and the issue of dynamically scaling collaboration to adapt to changing problems and demands. The diverse cases in this special issue explore these challenges in a range of concrete empirical domains that span the globe…(More)”

The Data Revolution


Review of Rob Kitchin’s The Data Revolution: Big Data, Open Data, Data Infrastructures & their Consequences by David Moats in Theory, Culture and Society: “…As an industry, academia is not immune to cycles of hype and fashion. Terms like ‘postmodernism’, ‘globalisation’, and ‘new media’ have each had their turn filling the top line of funding proposals. Although they are each grounded in tangible shifts, these terms become stretched and fudged to the point of becoming almost meaningless. Yet, they elicit strong, polarised reactions. For at least the past few years, ‘big data’ seems to be the buzzword that elicits funding, as well as the ire of many in the social sciences and humanities.

Rob Kitchin’s book The Data Revolution is one of the first systematic attempts to strip back the hype surrounding our current data deluge and take stock of what is really going on. This is crucial because this hype is underpinned by very real societal change, threats to personal privacy and shifts in store for research methods. The book acts as a helpful wayfinding device in an unfamiliar terrain, which is still being reshaped, and is admirably written in a language relevant to social scientists, comprehensible to policy makers and accessible even to the less tech-savvy among us.

The Data Revolution seems to present itself as the definitive account of this phenomenon, but in filling this role it ends up adopting a somewhat diplomatic posture. Kitchin takes all the correct and reasonable stances on the matter and advocates all the right courses of action, but he is not able, in the context of this book, to pursue these propositions fully. This review will attempt to tease out some of these latent potentials and how they might be pushed in future work, in particular the implications of the ‘performative’ character of both big data narratives and data infrastructures for social science research.

Kitchin’s book starts with the observation that ‘data’ is a misnomer – etymologically data should refer to phenomena in the world which can be abstracted, measured etc., as opposed to the representations and measurements themselves, which should by all rights be called ‘capta’. This is ironic because the worst offenders in what Kitchin calls “data boosterism” seem to conflate data with ‘reality’, unmooring data from its conditions of production and making the relationship between the two appear given or natural.

As Kitchin notes, following Bowker (2005), ‘raw data’ is an oxymoron: data are not so much mined as produced and are necessarily framed technically, ethically, temporally, spatially and philosophically. This is the central thesis of the book, that data and data infrastructures are not neutral and technical but also social and political phenomena. For those at the critical end of research with data, this is a starting assumption, but one which not enough practitioners heed. Most of the book is thus an attempt to flesh out these rapidly expanding data infrastructures and their politics….

Kitchin is at his best when revealing the gap between the narratives and the reality of data analysis, such as the fallacy of empiricism – the assertion that, given the granularity and completeness of big data sets and the availability of machine learning algorithms which identify patterns within data (with or without the supervision of human coders), data can “speak for themselves”. Kitchin reminds us that no data set is complete and that even these out-of-the-box algorithms are underpinned by theories and assumptions in their creation, and require context-specific knowledge to unpack their findings. Kitchin also rightly raises concerns about the limits of big data: that access to and interoperability of data are not given, and that these gaps and silences are also patterned (Twitter is biased as a sample towards middle-class, white, tech-savvy people). Yet this language of veracity and reliability seems to suggest that big data is being conceptualised in relation to traditional surveys, or that our population is still the nation state, when big data could helpfully force us to reimagine our analytic objects and truth conditions and, more pressingly, our ethics (Rieder, 2013).

However, performativity may again complicate things. As Kitchin observes, supermarket loyalty cards do not just create data about shopping, they encourage particular sorts of shopping; when research subjects change their behaviour to cater to the metrics and surveillance apparatuses built into platforms like Facebook (Bucher, 2012), then these are no longer just data points representing the social, but partially constitutive of new forms of sociality (this is also true of other types of data as discussed by Savage (2010), but in perhaps less obvious ways). This might have implications for how we interpret data, the distribution between quantitative and qualitative approaches (Latour et al., 2012) or even more radical experiments (Wilkie et al., 2014). Kitchin is relatively cautious about proposing these sorts of possibilities, which is not the remit of the book, though it clearly leaves the door open…(More)”

Who knew contracts could be so interesting?


Blog post at Transparency International UK: “…Despite the UK Government’s lack of progress, it wouldn’t be completely unreasonable to ask “who actually publishes these things, anyway?” Well, back in 2011, when the UK Government committed to publishing all new contracts and tenders over £10,000 in value, the Slovakian Government decided to publish more or less everything. Faced with mass protests over corruption in the public sector, the government committed to publishing almost all public sector contracts online (there are some exemptions). You can now browse through the details of a significant amount of government business via the country’s online portal (so long as you can read Slovak, of course).

Who actually reads these things?

According to research by Transparency International Slovakia, at least 11% of the Slovakian adult population have looked at a government contract since they were first published back in 2011. That’s around 480,000 people. Although some of these spent more time than others browsing the documents in depth, this is undeniably an astounding number of people taking at least a passing interest in government procurement.

Why does this matter?

Before Slovakia opened-up its contracts there was widespread mistrust in public institutions and officials. According to Transparency International’s global Corruption Perceptions Index, which measures impressions of public sector corruption, Slovakia was ranked 66th out of 183 countries in 2011. By 2014 it had jumped 12 places – a record achievement – to 54th, which must in some part be due to the Government’s commitment to opening-up public contracts to greater scrutiny.

Since the contracts were published, there also seems to have been a spike in media reports on government tenders. This suggests there is greater scrutiny of public spending, which should hopefully translate into less wasted expenditure.

Elsewhere, proponents of open contracting have pointed to other benefits, such as greater commitment by both parties to honouring the agreement and protection against malign private interests. Similar projects in Georgia have also turned clunky bureaucracies into efficient, data-savvy administrations. In short, there are quite a few reasons why more openness in public sector procurement is a good thing.

Despite these benefits, opponents cite a number of downsides, including the administrative costs of publishing contracts online and issues surrounding commercially sensitive information. However, TI Slovakia’s research suggests the former is minimal – and presumably preferable to rooting around through paper mountains every time a Freedom of Information (FOI) request is received about a contract – whilst the latter already has to be disclosed under the FOI Act except in particular circumstances…(More)”

Modernizing Informed Consent: Expanding the Boundaries of Materiality


Paper by Nadia N. Sawicki: “Informed consent law’s emphasis on the disclosure of purely medical information – such as diagnosis, prognosis, and the risks and benefits of various treatment alternatives – does not accurately reflect modern understandings of how patients make medical decisions. Existing common law disclosure duties fail to capture a variety of non-medical factors relevant to patients, including information about the physician’s personal characteristics; the cost of treatment; the social implications of various health care interventions; and the legal consequences associated with diagnosis and treatment. Although there is a wealth of literature analyzing the merits of such disclosures in a few narrow contexts, there is little broader discussion and no consensus about whether the doctrine of informed consent should be expanded to include information that may be relevant to patients but falls outside the traditional scope of medical materiality. This article seeks to fill that gap.
I offer a normative argument for expanding the scope of informed consent disclosure to include non-medical information that is within the physician’s knowledge and expertise, where the information would be material to the reasonable patient and its disclosure does not violate public policy. This proposal would result in a set of disclosure requirements quite different from the ones set by modern common law and legislation. In many ways, the range of required disclosures may become broader, particularly with respect to physician-specific information about qualifications, health status, and financial conflicts of interest. However, some disclosures that are currently required by statute (or have been proposed by commentators) would fall outside the scope of informed consent – most notably, information about support resources available in the abortion context; about the social, ethical, and legal implications of treatment; and about health care costs….(More)”

Improving Crowdsourcing and Citizen Science as a Policy Mechanism for NASA


Paper by Brittany Balcom: “This article examines citizen science projects, defined as “a form of open collaboration where members of the public participate in the scientific process, including identifying research questions, collecting and analyzing the data, interpreting the results, and problem solving,” as an effective and innovative tool for National Aeronautics and Space Administration (NASA) science in line with the Obama Administration’s Open Government Directive. Citizen science projects allow volunteers with no technical training to participate in analysis of large sets of data that would otherwise constitute prohibitively tedious and lengthy work for research scientists. Zooniverse.com hosts a multitude of popular space-focused citizen science projects, many of which have been extraordinarily successful and have enabled new research publications and major discoveries. This article takes a multifaceted look at such projects by examining the benefits of citizen science, effective game design, and current desktop computer and mobile device usage trends. It offers suggestions of potential research topics to be studied with emerging technologies, policy considerations, and opportunities for outreach. This analysis includes an overview of other crowdsourced research methods such as distributed computing and contests. New research and data analysis of mobile phone usage, scientific curiosity, and political engagement among Zooniverse.com project participants has been conducted for this study…(More)”

A computational algorithm for fact-checking


Kurzweil News: “Computers can now do fact-checking for any body of knowledge, according to Indiana University network scientists, writing in an open-access paper published June 17 in PLoS ONE.

Using factual information from summary infoboxes from Wikipedia as a source, they built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”

In the first use of this method, IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia. In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of the humans’ certitude about the accuracy of these statements.

Dealing with misinformation and disinformation

In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history, and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses, and Oscar-winning film directors with the movie for which they won the Best Picture award. The majority of tests returned highly accurate truth scores.

Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.

Significantly, the IU team found their computational method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it could verify the fact that Steve Tesich — the Serbian-American screenwriter of the classic Hoosier film “Breaking Away” — graduated from IU, even though this information is not specifically stated in the infobox about him.

Using multiple sources to improve accuracy and richness of data

“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study….

“These results are encouraging and exciting. We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public. Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”
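
To make the role of paths a little more concrete, the sketch below computes a path-based truth score over a tiny, made-up knowledge graph: a statement linking two concepts scores highly if they are connected directly or through short paths that avoid very generic, high-degree nodes. The triples, the networkx-based code and the exact penalty formula are assumptions made for illustration only, not the IU team's actual algorithm.

```python
# A minimal, illustrative sketch of path-based fact-checking over a toy
# knowledge graph. The triples, the networkx-based implementation and the
# exact scoring formula are assumptions for illustration; the IU system
# operates on a Wikipedia-scale graph and its weighting may differ.
import math
import networkx as nx

# Toy knowledge graph: each edge is a simple factual link between concepts.
triples = [
    ("Paris", "France"),      # "Paris is the capital of France"
    ("France", "Europe"),
    ("Berlin", "Germany"),
    ("Germany", "Europe"),
    ("Socrates", "person"),   # "Socrates is a person"
]

G = nx.Graph()
G.add_edges_from(triples)

def truth_score(graph, subject, obj, max_hops=4):
    """Score a candidate statement linking `subject` and `obj` by the best
    path connecting them. Paths that detour through generic, high-degree
    nodes are penalized, so short and specific connections score highest."""
    if subject not in graph or obj not in graph:
        return 0.0
    best = 0.0
    for path in nx.all_simple_paths(graph, subject, obj, cutoff=max_hops):
        # Penalty grows with the (log) degree of each intermediate node.
        penalty = sum(math.log(graph.degree(v)) for v in path[1:-1])
        best = max(best, 1.0 / (1.0 + penalty))
    return best

print(truth_score(G, "Paris", "France"))    # direct link -> 1.0
print(truth_score(G, "Paris", "Germany"))   # only indirect links -> lower score
```

In this toy graph a directly linked pair such as (“Paris”, “France”) scores 1.0, while a pair connected only indirectly receives a lower but non-zero score, which is the flavour of inference that lets a system of this kind credit statements, like the Steve Tesich example above, that are not stated in any single infobox.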

Expanding the knowledge base

Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open knowledge base acquired by Google, and note that multiple information sources could be used together to account for different belief systems….(More)”

‘Beating the news’ with EMBERS: Forecasting Civil Unrest using Open Source Indicators


Paper by Naren Ramakrishnan et al.: “We describe the design, implementation, and evaluation of EMBERS, an automated, 24×7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012, which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS successfully forecast the uptick and downtick of incidents during the June 2013 protests in Brazil. We outline the system architecture of EMBERS, the individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables investigation of why specific predictions were made, along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over base rate methods and its capability to forecast significant societal happenings….(More)”
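
As a rough illustration of what a fusion and suppression engine of this kind might do, the hedged sketch below merges candidate alerts produced by several per-source models and drops low-confidence ones; the Alert fields, the merging rule and the confidence threshold are assumptions made for this example rather than details of the EMBERS system.

```python
# A hedged sketch of an alert fusion-and-suppression step: several per-source
# models emit candidate alerts, duplicates for the same (country, date) are
# merged, and a confidence threshold trades precision against recall. The
# Alert fields, merging rule and threshold are illustrative assumptions,
# not details of the EMBERS implementation.
from dataclasses import dataclass

@dataclass
class Alert:
    country: str
    date: str          # forecast date of the predicted unrest event (ISO format)
    confidence: float  # model's estimated probability that unrest will occur
    source_model: str  # e.g. a Twitter-based or news-based model

def fuse_and_suppress(alerts, min_confidence=0.6):
    """Keep the most confident alert per (country, date), then suppress
    anything below the confidence threshold."""
    merged = {}
    for alert in alerts:
        key = (alert.country, alert.date)
        if key not in merged or alert.confidence > merged[key].confidence:
            merged[key] = alert
    return [a for a in merged.values() if a.confidence >= min_confidence]

candidates = [
    Alert("Brazil", "2013-06-20", 0.82, "twitter_model"),
    Alert("Brazil", "2013-06-20", 0.64, "news_model"),
    Alert("Venezuela", "2013-06-21", 0.41, "blog_model"),
]
for surviving in fuse_and_suppress(candidates):
    print(surviving)   # only the strongest Brazil alert survives
```

Raising or lowering the threshold is one simple way to trade precision against recall, the kind of evaluation trade-off the abstract mentions.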

Government data does not mean data governance: Lessons learned from a public sector application audit


Paper by Nik Thompson, Ravi Ravindran, and Salvatore Nicosia: “Public sector agencies routinely store large volumes of information about individuals in the community. The storage and analysis of this information benefits society, as it enables relevant agencies to make better informed decisions and to address the individual’s needs more appropriately. Members of the public often assume that the authorities are well equipped to handle personal data; however, due to implementation errors and a lack of data governance, this is not always the case. This paper reports on an audit conducted in Western Australia, focusing on findings in the Police Firearms Management System and the Department of Health Information System. In the case of the Police, the audit revealed numerous data protection issues, leading the auditors to report that they had no confidence in the accuracy of information on the number of people licensed to possess firearms or the number of licensed firearms. Similarly alarming conclusions were drawn in the Department of Health, where auditors found that they could not determine which medical staff member was responsible for clinical data entries. The paper describes how these issues often arise not from existing business rules or the technology itself, but from a lack of sound data governance. Finally, a discussion section presents key data governance principles and best practices that may guide practitioners involved in data management. These cases highlight very real data management concerns, and the associated recommendations provide the context to spark further interest in the applied aspects of data protection….(More)”


Architecting Transparency: Back to the Roots – and Forward to the Future?


Paper by Dieter Zinnbauer: “Where to go next in research and practice on information disclosure and institutional transparency? Where to learn and draw inspiration from? How about if we go back to the roots and embrace an original, material notion of transparency as the quality of a substance or element of being see-through? How about if we then explore how the deliberate use and assemblage of such physical transparency strategies in architecture and design connects to – or could productively connect to – the institutional, political notions of transparency that concern us in the field of institutional or political transparency? Or, put more simply and zooming in on one core aspect of the conversation: what has the arrival of glass and its siblings done for democracy, and what can we still hope they will do for open, transparent governance now and in the future?

This paper embarks upon this exploratory journey in four steps. It starts out (section 2.1) by revisiting the historic relationship between architecture, design and the built environment on the one side and institutional ambitions for democracy, openness, transparency and collective governance on the other. Quite surprisingly, it finds a very close and ancient relationship between the two. Physical and political transparency have through the centuries been joined at the hip, and this relationship – overlooked as it typically is – has persisted in very important ways in our contemporary institutions of governance. As a second step, I seek to trace the major currents in the architectural debate and practice on transparency over the last century and ask three principal questions:

– How have architects as the master-designers of the built environment in theory, criticism and practice historically grappled with the concept of transparency? To what extent have they linked material notions and building strategies of transparency to political and social notions of transparency as tools for emancipation and empowerment? (section 2.2.)

– What is the status of transparency in architecture today and what is the degree of cross-fertilisation between physical and institutional/political transparency? (section 3)

– Where could a closer connect between material and political transparency lead us in terms of inspiring fresh experimentation and action in order to broaden the scope of available transparency tools and spawn fresh ideas and innovation? (section 4).

Along the way I will scan the fragmented empirical evidence base for the actual impact of physical transparency strategies and also flag interesting areas for future research. As it turns out, an obsession with material transparency in architecture and the built environment has evolved in parallel with, and in many ways predates, the rising popularity of transparency in political science and governance studies. There are surprising parallels in the hype-and-skepticism curve, common challenges, interesting learning experiences and a rich repertoire of ideas for cross-fertilisation and joint ideation waiting to be tapped. However, this will require finding ways to bridge the current disconnect between the physical and institutional transparency professions and moving beyond the current pessimism about any actual potential of physical transparency beyond empty gestures or deployment for surveillance, notions that seem to linger on both sides. But the analysis shows that this bridge-building could be an extremely worthwhile endeavor. Both the available empirical data and the ideas that even this first brief excursion into physical transparency has yielded bode well for embarking on this cross-disciplinary conversation about transparency. And as the essay also shows, help from three very unexpected corners might be on the way to re-ignite the spark for taking the physical dimension of transparency seriously again. Back to the roots has a bright future….(More)