Who knew contracts could be so interesting?


 at Transparency International UK: “…Despite the UK Government’s lack of progress, it wouldn’t be completely unreasonable to ask “who actually publishes these things, anyway?” Well, back in 2011, when the UK Government committed to publishing all new contracts and tenders over £10,000 in value, the Slovakian Government decided to publish more or less everything. Faced with mass protests over corruption in the public sector, their government committed to publishing almost all public sector contracts online (there are some exemptions). You can now browse through the details of a significant amount of government business via the country’s online portal (so long as you can read Slovak, of course).

Who actually reads these things?

According to research by Transparency International Slovakia, at least 11% of the Slovakian adult population have looked at a government contract since they were first published back in 2011. That’s around 480,000 people. Although some spent more time than others browsing the documents in depth, this is undeniably an astounding number of people taking at least a passing interest in government procurement.

Why does this matter?

Before Slovakia opened-up its contracts there was widespread mistrust in public institutions and officials. According to Transparency International’s global Corruption Perceptions Index, which measures impressions of public sector corruption, Slovakia was ranked 66th out of 183 countries in 2011. By 2014 it had jumped 12 places – a record achievement – to 54th, which must in some part be due to the Government’s commitment to opening-up public contracts to greater scrutiny.

Since the contracts were published, there also seems to have been a spike in media reports on government tenders. This suggests there is greater scrutiny of public spending, which should hopefully translate into less wasted expenditure.

Elsewhere, proponents of open contracting have espoused other benefits, such as greater commitment by both parties to following the agreement and protecting against malign private interests. Similar projects in Georgia have also turned clunky bureaucracies into efficient, data-savvy administrations. In short, there are quite a few reasons why more openness in public sector procurement is a good thing.

Despite these benefits, opponents cite a number of downsides, including the administrative costs of publishing contracts online and issues surrounding commercially sensitive information. However, TI Slovakia’s research suggests the former is minimal – and presumably preferable to rooting around through paper mountains every time a Freedom of Information (FOI) request is received about a contract – whilst the latter already has to be disclosed under the FOI Act except in particular circumstances…(More)”

Modernizing Informed Consent: Expanding the Boundaries of Materiality


Paper by Nadia N. Sawicki: “Informed consent law’s emphasis on the disclosure of purely medical information – such as diagnosis, prognosis, and the risks and benefits of various treatment alternatives – does not accurately reflect modern understandings of how patients make medical decisions. Existing common law disclosure duties fail to capture a variety of non-medical factors relevant to patients, including information about the physician’s personal characteristics; the cost of treatment; the social implications of various health care interventions; and the legal consequences associated with diagnosis and treatment. Although there is a wealth of literature analyzing the merits of such disclosures in a few narrow contexts, there is little broader discussion and no consensus about whether the doctrine of informed consent should be expanded to include information that may be relevant to patients but falls outside the traditional scope of medical materiality. This article seeks to fill that gap.
I offer a normative argument for expanding the scope of informed consent disclosure to include non-medical information that is within the physician’s knowledge and expertise, where the information would be material to the reasonable patient and its disclosure does not violate public policy. This proposal would result in a set of disclosure requirements quite different from the ones set by modern common law and legislation. In many ways, the range of required disclosures may become broader, particularly with respect to physician-specific information about qualifications, health status, and financial conflicts of interests. However, some disclosures that are currently required by statute (or have been proposed by commentators) would fall outside the scope of informed consent – most notably, information about support resources available in the abortion context; about the social, ethical, and legal implications of treatment; and about health care costs….(More)”

Improving Crowdsourcing and Citizen Science as a Policy Mechanism for NASA


Paper by Balcom Brittany: “This article examines citizen science projects, defined as “a form of open collaboration where members of the public participate in the scientific process, including identifying research questions, collecting and analyzing the data, interpreting the results, and problem solving,” as an effective and innovative tool for National Aeronautics and Space Administration (NASA) science in line with the Obama Administration’s Open Government Directive. Citizen science projects allow volunteers with no technical training to participate in analysis of large sets of data that would otherwise constitute prohibitively tedious and lengthy work for research scientists. Zooniverse.com hosts a multitude of popular space-focused citizen science projects, many of which have been extraordinarily successful and have enabled new research publications and major discoveries. This article takes a multifaceted look at such projects by examining the benefits of citizen science, effective game design, and current desktop computer and mobile device usage trends. It offers suggestions of potential research topics to be studied with emerging technologies, policy considerations, and opportunities for outreach. This analysis includes an overview of other crowdsourced research methods such as distributed computing and contests. New research and data analysis of mobile phone usage, scientific curiosity, and political engagement among Zooniverse.com project participants has been conducted for this study…(More)”

A computational algorithm for fact-checking


Kurzweil News: “Computers can now do fact-checking for any body of knowledge, according to Indiana University network scientists, writing in an open-access paper published June 17 in PLoS ONE.

Using factual information from Wikipedia’s summary infoboxes as a source, they built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”

In the first use of this method, IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia. In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of the humans’ certitude about the accuracy of these statements.

Dealing with misinformation and disinformation

In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history, and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses, and Oscar-winning film directors with the movie for which they won the Best Picture award. The majority of tests returned highly accurate truth scores.

Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.

Significantly, the IU team found their computational method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it correctly assessed the fact that Steve Tesich — the Serbian-American screenwriter of the classic Hoosier film “Breaking Away” — graduated from IU, despite the information not being specifically addressed in the infobox about him.

Using multiple sources to improve accuracy and richness of data

“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study….

“These results are encouraging and exciting. We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public. Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”
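The path-based scoring the researchers describe can be sketched in miniature. The snippet below is an illustrative toy, not the IU team’s actual algorithm: the graph holds a handful of invented triples rather than Wikipedia’s 3 million concepts, and the scoring rule (penalising paths that pass through high-degree, “generic” nodes) is a simplified stand-in for the semantic proximity measure in the paper.

```python
from math import log

# Toy knowledge graph: each pair is a link between two concepts.
# (Invented examples; the real graph has ~3M concepts, 23M links.)
edges = [
    ("Socrates", "person"),
    ("Paris", "France"),
    ("France", "Europe"),
    ("Germany", "Europe"),
    ("Berlin", "Germany"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def truth_score(subject, obj, max_len=4):
    """Score a statement 'subject -- obj': a direct link counts as fully
    supported; otherwise take the best simple path, discounting paths
    that route through well-connected hub nodes."""
    if obj in graph.get(subject, ()):
        return 1.0
    best = 0.0
    stack = [(subject, [subject])]  # depth-first path enumeration
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt in path:
                continue  # keep paths simple (no revisits)
            if nxt == obj:
                inner = path[1:]  # intermediate nodes only
                score = 1.0 / (1.0 + sum(log(len(graph[n])) for n in inner))
                best = max(best, score)
            elif len(path) <= max_len:
                stack.append((nxt, path + [nxt]))
    return best

print(truth_score("Paris", "France"))     # direct link -> 1.0
print(truth_score("Berlin", "Europe"))    # indirect path via Germany
print(truth_score("Socrates", "France"))  # no path -> 0.0
```

The hub penalty is the key design choice: a path through a very general concept (say, “person”) connects almost everything to almost everything, so it should lend little support to a specific claim.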

Expanding the knowledge base

Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open-knowledge base built by Google, and note that multiple information sources could be used together to account for different belief systems….(More)”

‘Beating the news’ with EMBERS: Forecasting Civil Unrest using Open Source Indicators


Paper by Naren Ramakrishnan et al: “We describe the design, implementation, and evaluation of EMBERS, an automated, 24×7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the uptick and downtick of incidents during the June 2013 protests in Brazil. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings….(More)”
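The role of a fusion and suppression engine can be illustrated with a toy sketch. Everything below is invented for illustration (the model names, probabilities, the max-confidence fusion rule, and the 0.6 threshold); the actual EMBERS engine fuses its models and trades off evaluation criteria in more sophisticated ways than this.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    country: str
    event_date: str
    probability: float
    model: str  # which upstream data-source model produced this alert

def fuse_and_suppress(alerts, threshold=0.6):
    """Fuse per-model alerts: for each (country, date), keep the most
    confident model's alert, then suppress anything below the threshold.
    Raising the threshold trades recall for precision."""
    best = {}
    for a in alerts:
        key = (a.country, a.event_date)
        if key not in best or a.probability > best[key].probability:
            best[key] = a
    return [a for a in best.values() if a.probability >= threshold]

# Hypothetical alerts from three source-specific models.
raw_alerts = [
    Alert("Brazil", "2013-06-20", 0.85, "tweet-volume"),
    Alert("Brazil", "2013-06-20", 0.55, "news-keywords"),
    Alert("Venezuela", "2013-06-21", 0.40, "blog-topics"),
]
issued = fuse_and_suppress(raw_alerts)
print([(a.country, a.model) for a in issued])  # [('Brazil', 'tweet-volume')]
```

Keeping the per-model provenance on each alert is what makes an audit trail like the one the paper describes possible: for any forecast, one can trace which model fired and on what data.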

Government data does not mean data governance: Lessons learned from a public sector application audit


Paper by Nik Thompson, Ravi Ravindran, and Salvatore Nicosia: “Public sector agencies routinely store large volumes of information about individuals in the community. The storage and analysis of this information benefits society, as it enables relevant agencies to make better informed decisions and to address the individual’s needs more appropriately. Members of the public often assume that the authorities are well equipped to handle personal data; however, due to implementation errors and lack of data governance, this is not always the case. This paper reports on an audit conducted in Western Australia, focusing on findings in the Police Firearms Management System and the Department of Health Information System. In the case of the Police, the audit revealed numerous data protection issues leading the auditors to report that they had no confidence in the accuracy of information on the number of people licensed to possess firearms or the number of licensed firearms. Similarly alarming conclusions were drawn in the Department of Health as auditors found that they could not determine which medical staff member was responsible for clinical data entries made. The paper describes how these issues often do not arise from existing business rules or the technology itself, but from a lack of sound data governance. Finally, a discussion section presents key data governance principles and best practices that may guide practitioners involved in data management. These cases highlight very real data management concerns, and the associated recommendations provide the context to spark further interest in the applied aspects of data protection….(More)”

 

Architecting Transparency: Back to the Roots – and Forward to the Future?


Paper by Dieter Zinnbauer: “Where to go next in research and practice on information disclosure and institutional transparency? Where to learn and draw inspiration from? How about if we go back to the roots and embrace an original, material notion of transparency as the quality of a substance or element to be see-through? How about, if we then explore how the deliberate use and assemblage of such physical transparency strategies in architecture and design connects to – or could productively connect to – the institutional, political notions of transparency that we are concerned with in our area of institutional or political transparency? Or put more simply and zooming in on one core aspect of the conversation: what have the arrival of glass and its siblings done for democracy and what can we still hope they will do for open, transparent governance now and in the future?

This paper embarks upon this exploratory journey in four steps. It starts out (section 2.1) by revisiting the historic relationship between architecture, design and the built environment on the one side and institutional ambitions for democracy, openness, transparency and collective governance on the other side. Quite surprisingly it finds a very close and ancient relationship between the two. Physical and political transparency have through the centuries been joined at the hip and this relationship – overlooked as it typically is – has persisted in very important ways in our contemporary institutions of governance. As a second step I seek to trace the major currents in the architectural debate and practice on transparency over the last century and ask three principal questions:

– How have architects as the master-designers of the built environment in theory, criticism and practice historically grappled with the concept of transparency? To what extent have they linked material notions and building strategies of transparency to political and social notions of transparency as tools for emancipation and empowerment? (section 2.2.)

– What is the status of transparency in architecture today and what is the degree of cross-fertilisation between physical and institutional/political transparency? (section 3)

– Where could a closer connect between material and political transparency lead us in terms of inspiring fresh experimentation and action in order to broaden the scope of available transparency tools and spawn fresh ideas and innovation? (section 4).

Along the way I will scan the fragmented empirical evidence base for the actual impact of physical transparency strategies and also flag interesting areas for future research. As it turns out, an obsession with material transparency in architecture and the built environment has evolved in parallel and in many ways predates the rising popularity of transparency in political science and governance studies. There are surprising parallels in the hype-and-skepticism curve, common challenges, interesting learning experiences and a rich repertoire of ideas for cross-fertilisation and joint ideation that is waiting to be tapped. However, this will require finding ways to bridge the current disconnect between the physical and institutional transparency professions and moving beyond the current pessimism about the actual potential of physical transparency beyond empty gestures or deployment for surveillance, notions that seem to linger on both sides. But the analysis shows that this bridge-building could be an extremely worthwhile endeavor. Both the available empirical data and the ideas that even this first brief excursion into physical transparency has yielded bode well for embarking on this cross-disciplinary conversation about transparency. And as the essay also shows, help from three very unexpected corners might be on the way to re-ignite the spark for taking the physical dimension of transparency seriously again. Back to the roots has a bright future….(More)

Waze and the Traffic Panopticon


 in the New Yorker: “In April, during his second annual State of the City address, Los Angeles Mayor Eric Garcetti announced a data-sharing agreement with Waze, the Google-owned, Israel-based navigation service. Waze is different from most navigation apps, including Google Maps, in that it relies heavily on real-time, user-generated data. Some of this data is produced actively—a driver or passenger sees a stalled vehicle, then uses a voice command or taps a stalled-vehicle icon on the app to alert others—while other data, such as the user’s location and average speed, is gathered passively, via smartphones. The agreement will see the city provide Waze with some of the active data it collects, alerting drivers to road closures, construction, and parades, among other things. From Waze, the city will get real-time data on traffic and road conditions. Garcetti said that the partnership would mean “less congestion, better routing, and a more livable L.A.” Di-Ann Eisnor, Waze’s head of growth, acknowledged to me that these kinds of deals can cause discomfort to the people working inside city government. “It’s exciting, but people inside are also fearful because it seems like too much work, or it seems so unknown,” she said.

Indeed, the deal promises to help the city improve some of its traffic and infrastructure systems (L.A. still uses paper to manage pothole patching, for example), but it also acknowledges Waze’s role in the complex new reality of urban traffic planning. Traditionally, traffic management has been a largely top-down process. In Los Angeles, it is coördinated in a bunker downtown, several stories below the sidewalk, where engineers stare at blinking lights representing traffic and live camera feeds of street intersections. L.A.’s sensor-and-algorithm-driven Automated Traffic Surveillance and Control System is already one of the world’s most sophisticated traffic-mitigation tools, but it can only do so much to manage the city’s eternally unsophisticated gridlock. Los Angeles appears to see its partnership with Waze as a further step toward strengthening the bridge between its subterranean panopticon and the rest of the city, much like other metropolises that have struck deals with Waze under the company’s Connected Cities program.
Among the early adopters is Rio de Janeiro, whose urban command center tracks everything from accidents to hyperlocal weather conditions, pulling data from thirty departments and private companies, including Waze. “In Rio,” Eisnor said, traffic managers “were able to change the garbage routes, figure out where to install cameras, and deploy traffic personnel” because of the program. She also pointed out that Connected Cities has helped municipal workers in Washington, D.C., patch potholes within forty-eight hours of their being identified on Waze. “We’re helping reframe city planning through not just space but space and time,” she said…..(More)

The privacy paradox: The privacy benefits of privacy threats


Paper by Benjamin Wittes and Jodie Liu: “In this paper, Wittes and Liu argue that how we balance the relative value of different forms of privacy is a function of how much we fear the potential audiences from whom we want to keep certain information secret.

Some basic principles these authors propose regarding the nature of privacy are as follows:

  1. Most new technologies both enhance and diminish privacy, depending on how they are used, who is using them, and what sorts of privacy that person values.
  2. Individual concern with privacy often will not involve privacy in the abstract, but rather privacy vis-à-vis specific audiences – that is to say, the question of privacy from whom matters.
  3. At least some modern technologies that we commonly think of as privacy-eroding may in fact enhance privacy from the people in our immediate surroundings.

From Google searches to online shopping to Kindle readers, the privacy equation is seldom as simple as a trade of convenience for privacy. It is far more often a tradeoff among different types of privacy, Wittes and Liu suggest. In conclusion, the privacy debate does not pay much attention to aggregated consumer preferences as a metric against which to measure privacy, and the authors venture to suggest that it should….(More)”

A Framework for Adoption of Challenges and Prizes in US Federal Agencies: A Study of Early Adopters


Thesis by Louis, Claudia (Syracuse University): “In recent years we have witnessed a shift in the innovation landscape of organizations from closed to more open models embracing solutions from the outside. Widespread use of the internet and Web 2.0 technologies have made it easier for organizations to connect with their clients, service providers, and the public at large for more collaborative problem solving and innovation. Introduction of the Open Government initiative accompanied by the America COMPETES Reauthorization Act signaled an unprecedented commitment by the US Federal Government to stimulating more innovation and creativity in problem solving. The policy and legislation empowered agencies to open up their problem solving space beyond their regular pool of contractors in finding solutions to the nation’s most complex problems.

This is an exploratory study of the adoption of challenges as an organizational innovation in public sector organizations. The main objective is to understand and explain how, and under what conditions challenges are being used by federal agencies and departments as a tool to promote innovation. The organizational innovation literature provides the main theoretical foundation for this study, but does not directly address contextual aspects regarding the type of innovation and the type of organization. The guiding framework uses concepts drawn from three literature streams: organizational innovation, open innovation, and public sector innovation…. (More)”