High-performance medicine: the convergence of human and artificial intelligence


Eric Topol in Nature Medicine: “The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications, will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen….(More)”.

Crowdsourced mapping in crisis zones: collaboration, organisation and impact


Amelia Hunt and Doug Specht in the Journal of International Humanitarian Action: “Crowdsourced mapping has become an integral part of humanitarian response, with high-profile deployments of platforms following the Haiti and Nepal earthquakes, and the multiple projects initiated during the Ebola outbreak in West Africa in 2014, being prominent examples. There have also been hundreds of lower-profile deployments of crowdsourced mapping projects across the globe.

This paper, through an analysis of 51 mapping deployments between 2010 and 2016, complemented with expert interviews, seeks to explore the organisational structures that create the conditions for effective mapping actions, and the relationship between the commissioning body, often a non-governmental organisation (NGO), and the volunteers who regularly make up the team charged with producing the map.

The research suggests that there are three distinct areas that need to be improved in order to provide appropriate assistance through mapping in humanitarian crises: regionalise, prepare and research. Based on the case studies, the paper shows how each of these areas can be handled more effectively, concluding that failure to implement any one of them sufficiently can lead to overall project failure….(More)”

Political Selection and Bureaucratic Productivity


Paper by James P. Habyarimana et al: “Economic theory of public bureaucracies as complex organizations predicts that bureaucratic productivity can be shaped by the selection of different types of agents, beyond their incentives. This theory applies to the institutions of local government in the developing world, where nationally appointed bureaucrats and locally elected politicians together manage the implementation of public policies and the delivery of services. Yet, there is no evidence on whether (and which) selection traits of these bureaucrats and politicians matter for the productivity of local bureaucracies.

This paper addresses the empirical gap by gathering rich data in an institutional context of district governments in Uganda, which is typical of the local state in poor countries. The paper measures traits such as the integrity, altruism, personality, and public service motivation of bureaucrats and politicians. It finds robust evidence that higher integrity among locally elected politicians is associated with substantively better delivery of public health services by district bureaucracies. Together with the theory, this evidence suggests that policy makers seeking to build local state capacity in poor countries should take political selection seriously….(More)”.

Societal costs and benefits of high-value open government data: a case study in the Netherlands


Paper by F.M. Welle Donker and B. van Loenen: “Much research has emphasised the benefits of open government data, and especially high-value data. The G8 Open Data Charter defines high-value data as data that improve democracy and encourage the innovative reuse of the particular data. Thus, governments worldwide invest resources to identify potential high-value datasets and to publish these data as open data. However, while the benefits of open data are well researched, the costs of publishing data as open data are less well researched. This research examines the relationship between the costs of making data suitable for publication as (linked) open data and the societal benefits thereof. A case study of five high-value datasets was carried out in the Netherlands to provide a societal cost-benefit analysis of open high-value data. Different options were investigated, ranging from not publishing the dataset at all to publishing the dataset as linked open data.

In general, it can be concluded that the societal benefits of (linked) open data are higher than the costs. The case studies show that there are differences between the datasets. In many cases, costs for open data are an integral part of general data management costs and hardly lead to additional costs. In certain cases, however, the costs to anonymize/aggregate the data are high compared to the potential value of an open data version of the dataset. Although, for these datasets, this leads to a less favourable relationship between costs and benefits, the societal benefits would still be higher than without an open data version….(More)”.

Defining subnational open government: does local context influence policy and practice?


M. Chatwin, G. Arku and E. Cleave in Policy Sciences: “What is open government? The contemporary conceptualization of open government remains rooted in transparency and accountability, but it is embedded within the political economy of policy, where forces of globalization through supranational organizations strongly influence the creation and dispersion of policy across the globe. Recognizing the direct impact of subnational governments on residents, in 2016 the Open Government Partnership (OGP) launched the Subnational Pioneers Pilot Project with 15 participating government authorities globally. Each subnational participant submitted an action plan for opening their government information and processes in 2017. The uniformity of the OGP action plan provides a unique opportunity to assess the conception of open government at the subnational level globally. This paper uses a document analysis to examine how open government is conceptualized at the subnational level, including the salience of various components, and how local context can influence the development of action plans that are responsive to the realities of each participating jurisdiction. This paper assesses whether being a part of the political economy of policy homogenizes the action plans of 15 subnational governments or allows for local context to influence the design of commitments still aligned within a general theme….(More)”.

Agile research


Michael Twidale and Preben Hansen at First Monday: “Most of us struggle when starting a new research project, even if we have considerable prior experience. It is a new topic and we are unsure about what to do, how to do it and what it all means. We may not have reflected much on our research process. Furthermore, the way that research is described in the literature can be rather disheartening. Those papers describe what seems to be a nice, clear, linear, logical, even inevitable progression through a series of stages. It seems like proper researchers carefully plan everything out in advance and then execute that plan. How very different from the mess, the bewilderment, the false starts, the dead ends, the reversions and changes that we make along the way. Are we just doing research wrong? If it feels like that to established researchers with decades of experience and a nice publication record, how much worse must it feel to a new researcher, such as a Ph.D. student? If they are lucky they may have a good mentoring experience, effectively serving an apprenticeship with a wise and nurturing adviser in a supportive group of fellow researchers. Even so, it can be all too easy to feel like an imposter who must be doing it all wrong, because what you are doing is not at all like what you read about what others are doing.

In the light of these confusions, fears, doubts and mismatches between what you experience while doing research and what you think is the right and proper way, as alluded to in all the papers you read, we want to explore ideas around a title, or at least a provocative metaphor, of “agile research”. We want to ask the question: “how might we take the ideas, the methods and the underlying philosophy behind agile software development and explore how these might be applied in the context of doing research?” This paper is not about sharing a set of methods that we have developed but more about provoking a discussion about the issue: What might agile research be like? How might it work? When might it be useful? When might it be problematic? Is it worth trying? Are people doing it already?

We are not claiming that this idea is wholly new. Many people have been using small-scale, rapid, iterative methods within the research process for a long time. Rather, we think that it can be useful to consider all these and other possible methods in the light of the successful deployment of agile software development processes, and to contrast them with more conventional research processes that rely more on careful advance planning. That is not to say that the latter methods are bad, just that other methods that might be characterized as more agile can be useful in particular circumstances.

We believe that it is worth exploring this idea as a way of addressing the problems that arise in trying to do a new research project, especially where an exploratory approach is useful. This could be in a domain that is new to the researcher, or where the domain is new in some way, such as involving new use contexts, new ways of interacting, new technologies, novel technology combinations, or new appropriations of existing technologies. We suspect this may be especially useful in helping new researchers such as PhD students get a better understanding of the research process in a less daunting manner. This work builds on prior thinking about how agile may be applied in university teaching and administration (Twidale and Nichols, 2013)….(More)”.

The Paradox of Police Data


Stacy Wood in KULA: knowledge creation, dissemination, and preservation studies: “This paper considers the history and politics of ‘police data.’ Police data, I contend, is a category of endangered data reliant on voluntary and inconsistent reporting by law enforcement agencies; it is also inconsistently described and routinely housed in systems that were not designed with long-term strategies for data preservation, curation or management in mind. Moreover, whereas US law enforcement agencies have, for over a century, produced and published a great deal of data about crime, data about the ways in which police officers spend their time and make decisions about resources—as well as information about patterns of individual officer behavior, use of force, and in-custody deaths—is difficult to find. This presents a paradoxical situation wherein vast stores of extant data are completely inaccessible to the public. This paradoxical state is not new, but rather the continuation of a long history co-constituted by technologies, epistemologies and context….(More)”.

Efficacious and Ethical Public Paternalism


Daniel M. Hausman in the Review of Behavioral Economics (Special Issue on Behavioral Economics and New Paternalism): “People often make bad judgments. A big brother or sister who was wise, well-informed, and properly motivated could often make better decisions for almost everyone. But can governments, which are not staffed with ideal big brothers or sisters, improve upon the mediocre decisions individuals make? If so, when and how? The risks of extending the reach of government into guiding individual lives must also be addressed. This essay addresses three questions concerning when paternalistic policies can be efficacious, efficient, and safe: 1. In what circumstances can policy makers be confident that they know better than individuals how individuals can best promote their own well-being? 2. What are the methods governments can use to lead people to make decisions that are better for themselves? 3. What are the moral pluses and minuses of these methods? Answering these questions defines a domain in which paternalistic policy is an attractive option….(More)”.

Participatory Design for Innovation in Access to Justice


Margaret Hagan at Daedalus: “Most access-to-justice technologies are designed by lawyers and reflect lawyers’ perspectives on what people need. Most of these technologies do not fulfill their promise because the people they are designed to serve do not use them. Participatory design, which was developed in Scandinavia as a process for creating better software, brings end users and other stakeholders into the design process to help decide what problems need to be solved and how. Work at the Stanford Legal Design Lab highlights new insights about what tools can provide the assistance that people actually need, and about where and how they are likely to access and use those tools. These participatory design models lead to more effective innovation and greater community engagement with courts and the legal system.

A decade into the push for innovation in access to justice, most efforts reflect the interests and concerns of courts and lawyers rather than the needs of the people the innovations are supposed to serve. New legal technologies and services, whether aiming to help people expunge their criminal records or to get divorced in more cooperative ways, have not been adopted by the general public. Instead, it is primarily lawyers who use them.

One way to increase the likelihood that innovations will serve clients would be to involve clients in designing them. Participatory design emerged in Scandinavia in the 1970s as a way to think more effectively about decision-making in the workplace. It evolved into a strategy for developing software in which potential users were invited to help define a vision of a product, and it has since been widely used for changing systems like elementary education, hospital services, and smart cities, which use data and technology to improve sustainability and foster economic development.

Participatory design’s promise is that “system innovation” is more likely to be effective in producing tools that the target group will use and in spending existing resources efficiently to do so. Courts spend an enormous amount of money on information technology every year. But the technology often fails to meet courts’ goals: barely half of the people affected are satisfied with courts’ customer service….(More)”.

A systematic review of the public administration literature to identify how to increase public engagement and participation with local governance


Paper by Josephine Gatti Schafer: “A systematic review of the public administration literature on public engagement and participation is conducted with the expressed intent to develop an actionable evidence base for public managers. Over 900 articles in nine peer-reviewed public administration journals are screened on the topic. The evidence from 40 articles is classified, summarized, and applied to inform the managerial practice of activating and recruiting the participation of the public in the affairs of local governance. The review also provides a brief explanation of how systematic reviews can fill a need in governance from the evidence-based management perspective….(More)”.