Re-Use Of Public Sector Open Data: Characterising The Phenomena


Paper by Josefin Lassinantti in the International Journal of Public Information Systems: “Despite the growing amount of open data, re-use of this data is not reaching the expected levels, and the phenomenon now seems hampered in its evolvement. Therefore, this study sets out to characterize the re-use of open data from the public sector in order to develop a more elaborate understanding of this practice, and does so by performing a literature review inspired by the processes for defining concepts, contextualized within the historical evolvement of European open data policies. Apart from the identification of three main research approaches towards open data re-use and an elaborated definition of re-use, the findings led to the creation of a framework enabling us to see open data re-use as an iterative value-creating process in two different contexts: the public task context and the non-public task context. This process builds on three categories of meta-activities for re-use practice: 1) gaining access to and understanding data, 2) handling and re-purposing the data, and 3) creating broader value from data, as well as indications of value for whom. Lastly, implications of this re-use process and framework were discussed, along with implications of an identified practice-policy mismatch that risks hampering the future evolvement of open data re-use…(More)”.

Microsoft’s Open Notre Dame initiative calls for sharing of open data in restoration effort


Hamza Jawad at Neowin: “On April 15, a disastrous fire ravaged the famous Notre-Dame cathedral in France. In the wake of the disaster, tech companies, such as Apple, announced that they would be donating to help in rebuilding efforts. Other companies, like Ubisoft, took a different approach to supporting the restoration that followed.

A few days ago, Microsoft and Iconem announced the “Open Notre Dame” initiative to contribute towards the restoration of the ‘Lady of Paris’. The open data project is said to help gather and analyze existing documents on the monument, while simultaneously producing and sharing its 3D models. Today, the company has once again detailed the workings of this initiative, along with a call for the sharing of open data to help quicken the restoration efforts….

GitHub will host temporal models of the building, which can then be easily shared with and accessed by various other initiatives in a concerted effort to maintain as much accuracy as possible. Many companies, including Ubisoft, have already provided data that will help form the foundation for these open source models. More details regarding the project can be obtained from the original blog post….(More)”.

Data Protection and Digital Agency for Refugees


Paper by Dragana Kaurin: “For the millions of refugees fleeing conflict and persecution every year, access to information about their rights and control over their personal data are crucial for their ability to assess risk and navigate the asylum process. While asylum seekers are required to provide significant amounts of personal information on their journey to safety, they are rarely fully informed of their data rights by UN agencies or local border control and law enforcement staff tasked with obtaining and processing their personal information. Despite recent improvements in data protection mechanisms in the European Union, refugees’ informed consent for the collection and use of their personal data is rarely sought. Using examples drawn from interviews with refugees who have arrived in Europe since 2013, and an analysis of the impacts of the 2016 EU-Turkey deal on migration, this paper analyzes how the vast amount of data collected from refugees is gathered, stored and shared today, and considers the additional risks this collection process poses to an already vulnerable population navigating a perilous information-decision gap….(More)”.

Crowdsourcing Research Questions? Leveraging the Crowd’s Experiential Knowledge for Problem Finding


Paper by Tiare-Maria Brasseur, Susanne Beck, Henry Sauermann and Marion Poetz: “Recently, both researchers and policy makers have become increasingly interested in involving the general public (i.e., the crowd) in the discovery of new science-based knowledge. There has been a boom in citizen science/crowd science projects (e.g., Foldit or Galaxy Zoo) and global policy aspirations for greater public engagement in science (e.g., Horizon Europe). At the same time, however, there are also criticisms of and doubts about this approach. Science is complex and laypeople often do not have the appropriate knowledge base for scientific judgments, so they rely on specialized experts (i.e., scientists) (Scharrer, Rupieper, Stadtler & Bromme, 2017). Given these two perspectives, there is no consensus yet on what the crowd can do and what only researchers should do in scientific processes (Franzoni & Sauermann, 2014). Previous research demonstrates that crowds can be efficiently and effectively used in late stages of the scientific research process (i.e., data collection and analysis). We are interested in finding out what crowds can actually contribute to research processes beyond data collection and analysis. Specifically, this paper aims at providing initial empirical insights on how to leverage not only the sheer number of crowd contributors, but also their diversity in experience, for early phases of the research process (i.e., problem finding). In an online and a field experiment, we develop and test suitable mechanisms for facilitating the transfer of the crowd’s experience into scientific research questions. In doing so, we address the following two research questions: 1. What factors influence crowd contributors’ ability to generate research questions? 2. How do research questions generated by crowd members differ from research questions generated by scientists in terms of quality? 
There are strong claims about the significant potential of people with experiential knowledge, i.e., sticky problem knowledge derived from one’s own practical experience and practices (Collins & Evans, 2002), to enhance the novelty and relevance of scientific research (e.g., Pols, 2014). Previous evidence that crowds with experiential knowledge (e.g., users in Poetz & Schreier, 2012) or ‘outsiders’/non-obvious individuals (Jeppesen & Lakhani, 2010) can outperform experts under certain conditions by having novel perspectives supports the assumption that the participation of non-scientists (i.e., crowd members) in scientific problem-finding might complement scientists’ lack of experiential knowledge. Furthermore, by bringing in exactly these new perspectives, they might help overcome problems of fixation/inflexibility in cognitive-search processes among scientists (Acar & van den Ende, 2016). Thus, crowd members with (higher levels of) experiential knowledge are expected to be superior in identifying very novel and out-of-the-box research problems with high practical relevance, as compared to scientists. However, there are clear reasons to be skeptical: despite their advantage of possessing important experiential knowledge, the crowd lacks the scientific knowledge we assume to be required to formulate meaningful research questions. To study exactly how the transfer of crowd members’ experiential knowledge into science can be facilitated, we conducted two experimental studies in the context of traumatology (i.e., research on accidental injuries). First, we conducted a large-scale online experiment (N=704) in collaboration with an international crowdsourcing platform to test the effect of two facilitating treatments on crowd members’ ability to formulate real research questions (study 1). We used a 2 (structuring knowledge/no structuring knowledge) x 2 (science knowledge/no science knowledge) between-subject experimental design. 
Second, we tested the same treatments in the field (study 2), i.e., in a crowdsourcing project in collaboration with the LBG Open Innovation in Science Center. We invited patients, caretakers and medical professionals (e.g., surgeons, physical therapists or nurses) concerned with accidental injuries to submit research questions using a customized online platform (https://tell-us.online/) to investigate the causal relationship between our treatments and different types and levels of experiential knowledge (N=118). An international jury of experts (i.e., journal editors in the field of traumatology) then assesses the quality of submitted questions (from the online and field experiments) along several quality dimensions (i.e., clarity, novelty, scientific impact, practical impact, feasibility) in an online evaluation process. To assess the net effect of our treatments, we further include a random sample of research questions obtained from early-stage research papers (i.e., conference papers) in the expert evaluation (blind to the source) and compare them with the baseline groups of our experiments. We are currently finalizing the data collection…(More)”.

News in a Digital Age – Comparing the Presentation of News Information over Time and Across Media Platforms


Report by Rand Corporation: “Over the past 30 years, the way that Americans consume and share information has changed dramatically. People no longer wait for the morning paper or the evening news. Instead, equipped with smartphones or other digital devices, the average person spends hours each day online, looking at news or entertainment websites, using social media, and consuming many different types of information. Although some of the changes in the way news and information are disseminated can be quantified, far less is known about how the presentation of news—that is, the linguistic style, perspective, and word choice used when reporting on current events and issues—has changed over this period and how it differs across media platforms.

We aimed to begin to fill this knowledge gap by identifying and empirically measuring how the presentation of news by U.S. news sources has changed over time and how news presentation differs across media platforms….(More)”.

Open government in authoritarian regimes


Paper by Karl O’Connor, Colin Knox and Saltanat Janenova: “Open government has long been regarded as a Pareto-efficient policy – after all, who could be against such compelling policy objectives as transparency, accountability, citizen engagement and integrity? This paper addresses why an authoritarian state would adopt a policy of open government, which seems counter-intuitive, and tracks its outworking by examining several facets of the policy in practice. The research uncovers evidence of insidious bureaucratic obstruction and an implementation deficit counter-posed with an outward-facing political agenda to gain international respectability. The result is ‘half-open’ government in which the more benign elements have been adopted but the vested interests of government and business elites remain largely unaffected….(More)”.

Humans and Big Data: New Hope? Harnessing the Power of Person-Centred Data Analytics


Paper by Carmel Martin, Keith Stockman and Joachim P. Sturmberg: “Big data provide the hope of major health innovation and improvement. However, there is a risk of precision medicine based on predictive biometrics and service metrics overwhelming anticipatory, human-centered sense-making in the fuzzy emergence of personalized (big data) medicine. This is a pressing issue, given the paucity of individual sense-making data approaches. A human-centric model is described to address the gap in personal particulars and experiences in individual health journeys. The Patient Journey Record System (PaJR) was developed to improve human-centric healthcare by harnessing the power of person-centred data analytics using complexity theory and iterative health services and information systems applications over a 10-year period. PaJR is a web-based service supporting usually bi-weekly telephone calls by care guides to individuals at risk of readmission.

This chapter describes a case study of the timing and context of readmissions using human (biopsychosocial) particular data based on individual experiences and perceptions with differing patterns of instability. This Australian study, called MonashWatch, is a service pilot using the PaJR system in the Dandenong Hospital urban catchment area of the Monash Health network. The Victorian HealthLinks Chronic Care algorithm, drawing on state public hospital big data, provides case finding for high risk of readmission based on disease and service metrics. MonashWatch was actively monitoring 272 of 376 intervention patients, with 195 controls, over 22 months (ongoing) at the time of the study.

Three randomly selected intervention cases illustrate a dynamic interplay of self-reported changes in health and health care, medication, drug and alcohol use, and social support structure. While the three cases were at similar predicted risk initially, they represented statistically different time-series configurations and admission patterns. Fluctuations in admission were associated with (mal)alignment of bodily health with psychosocial and environmental influences. However, human interpretation was required to make sense of the patterns as presented by the multiple levels of data.

A human-centric model and framework for health journey monitoring illustrates the potential for ‘small’ personal experience data to inform clinical care in the era of big data, which is predominantly based on biometrics and medical-industrial processes…(More)”.

Government support is a key factor for civic technology


Blog post by Rebecca Rumbul: “Civic tech is on a huge growth curve. There is much more of it about now than there was ten years ago. At the same time, it is changing in scope and reach, and becoming much more mainstream. Ten years ago, civic tech was hardly spoken about by anyone. It was largely the domain of ‘outsiders’, by which I mean campaigners and data specialists working outside the mainstream. Today, civic tech is an accepted, respected and widely used form of engaging citizens.

The movement over those ten years has mostly been gradual, but over the last couple of years there has been a really significant shift in how civic tech is viewed both by those within and outside the sector. A wider range of funders are more interested in supporting projects, government seems to have woken up to how civic tech can really be a spur to public engagement, and the word is getting out there to people on the street. Quite literally: at mySociety, our FixMyStreet app now garners in the region of six thousand citizen reports of things like potholes and fly-tipping every week.

This maturing of attitudes towards, and use of, civic tech is wonderful to see. Those pioneers who saw a problem, wrote a bit of code and put it online as a way of immediately fixing that problem, have seen their often locally focused efforts contribute to the growth of a global phenomenon in a really short space of time. And we are in a process here: there is no doubt that civic tech continues to grow and continues to make an impact way beyond its humble beginnings.

But the way civic tech develops is not uniform around the world, and it does need a number of circumstances to converge to make it really sing. That coming together of citizen awareness, government buy-in and funding support is crucial to its success. And there are other important factors too.

We’ve been researching the impact of civic tech around the world, and one of the most interesting things we’ve learned is that the movement is working with institutions much more today than it did five or ten years ago…(More)“.

Open data could have helped us learn from another mining dam disaster


Paulo A. de Souza Jr. at Nature: “The recent Brumadinho dam disaster in Brazil is an example of infrastructure failure with catastrophic consequences. Over 300 people were reported dead or missing, and nearly 400 more were rescued alive. The environmental impact is massive and difficult to quantify. The frequency of these disasters demonstrates that the current assets for monitoring integrity and alerting managers, authorities and the public to ongoing change in tailings are, in many cases, not working as they should. There is also a need for adequate prevention procedures. Monitoring can be perfect, but without timely and appropriate action, it will be useless. Good management therefore requires quality data. Undisputedly, management practices at industrial sites, including audit procedures, must improve, and data and metadata available from preceding accidents should be better used. There is a rich literature available about the design, construction, operation, maintenance and decommissioning of tailings facilities. This includes guidelines, standards, case studies, technical reports, consultancy and audit practices, and scientific papers. Regulation varies from country to country, and in some cases, as in Australia and Canada, it is controlled by individual state agencies. There are, however, few datasets that are shared with the technical and scientific community more globally, particularly for prior incidents. Conspicuously lacking are comprehensive data related to the monitoring of large infrastructures such as mining dams.

Today, Scientific Data published a Data Descriptor presenting a dataset obtained from 54 laboratory experiments on the breaching of fluvial dikes due to flow overtopping. (Re)use of such data can help improve our understanding of the fundamental processes underpinning industrial infrastructure collapse (e.g., fluvial dike breaching, mining dam failure), and help assess the accuracy of numerical models for the prediction of such incidents. This is absolutely essential for better management of floods, mitigation of dam collapses, and similar accidents. The authors propose a framework that could exemplify how data involving similar infrastructure can be stored, shared, published, and reused…(More)”.

Problematizing data-driven urban practices: Insights from five Dutch ‘smart cities’


Paper by Damion J. Bunders and Krisztina Varró: “Recently, the concept of the smart city has gained growing popularity. As cities worldwide have set the aim to harness digital technologies for their development, increasing focus has come to lie on the potential challenges and concerns related to data-driven urban practices. In the existing literature, these challenges and concerns have been dominantly approached from a pragmatic perspective based on the a priori assumed ‘goodness’ of the smart city; for a small group of critics, the very notion of the smart city is questionable. This paper takes a middle way by interrogating how municipal and civil society stakeholders problematize the challenges and concerns related to data-driven practices in five Dutch cities, and how they act on these concerns in practice.

The lens of problematization posits that the ways of problematizing data-driven practices contribute to their actual enactment, and that this is an inherently political process. The case study shows that stakeholders not only perceive practical challenges but are also widely aware of, and are (partly) pro-actively engaging with, perceived normative-ethical and societal concerns, leading to different (sometimes inter-related) technological, legal/political, organizational, informative and participative strategies. Nonetheless, the explicit contestation of smart city policies through these strategies remains limited in scope. The paper argues that more research is needed to uncover the structural-institutional dynamics that facilitate and/or prevent the repoliticization of smart city projects…(More)”.