Health Citizenship: A New Social Contract To Improve The Clinical Trial Process


Essay by Cynthia Grossman and Tanisha Carino: “…We call this new social contract health citizenship, which includes a set of implied rights and responsibilities for all parties.

Three fundamental truths underpin our efforts:

  1. The path to better health and the advancement of science begins and ends with engaged patients.
  2. The biomedical research enterprise lives all around us — in clinical trials, the data in our wearables, electronic health records, and data used for payment.
  3. The stakeholders that fuel advancement — clinicians, academia, government, the private sector, and investors — must create a system focused on speeding medical research and ensuring that patients have appropriate access to treatments.

To find tomorrow’s cures, treatments, and prevention measures, every aspect of society needs to get involved. Health citizenship recognizes that the future of innovative research and development depends on both patients and the formal healthcare system stepping up to the plate.

Moving Toward A Culture Of Transparency  

Increasing clinical trial registration and the posting of research results are steps in the direction of transparency. Access to information about clinical trials — enrollment criteria, endpoints, locations, and results — is critical to empowering patients, their families, and primary care physicians. Transparency also has a cascading impact on the cost and speed of scientific discovery, by ensuring validation and reproducibility of results…

Encouraging Data Sharing

Data is the currency of biomedical research, and now patients are poised to contribute more of it than ever. In fact, many patients who participate in clinical research expect that their data will be shared and want to be partners, not just participants, in how data is used to advance the science and clinical practice that impact their disease or condition.

Engaging more patients in data sharing is only one part of what is needed to advance a data-sharing ecosystem. The National Academies of Sciences, Engineering, and Medicine (formerly the Institute of Medicine) conducted a consensus study that details the challenges to clinical trial data sharing. Out of that study spun a new data-sharing platform, Vivli, which will publicly launch this year. The New England Journal of Medicine took an important step toward demonstrating the value of sharing clinical trial data through its SPRINT Data Challenge, where it opened up a data set and supported projects that sought to derive new insights from the existing data. Examples like these will go a long way toward demonstrating the value of data sharing for advancing science, academic careers, and, most importantly, patient health.

As the technology for sharing clinical trial data improves, technology itself will become less of an impediment than the task of aligning incentives. The academic environment rewards researchers with first-author and top-tier journal publications, which encourages investigators to hold on to clinical trial data. A recent publication suggests a way to ensure academic credit for sharing data sets, in the form of publication credit, by allowing investigators to tag data sets with unique IDs.

While this effort could assist in incentivizing data sharing, we see the value of tagging data sets as a way to rapidly gather examples of the value of data sharing, including what types of data sets are taken up for analysis and what types of analyses or actions are most valuable. This type of information is currently missing, and, without the value proposition, it is difficult to encourage data sharing behavior.
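To make the tagging idea concrete, here is a minimal sketch of assigning a data set a unique, citable identifier and logging reuse events against it. The function name, metadata fields, and ID format are invented for illustration; they are not the scheme proposed in the cited publication, where a registered persistent identifier such as a DOI would play this role:

```python
import hashlib
import json

def mint_dataset_id(metadata: dict) -> str:
    """Derive a stable identifier from dataset metadata.

    A toy stand-in for a registered persistent identifier (e.g. a DOI):
    the same metadata always yields the same ID, so any downstream
    analysis can cite the data set unambiguously.
    """
    canonical = json.dumps(metadata, sort_keys=True)
    return "ds-" + hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical trial data set; the fields are illustrative only.
meta = {
    "title": "Trial XYZ individual participant data",
    "contributors": ["Investigator A", "Investigator B"],
    "version": "1.0",
}
dataset_id = mint_dataset_id(meta)

# Each reuse event cites the ID, so credit for sharing can be traced back
# to the original investigators — the kind of uptake evidence the authors
# argue is currently missing.
reuse_log = [{"dataset": dataset_id, "analysis": "secondary endpoint re-analysis"}]
```

Aggregating such reuse logs across many tagged data sets is one way the examples of value described above could be gathered.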

The value of clinical trial data will need to be collectively reexamined through embracing the sharing of data both across clinical trials and combined with other types of data. Similar to the airline and car manufacturing industries sharing data in support of public safety, as more evidence is gathered to support the impact of clinical trial data sharing and as the technology is developed to do this safely and securely, the incentives, resources, and equity issues will need to be addressed collectively…(More)”.

Crowdsourcing & Data Analytics: The New Settlement Tools


Paper by Bernard Chao, Christopher T. Robertson, and David V. Yokum: “Through the jury trial rights, the State and Federal Constitutions recognize the fundamental value of having laypersons resolve civil and criminal disputes. Nonetheless, settlement allows parties to avoid the risks and cost of trials, and settlements help clear court dockets efficiently. But achieving settlement can be a challenge. Parties naturally view their cases from different perspectives, and these perspectives often cause both sides to be overly optimistic. This article describes a novel method of providing parties more accurate information about the value of their case by incorporating layperson perspectives. Specifically, we suggest that, working with mediators or settlement judges, the parties should create mini-trials and then recruit hundreds of online mock jurors to render decisions. By applying modern statistical techniques to these results, the mediators can show the parties the likelihood of possible outcomes and also collect qualitative information about strengths and weaknesses for each side. These data will counter the parties’ unrealistic views and thereby facilitate settlement….(More)”.

Community Academic Research Partnership in Digital Contexts: Opportunities, Limitations, and New Ways to Promote Mutual Benefit


Report by Liat Racin and Eric Gordon: “It’s widely accepted that community-academic collaborations have the potential to involve more of the people and places that a community values as well as address the concerns of the very constituents that community-based organizations care for. Just how to involve them and ensure their benefit remains highly controversial in the digital age. This report provides an overview of the concerns, values, and the roles of digital data and communications in community-academic research partnerships from the perspectives of Community Partner Organizations (CPOs) in Boston, Massachusetts. It can serve as a resource for researchers and academic organizations seeking to better understand the position and sentiments of their community partners, and ways in which to utilize digital technology to address conflicting notions of what defines ‘good’ research as well as the power imbalances that may exist between all involved participants. As research involves community members and agencies more closely, it’s commonly assumed that the likelihood of CPOs accepting and endorsing a project’s or program’s outcomes increases if they perceive that the research itself is credible and has direct beneficial application.

Our research is informed by informal discussions with participants of events and workshops organized by both the Boston Civic Media Consortium and the Engagement Lab at Emerson College between 2015 and 2016. These events were free to the public and attended by both CPOs and academics from various fields and interest positions. We also conducted interviews with 20 CPO representatives in the Greater Boston region who were currently or had recently been engaged in academic research partnerships. These representatives brought a diverse mix of experiences and were not disproportionately associated with any one community issue. The interview protocol consisted of 15 questions that explored issues related to the benefits, challenges, structure, and outcomes of their academic collaborations. It also included questions about the nature and processes of data management. Our goal was to uncover patterns of belief in the roles, values, and concerns of CPO representatives in partnerships, focusing on how they understand and assign value to digital data and technology.

Unfortunately, the growing use and dependence on digital tools and technology in our modern-day research context has failed to inspire in-depth analysis on the influences of ‘the digital’ in community-engaged social research, such as how data is produced, used, and disseminated by community members and agencies. This gap exists despite the growing proliferation of digital technologies and born-digital data in the work of both social researchers and community groups (Wright, 2005; Thompson et al., 2003; Walther and Boyd 2002). To address this gap and identify the discourses about what defines ‘good’ research processes, we ask: “To what extent do community-academic partnerships meet the expectations of community groups?” And, “what are the main challenges of CPO representatives when they collaboratively generate and exchange knowledge with particular regard to the design, access and (re)use of digital data?”…(More)”.

A Framework for Strengthening Data Ecosystems to Serve Humanitarian Purposes


Paper by Marc van den Homberg et al: “The incidence of natural disasters worldwide is increasing. As a result, a growing number of people are in need of humanitarian support, for which limited resources are available. This requires an effective and efficient prioritization of the most vulnerable people in the preparedness phase, and the most affected people in the response phase, of humanitarian action. Data-driven models have the potential to support this prioritization process. However, applying these models in a country requires a certain level of data preparedness.
To achieve this level of data preparedness on a large scale we need to know how to facilitate, stimulate, and coordinate data-sharing between humanitarian actors. We use a data ecosystem perspective to develop success criteria for establishing a “humanitarian data ecosystem”. We first present the development of a general framework with data ecosystem governance success criteria based on a systematic literature review. Subsequently, the applicability of this framework in the humanitarian sector is assessed through a case study on the “Community Risk Assessment and Prioritization toolbox” developed by the Netherlands Red Cross. The empirical evidence led to the adaptation of the framework to the specific criteria that need to be addressed when aiming to establish a successful humanitarian data ecosystem….(More)”.

Data sharing in PLOS ONE: An analysis of Data Availability Statements


Lisa M. Federer et al at PLOS One: “A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis. In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016. Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy. These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing….(More)”.
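The core of such an analysis is sorting each Data Availability Statement into a category (repository, in-paper/supplemental, on-request, other). The sketch below illustrates the rough shape of that step with simple keyword rules; the categories and keywords are invented for illustration, and the actual study relied on a more careful coding scheme than naive keyword matching:

```python
from collections import Counter

# Hypothetical keyword rules for illustration only; real statements are
# messier and the PLOS study's coding was more careful than this.
RULES = [
    ("repository", ["dryad", "figshare", "genbank", "doi.org", "repository"]),
    ("in_paper_or_si", ["within the paper", "supporting information", "supplemental"]),
    ("on_request", ["upon request", "on request"]),
]

def classify_statement(statement: str) -> str:
    """Assign a Data Availability Statement to its first matching category."""
    text = statement.lower()
    for label, keywords in RULES:
        if any(keyword in text for keyword in keywords):
            return label
    return "other"

# Three invented example statements, one per category.
statements = [
    "All relevant data are within the paper and its Supporting Information files.",
    "Data are available from the Dryad Digital Repository.",
    "Data are available from the corresponding author upon request.",
]
counts = Counter(classify_statement(s) for s in statements)
```

Tallying `counts` over tens of thousands of statements is what lets a study like this report, for example, that only about 20% of statements point to a repository.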

Data Activism and Social Change


Book by Miren Gutiérrez: “This book efficiently contributes to our understanding of the interplay between data, technology and communicative practice on the one hand, and democratic participation on the other. It addresses the emergence of proactive data activism, a new sociotechnical phenomenon in the field of action that arises as a reaction to massive datafication, and makes affirmative use of data for advocacy and social change.
By blending empirical observation and in-depth qualitative interviews, Gutiérrez brings to the fore a debate about the social uses of the data infrastructure and examines precisely how people employ it, in combination with other technologies, to collaborate and act for social change….(More)”.

Creating a Machine Learning Commons for Global Development


Blog by Hamed Alemohammad: “Advances in sensor technology, cloud computing, and machine learning (ML) continue to converge to accelerate innovation in the field of remote sensing. However, fundamental tools and technologies still need to be developed to drive further breakthroughs and to ensure that the Global Development Community (GDC) reaps the same benefits that the commercial marketplace is experiencing. This process requires us to take a collaborative approach.

Data collaborative innovation — that is, a group of actors from different data domains working together toward common goals — might hold the key to finding solutions for some of the global challenges that the world faces. That is why Radiant.Earth is investing in new technologies such as Cloud Optimized GeoTIFFs, Spatial Temporal Asset Catalogues (STAC), and ML. Our approach to advance ML for global development begins with creating open libraries of labeled images and algorithms. This initiative and others require — and, in fact, will thrive as a result of — using a data collaborative approach.
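A catalogue like STAC works by describing each labeled image and its assets in a small, machine-readable record that any tool can index. The sketch below, with invented IDs and URLs, shows the rough shape of such a record and a minimal presence check on its core fields; it follows the STAC item layout loosely and is not a spec-complete implementation:

```python
# A hand-written, STAC-style item describing one labeled training scene.
# The ID, bounding box, and URLs are invented for illustration.
item = {
    "type": "Feature",
    "id": "labeled-scene-001",
    "bbox": [-122.6, 37.6, -122.3, 37.9],
    "properties": {
        "datetime": "2018-06-01T00:00:00Z",
        "label:classes": ["building", "road"],
    },
    "assets": {
        # The imagery itself, stored as a Cloud Optimized GeoTIFF so
        # clients can read just the windows they need over HTTP.
        "image": {
            "href": "https://example.com/scene-001.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        },
        # The labels that make the scene usable as ML training data.
        "labels": {
            "href": "https://example.com/scene-001-labels.geojson",
            "type": "application/geo+json",
        },
    },
}

REQUIRED_FIELDS = ["type", "id", "bbox", "properties", "assets"]

def is_valid_item(candidate: dict) -> bool:
    """Check only that the core fields are present (a minimal sanity check)."""
    return all(field in candidate for field in REQUIRED_FIELDS)
```

Because records like this are plain, self-describing metadata, many organizations can contribute labeled scenes to a shared open library without agreeing on a common storage backend first.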

“Data is only as valuable as the decisions it enables.”

This quote by Ion Stoica, professor of computer science at the University of California, Berkeley, may best describe the challenge facing those of us who work with geospatial information:

How can we extract greater insights and value from the unending tsunami of data that is before us, allowing for more informed and timely decision making?…(More)”.

Optimal Scope for Free Flow of Non-Personal Data in Europe


Paper by Simon Forge for the European Parliament Think Tank: “Data is not static in a personal/non-personal classification – with modern analytic methods, certain non-personal data can help to generate personal data – so the distinction may become blurred. Thus, de-anonymisation techniques, aided by advances in artificial intelligence (AI) and the manipulation of large datasets, will become a major issue. In some new applications, such as smart cities and connected cars, the enormous volumes of data gathered may be used for personal information as well as for non-personal functions, so such data may cross over from the technical and non-personal into the personal domain. A debate is taking place on whether current EU restrictions on confidentiality of personal private information should be relaxed so as to include personal information in free and open data flows. However, it is unlikely that a loosening of such rules will be positive for the growth of open data. Public distrust of open data flows may be exacerbated because of fears of potential commercial misuse of such data, as well as of leakages, cyberattacks, and so on. The proposed recommendations are: to promote the use of open data licences to build trust and openness; to promote sharing of private enterprises’ data within and across vertical sectors, increasing the volume of open data through incentive programmes; to support testing for contamination of open data mixed with personal data, ensuring open data is scrubbed clean and so reinforcing public confidence; and to ensure anti-competitive behaviour does not compromise the open data initiative….(More)”.

Redefining ‘impact’ so research can help real people right away, even before becoming a journal article


Perhaps nowhere is impact of greater importance than in my own fields of ecology and conservation science. Researchers often conduct this work with the explicit goal of contributing to the restoration and long-term survival of the species or ecosystem in question. For instance, research on an endangered plant can help to address the threats facing it.

But scientific impact is a very tricky concept. Science is a process of inquiry; it’s often impossible to know what the outcomes will be at the start. Researchers are asked to imagine potential impacts of their work. And people who live and work in the places where the research is conducted may have different ideas about what impact means.

In collaboration with several Bolivian colleagues, I studied perceptions of research and its impact in a highly biodiverse area in the Bolivian Amazon. We found that researchers – both foreign-based and Bolivian – and people living and working in the area had different hopes and expectations about what ecological research could help them accomplish…

Eighty-three percent of researchers queried told us their work had implications for management at community, regional and national levels rather than at the international level. For example, knowing the approximate populations of local primate species can be important for communities who rely on the animals for food and ecotourism.

But the scale of relevance didn’t necessarily dictate how researchers actually disseminated the results of their work. Rather, we found that the strongest predictor of how and with whom a researcher shared their work was whether they were based at a foreign or national institution. Foreign-based researchers had extremely low levels of local, regional or even national dissemination. However, they were more likely than national researchers to publish their findings in the international literature….

Rather than impact being addressed at the end of research, societal impacts can be part of the first stages of a study. For example, people living in the region where data is to be collected might have insight into the research questions being investigated; scientists need to build in time and plan ways to ask them. Ecological fieldwork presents many opportunities for knowledge exchange, new ideas and even friendships between different groups. Researchers can take steps to engage more directly with community life, such as by taking a few hours to teach local school kids about their research….(More)”.

The world’s first neighbourhood built “from the internet up”


The Economist: “Quayside, an area of flood-prone land stretching for 12 acres (4.8 hectares) on Toronto’s eastern waterfront, is home to a vast, pothole-filled parking lot, low-slung buildings and huge soyabean silos—a crumbling vestige of the area’s bygone days as an industrial port. Many consider it an eyesore but for Sidewalk Labs, an “urban innovation” subsidiary of Google’s parent company, Alphabet, it is an ideal location for the world’s “first neighbourhood built from the internet up”.

Sidewalk Labs is working in partnership with Waterfront Toronto, an agency representing the federal, provincial and municipal governments that is responsible for developing the area, on a $50m project to overhaul Quayside. It aims to make it a “platform” for testing how emerging technologies might ameliorate urban problems such as pollution, traffic jams and a lack of affordable housing. Its innovations could be rolled out across an 800-acre expanse of the waterfront—an area as large as Venice.

Sidewalk Labs is planning pilot projects across Toronto this summer to test some of the technologies it hopes to employ at Quayside; this is partly to reassure residents. If its detailed plan is approved later this year (by Waterfront Toronto and also by various city authorities), it could start work at Quayside in 2020.

That proposal contains ideas ranging from the familiar to the revolutionary. There will be robots delivering packages and hauling away rubbish via underground tunnels; a thermal energy grid that does not rely on fossil fuels; modular buildings that can shift from residential to retail use; adaptive traffic lights; and snow-melting sidewalks. Private cars would be banned; a fleet of self-driving shuttles and robotaxis would roam freely. Google’s Canadian headquarters would relocate there.

Undergirding Quayside would be a “digital layer” with sensors tracking, monitoring and capturing everything from how park benches are used to levels of noise to water use by lavatories. Sidewalk Labs says that collecting, aggregating and analysing such volumes of data will make Quayside efficient, liveable and sustainable. Data would also be fed into a public platform through which residents could, for example, allow maintenance staff into their homes while they are at work.

Similar “smart city” projects, such as Masdar in the United Arab Emirates or South Korea’s Songdo, have spawned lots of hype but are not seen as big successes. Many experience delays because of shifting political and financial winds, or because those overseeing their construction fail to engage locals in the design of communities, says Deland Chan, an expert on smart cities at Stanford University. Dan Doctoroff, the head of Sidewalk Labs, who was deputy to Michael Bloomberg when the latter was mayor of New York City, says that most projects flop because they fail to cross what he terms “the urbanist-technologist divide”.

That divide, between tech types and city-planning specialists, will also need to be bridged before Sidewalk Labs can stick a shovel in the soggy ground at Quayside. Critics of the project worry that in a quest to become a global tech hub, Toronto’s politicians may give it too much freedom. Sidewalk Labs’s proposal notes that the project needs “substantial forbearances from existing [city] laws and regulations”….(More)”.