Nudging patients into clinical trials


Bradley J. Fikes in the San Diego Union Tribune: “The modern era’s dramatic advances in medical care rely on more than scientists, doctors and biomedical companies. None of it could come to fruition without patients willing to risk trying experimental therapies to see if they are safe and effective.

More than 220,000 clinical trials are taking place worldwide, with more than 81,000 of them in the United States, according to the federal government’s registry, clinicaltrials.gov. That poses a huge challenge for recruitment.

Companies are offering a variety of inducements to coax patients into taking part. Some rely on that good old standby, cash. Others remove obstacles. Axovant Sciences, which is preparing to test an Alzheimer’s drug, is offering patients transportation from the ridesharing service Lyft.

In addition, non-cash rewards such as iPads, opt-out enrollment in low-risk trials, or even a guarantee that patients will be informed about the clinical trial results should be considered, says a group of researchers who suggest testing these incentives scientifically.

In an article published Wednesday in Science Translational Medicine, the researchers present a matrix of these options, their benefits, and potential drawbacks. They urge companies to track the outcomes of these incentives to find out what works best.

The goal, the article states, is to “nudge” patients into participating, but not so far as to turn the nudge into a coercive shove. Go to j.mp/nudgeclin for the article.

As one nudge, the researchers suggest that a consent form could present enrollment as a choice among preferred appointment times, for example “Yes, morning appointments,” alongside a number of similarly worded options. That wording would “imply that enrollment is normative,” or customary, the article stated.

Researchers could go so far as to vary the offers to patients in a single clinical trial and measure which incentives produce the best responses, said Eric M. Van Epps, one of the researchers. In effect, that would provide a clinical trial of clinical trial incentives.
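A test like that is, at bottom, a randomized comparison of arms. Below is a minimal sketch of the analysis in Python, with hypothetical arm names and invented enrollment counts (none of these numbers come from the article); the comparison itself is a standard two-proportion test.

```python
# Hypothetical sketch: comparing enrollment rates across randomized incentive arms.
# Arm names and counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

invited = {"cash": 500, "lyft_rides": 500, "results_guarantee": 500}
enrolled = {"cash": 140, "lyft_rides": 120, "results_guarantee": 95}

# Enrollment rate per incentive arm
for arm in invited:
    print(f"{arm}: {enrolled[arm] / invited[arm]:.1%} enrolled")

# Two-proportion z-test: does the cash arm outperform the results-guarantee arm?
stat, pval = proportions_ztest(
    count=[enrolled["cash"], enrolled["results_guarantee"]],
    nobs=[invited["cash"], invited["results_guarantee"]],
)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```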

As part of that tracking, companies need to gain insight into why some patients are reluctant to take part, and those reasons vary, said Van Epps, of the Michael J. Crescenz Veterans Affairs Medical Center in Philadelphia.

“Sometimes they’re not made aware of the clinical trials, they might not understand how clinical trials work, they might want more control over their medication regimen or how they’re going to proceed,” Van Epps said.

At other times, patients may be overwhelmed by the volume of paperwork required. Some paperwork is necessary for legal and ethical reasons. Patients must be informed about the trial’s purpose, how it might help them, and what harm might occur. However, it could be possible to simplify the informed consent paperwork to make it more understandable and less intimidating….(More)”

Fine-grained dengue forecasting using telephone triage services


Nabeel Abdur Rehman et al. in Science Advances: “Thousands of lives are lost every year in developing countries for failing to detect epidemics early because of the lack of real-time disease surveillance data. We present results from a large-scale deployment of a telephone triage service as a basis for dengue forecasting in Pakistan. Our system uses statistical analysis of dengue-related phone calls to accurately forecast suspected dengue cases 2 to 3 weeks ahead of time at a subcity level (correlation of up to 0.93). Our system has been operational at scale in Pakistan for the past 3 years and has received more than 300,000 phone calls. The predictions from our system are widely disseminated to public health officials and form a critical part of active government strategies for dengue containment. Our work is the first to demonstrate, with significant empirical evidence, that an accurate, location-specific disease forecasting system can be built using analysis of call volume data from a public health hotline….(More)”
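The core statistical idea, regressing suspected cases a few weeks ahead on today's call volume, can be sketched in a few lines. The data below are synthetic and the model deliberately naive; the deployed system is far more elaborate, so this only illustrates the lag structure.

```python
# Minimal sketch of lag-based forecasting: predict suspected dengue cases
# two weeks ahead from current dengue-related call volume.
# All data here are synthetic; the deployed system is far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
weeks = 120
calls = rng.poisson(50, weeks) + 30 * np.sin(np.arange(weeks) / 8) ** 2
cases = 0.4 * np.roll(calls, 2) + rng.normal(0, 5, weeks)  # cases trail calls by ~2 weeks

lag = 2
X = calls[:-lag]   # call volume now
y = cases[lag:]    # suspected cases two weeks later

# Ordinary least squares fit of future cases on current calls
slope, intercept = np.polyfit(X, y, 1)
forecast = slope * calls[-1] + intercept
corr = np.corrcoef(X, y)[0, 1]
print(f"2-week-ahead forecast: {forecast:.0f} cases (in-sample correlation {corr:.2f})")
```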

Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction


New book by Arvind Narayanan, Joseph Bonneau, Edward Felten, Andrew Miller & Steven Goldfeder: “Bitcoin and Cryptocurrency Technologies provides a comprehensive introduction to the revolutionary yet often misunderstood new technologies of digital currency. Whether you are a student, software developer, tech entrepreneur, or researcher in computer science, this authoritative and self-contained book tells you everything you need to know about the new global money for the Internet age.

How do Bitcoin and its block chain actually work? How secure are your bitcoins? How anonymous are their users? Can cryptocurrencies be regulated? These are some of the many questions this book answers. It begins by tracing the history and development of Bitcoin and cryptocurrencies, and then gives the conceptual and practical foundations you need to engineer secure software that interacts with the Bitcoin network as well as to integrate ideas from Bitcoin into your own projects. Topics include decentralization, mining, the politics of Bitcoin, altcoins and the cryptocurrency ecosystem, the future of Bitcoin, and more.

  • An essential introduction to the new technologies of digital currency
  • Covers the history and mechanics of Bitcoin and the block chain, security, decentralization, anonymity, politics and regulation, altcoins, and much more
  • Features an accompanying website that includes instructional videos for each chapter, homework problems, programming assignments, and lecture slides…(More)”.
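For a taste of the mechanics the book covers, here is a minimal, hypothetical sketch of the core block chain idea: each block commits to the hash of its predecessor, so tampering with any block breaks every later link. Real Bitcoin blocks carry far more structure, and mining adds a proof-of-work requirement on top.

```python
# Minimal sketch of hash-chained blocks (not the real Bitcoin block format).
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, data: str) -> dict:
    return {"prev_hash": prev_hash, "data": data}

# Build a tiny chain: each block commits to its predecessor's hash
genesis = make_block(prev_hash="0" * 64, data="genesis")
block1 = make_block(prev_hash=block_hash(genesis), data="Alice pays Bob 1 coin")
block2 = make_block(prev_hash=block_hash(block1), data="Bob pays Carol 1 coin")

# Tampering with an early block invalidates every later link
genesis["data"] = "genesis (tampered)"
print(block1["prev_hash"] == block_hash(genesis))  # False: tampering detected
```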

See also: Coursera course

The Future of Economics Uses the Science of Real-Life Social Networks


Paul Ormerod at Evonomics: “….The recognition of the fundamental importance of networks for outcomes in the modern social and economic worlds does not mean that governments are powerless. Instead it calls for smarter government rather than no government. It almost certainly means fewer state bureaucrats, working in an outdated intellectual framework, searching for the elusive silver bullet which is guaranteed to solve a problem.

The silver bullet of this approach is that there are no silver bullets. Instead, we need to rely much more on the processes of experimentation and discovery. A key influence on behaviour in many social and economic contexts is the prevailing social norm in the relevant network, which emerges from the interactions of the individuals who comprise the network. But there are no levers, no magic buttons to press, which will guarantee that social norms can be altered in ways which the policymaker desires. We can only discover what works by experiment.

This does not mean that we are operating in the dark, that the success or otherwise of a policy is merely a matter of chance. The more knowledge we have of how people are connected on the relevant network, of who might influence whom and when, the more chance a policy has of succeeding. Much of this knowledge is held at decentralised levels in tacit form, a form which is hard or even impossible to codify. But it is crucial to how most social and economic systems work in practice.

Our current political institutions are to a large extent based on the vision of society and the economy operating like machines, populated by economically rational agents. This view of the world leads to centralised bureaucracies and centralised decision-making. We live in a society where decisions are made through several layers of bureaucracy, in both the public and private sectors. On the whole, this leads to decisions that are insensitive to local (micro) conditions, and which are insensitive to society as it changes.

A lack of both resilience and robustness is a characteristic feature of such approaches to social and economic management. Structures, rules, regulations, incentives are put in place in the belief that a desired outcome can be achieved, that a potential crisis can be predicted and forestalled by such policies. As the recent financial crisis illustrates only too well, this view of the world is ill-suited to creating systems which are resilient when unexpected shocks occur, and which exhibit robustness in their ability to recover from the shock. The focus of policy needs to shift away from prediction and control. We can never predict the unpredictable. Instead, we need systems which exhibit resilience and robustness together with the ability to adapt and respond well to unpredictable future events….(More) Adapted from Complex New World: Translating new economic thinking into public policy, published by the Institute for Public Policy Research (IPPR).

Big health data: the need to earn public trust


Tjeerd-Pieter van Staa et al in the BMJ: “Better use of large scale health data has the potential to benefit patient care, public health, and research. The handling of such data, however, raises concerns about patient privacy, even when the risks of disclosure are extremely small.

The problems are illustrated by recent English initiatives trying to aggregate and improve the accessibility of routinely collected healthcare and related records, sometimes loosely referred to as “big data.” One such initiative, care.data, was set to link and provide access to health and social care information from different settings, including primary care, to facilitate the planning and provision of healthcare and to advance health science.1 Data were to be extracted from all primary care practices in England. A related initiative, the Clinical Practice Research Datalink (CPRD), evolved from the General Practice Research Database (GPRD). CPRD was intended to build on GPRD by linking patients’ primary care records to hospital data, around 50 disease registries and clinical audits, genetic information from UK Biobank, and even the loyalty cards of a large supermarket chain, creating an integrated data repository and linked services for all of England that could be sold to universities, drug companies, and non-healthcare industries. Care.data has now been abandoned and CPRD has stalled. The flawed implementation of care.data plus earlier examples of data mismanagement have made privacy issues a mainstream public concern. We look at what went wrong and how future initiatives might gain public support….(More)”

Crowdsourcing biomedical research: leveraging communities as innovation engines


Julio Saez-Rodriguez et al in Nature: “The generation of large-scale biomedical data is creating unprecedented opportunities for basic and translational science. Typically, the data producers perform initial analyses, but the most informative methods may well reside with other groups. Crowdsourcing the analysis of complex and massive data has emerged as a framework to find robust methodologies. When the crowdsourcing is done in the form of collaborative scientific competitions, known as Challenges, the validation of the methods is inherently addressed. Challenges also encourage open innovation, create collaborative communities to solve diverse and important biomedical problems, and foster the creation and dissemination of well-curated data repositories….(More)”

Understanding Institutions: The Science and Philosophy of Living Together


New book by Francesco Guala: “Understanding Institutions proposes a new unified theory of social institutions that combines the best insights of philosophers and social scientists who have written on this topic. Francesco Guala presents a theory that combines the features of three influential views of institutions: as equilibria of strategic games, as regulative rules, and as constitutive rules.

Guala explains key institutions like money, private property, and marriage, and develops a much-needed unification of equilibrium- and rules-based approaches. Although he uses game theory concepts, the theory is presented in a simple, clear style that is accessible to a wide audience of scholars working in different fields. Outlining and discussing various implications of the unified theory, Guala addresses venerable issues such as reflexivity, realism, Verstehen, and fallibilism in the social sciences. He also critically analyses the theory of “looping effects” and “interactive kinds” defended by Ian Hacking, and asks whether it is possible to draw a demarcation between social and natural science using the criteria of causal and ontological dependence. Focusing on current debates about the definition of marriage, Guala shows how these abstract philosophical issues have important practical and political consequences.

Moving beyond specific cases to general models and principles, Understanding Institutions offers new perspectives on what institutions are, how they work, and what they can do for us….(More)”

What Governments Can Learn From Airbnb And the Sharing Economy


In Fortune: “….Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces the regulation, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.

Consider a different problem — of collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, which is something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long-run. It asks a platform to leverage its data to ensure compliance with a set of laws in a manner geared towards delegating responsibility to the platform. You might say that the task in question here—computing tax owed, collecting, and remitting it—is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
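That per-booking computation really is simple. A hypothetical sketch (the tax rate and booking fields are invented for illustration; real rules vary by jurisdiction):

```python
# Hypothetical sketch of delegated occupancy-tax computation.
# The rate and booking fields are invented; real rules vary by jurisdiction.
OCCUPANCY_TAX_RATE = 0.14  # e.g., a 14% transient occupancy tax

def tax_owed(nightly_rate: float, nights: int, cleaning_fee: float = 0.0) -> float:
    """Tax on a single booking (assuming fees are taxable)."""
    taxable = nightly_rate * nights + cleaning_fee
    return round(taxable * OCCUPANCY_TAX_RATE, 2)

def remittance(bookings: list[dict]) -> float:
    """Total the platform would remit to the city for a filing period."""
    return round(sum(tax_owed(b["rate"], b["nights"], b.get("fee", 0.0))
                     for b in bookings), 2)

bookings = [{"rate": 120.0, "nights": 3, "fee": 40.0},
            {"rate": 85.0, "nights": 2}]
print(remittance(bookings))  # 79.8
```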

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers provide through their digital interactions. This generates insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. It saves billions of dollars in taxpayer and corporate funds by detecting and blocking fraudulent activity swiftly.
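A toy version of that pattern-spotting idea, using synthetic transactions and an off-the-shelf anomaly detector (real fraud systems rely on supervised learning over far richer features):

```python
# Toy anomaly detection over transaction amount and hour of day.
# Data are synthetic; real fraud systems use supervised models and richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Typical history for a card: modest amounts, mostly daytime hours
normal = np.column_stack([
    rng.gamma(2.0, 20.0, 500),        # amount in dollars
    rng.normal(14.0, 3.0, 500) % 24,  # hour of day
])
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# A $2,000 charge at 3 a.m. looks nothing like this card's history
suspicious = np.array([[2000.0, 3.0]])
print(model.predict(suspicious))  # [-1] means flagged as anomalous
```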

A more recent visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, and track the spread of flu outbreaks in real time, providing information that was well ahead of the information available using the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash to this on privacy grounds. Besides, this capability emerged organically from within Google partly because Google has one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems….(More)”
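One minimal form such a system could take is a regression test of whether a protected attribute still predicts acceptance after controlling for booking characteristics. The sketch below uses synthetic data and invented variable names; a real audit would demand far more careful design and controls.

```python
# Hypothetical sketch: does a protected attribute predict host acceptance
# after controlling for booking characteristics? Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
protected = rng.integers(0, 2, n)    # 1 = guest in protected group
nights = rng.integers(1, 8, n)       # length of requested stay
lead_days = rng.integers(0, 60, n)   # booking lead time

# Simulate acceptance with a built-in penalty on the protected group
logit = 0.8 - 0.5 * protected - 0.05 * nights + 0.01 * lead_days
accepted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([protected, nights, lead_days]))
fit = sm.Logit(accepted, X).fit(disp=0)
# A significantly negative coefficient on `protected` is the red flag
print(f"protected coef = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.4f}")
```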

There aren’t any rules on how social scientists use private data. Here’s why we need them.


At SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Studies over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. The new universe of private data is reshaping social science research in some ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

Democracy Does Not Cause Growth: The Importance of Endogeneity Arguments


IADB Working Paper: “This article challenges recent findings that democracy has sizable effects on economic growth. As extensive political science research indicates that economic turmoil is responsible for causing or facilitating many democratic transitions, the paper focuses on this endogeneity concern. Using a worldwide survey of 165 country-specific democracy experts conducted for this study, the paper separates democratic transitions into those occurring for reasons related to economic turmoil, here called endogenous, and those grounded in reasons more exogenous to economic growth. The behavior of economic growth following these more exogenous democratizations strongly indicates that democracy does not cause growth. Consequently, the common positive association between democracy and economic growth is driven by endogenous democratization episodes (i.e., due to faulty identification)….(More)”
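The identification logic can be illustrated with a toy comparison (all numbers below are synthetic, not from the paper): if a post-transition growth premium appears only for turmoil-driven democratizations, the democracy-growth association reflects recovery from crisis rather than a causal effect of democracy.

```python
# Toy illustration of the endogenous vs. exogenous comparison.
# Growth figures are synthetic, not taken from the paper.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 5-year average post-transition growth rates (percent)
endogenous = rng.normal(2.5, 1.0, 40)  # turmoil-driven: rebound inflates growth
exogenous = rng.normal(1.0, 1.0, 35)   # cleaner test of democracy's own effect

for name, g in [("endogenous", endogenous), ("exogenous", exogenous)]:
    print(f"{name}: mean post-transition growth {g.mean():.2f}%")
# If only the endogenous group shows a premium, the democracy-growth
# association is driven by selection (recovery from crisis), not causation.
```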