Democracy at Work: Moving Beyond Elections to Improve Well-Being


Michael Touchton, Natasha Borges Sugiyama and Brian Wampler in the American Political Science Review: “How does democracy work to improve well-being? In this article, we disentangle the component parts of democratic practice—elections, civic participation, expansion of social provisioning, local administrative capacity—to identify their relationship with well-being. We draw from the citizenship debates to argue that democratic practices allow citizens to gain access to a wide range of rights, which then serve as the foundation for improving social well-being. Our analysis of an original dataset covering over 5,550 Brazilian municipalities from 2006 to 2013 demonstrates that competitive elections alone do not explain variation in infant mortality rates, one outcome associated with well-being. We move beyond elections to show how participatory institutions, social programs, and local state capacity can interact to buttress one another and reduce infant mortality rates. It is important to note that these relationships are independent of local economic growth, which also influences infant mortality. The result of our thorough analysis offers a new understanding of how different aspects of democracy work together to improve a key feature of human development….(More)”.
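The analysis described above hinges on interaction effects rather than on any single democratic ingredient. As a hedged sketch only, not the authors' actual model, the snippet below shows one way such a test might look; the variable names and the synthetic data are assumptions made solely so the example runs.

```python
# Hedged sketch of an interaction-effects test like the one the abstract describes.
# Not the authors' specification; all variable names and data here are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
panel = pd.DataFrame({
    "participatory_inst": rng.integers(0, 2, n),      # presence of participatory institutions
    "social_programs": rng.random(n),                  # coverage of social programs
    "state_capacity": rng.random(n),                   # local administrative capacity
    "competitive_elections": rng.integers(0, 2, n),
    "gdp_growth": rng.normal(0.02, 0.03, n),
})
# Synthetic outcome in which only the joint combination (and growth) matters
panel["infant_mortality"] = (
    15
    - 4 * panel.participatory_inst * panel.social_programs * panel.state_capacity
    - 10 * panel.gdp_growth
    + rng.normal(0, 2, n)
)

model = smf.ols(
    "infant_mortality ~ participatory_inst * social_programs * state_capacity"
    " + competitive_elections + gdp_growth",
    data=panel,
).fit()
print(model.params.filter(like=":"))  # interaction terms carry the joint "buttressing" effect
```

In a setup like this, the interaction coefficients, not the elections term, would carry the mutually reinforcing effect the authors describe.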

Handbook of Big Data Technologies


Handbook by Albert Y. Zomaya and Sherif Sakr: “…offers comprehensive coverage of recent advancements in Big Data technologies and related paradigms. Chapters are authored by leading international experts in the field, and have been reviewed and revised for maximum reader value. The volume consists of twenty-five chapters organized into four main parts. Part One covers the fundamental concepts of Big Data technologies including data curation mechanisms, data models, storage models, programming models and programming platforms. It also dives into the details of implementing Big SQL query engines and big stream processing systems. Part Two focuses on the semantic aspects of Big Data management including data integration and exploratory ad hoc analysis in addition to structured querying and pattern matching techniques. Part Three presents a comprehensive overview of large-scale graph processing. It covers the most recent research in large-scale graph processing platforms, introducing several scalable graph querying and mining mechanisms in domains such as social networks. Part Four details novel applications that have been made possible by the rapid emergence of Big Data technologies such as the Internet of Things (IoT), Cognitive Computing and SCADA Systems. All parts of the book discuss open research problems, including potential opportunities, that have arisen from the rapid progress of Big Data technologies and the associated increasing requirements of application domains.
Designed for researchers, IT professionals and graduate students, this book is a timely contribution to the growing Big Data field. Big Data has been recognized as one of the leading emerging technologies that will have a major impact on various fields of science and aspects of human society over the coming decades. Therefore, the content in this book will be an essential tool to help readers understand the development and future of the field….(More)”

Crowdsourcing Expertise


Simons Foundation: “Ever wish there was a quick, easy way to connect your research to the public?

By hosting a Wikipedia ‘edit-a-thon’ at a science conference, you can instantly share your research knowledge with millions while improving the science content on the most heavily trafficked and broadly accessible resource in the world. In 2016, in partnership with the Wiki Education Foundation, we helped launch the Wikipedia Year of Science, an ambitious initiative designed to better connect the work of scientists and students to the public. Here, we share some of what we learned.

The Simons Foundation — through its Science Sandbox initiative, dedicated to public engagement — co-hosted a series of Wikipedia edit-a-thons throughout 2016 at almost every major science conference, in collaboration with the world’s leading scientific societies and associations.

At our edit-a-thons, we leveraged the collective brainpower of scientists, giving them basic training on Wikipedia guidelines and facilitating marathon editing sessions — powered by free pizza, coffee and sometimes beer — during which they made copious contributions within their respective areas of expertise.

These efforts, combined with the Wiki Education Foundation’s powerful classroom model, have had a clear impact. To date, we’ve reached over 150 universities including more than 6,000 students and scientists. As for output, 6,306 articles have been created or edited, garnering more than 304 million views; over 2,000 scientific images have been donated; and countless new scientist-editors have been minted, many of whom will likely continue to update Wikipedia content. The most common response we got from scientists and conference organizers about the edit-a-thons was: “Can we do that again next year?”

That’s where this guide comes in.

Through collaboration, input from Wikipedians and scientists, and more than a little trial and error, we arrived at a model that can help you organize your own edit-a-thons. This informal guide captures our main takeaways and lessons learned….Our hope is that edit-a-thons will become another integral part of science conferences, just like tweetups, communication workshops and other recent outreach initiatives. This would ensure that the content of the public’s most common gateway to science research will continually improve in quality and scope.

Download: “Crowdsourcing Expertise: A working guide for organizing Wikipedia edit-a-thons at science conferences”

From big data to smart data: FDA’s INFORMED initiative


Sean Khozin, Geoffrey Kim & Richard Pazdur in Nature: “….Recent advances in our understanding of disease mechanisms have led to the development of new drugs that are enabling precision medicine. For example, the co-development of kinase inhibitors that target ‘driver mutations’ in metastatic non-small-cell lung cancer (NSCLC) with companion diagnostics has led to substantial improvements in the treatment of some patients. However, growing evidence suggests that most patients with metastatic NSCLC and other advanced cancers may not have tumours with single driver mutations. Furthermore, the generation of clinical evidence in genomically diverse and geographically dispersed groups of patients using traditional trial designs and multiple competing therapies is becoming more costly and challenging.

Strategies aimed at creating new efficiencies in clinical evidence generation and extending the benefits of precision medicine to larger groups of patients are driving a transformation from a reductionist approach to drug development (for example, a single drug targeting a driver mutation and traditional clinical trials) to a holistic approach (for example, combination therapies targeting complex multiomic signatures and real-world evidence). This transition is largely fuelled by the rapid expansion in the four dimensions of biomedical big data, which has created a need for greater organizational and technical capabilities (Fig. 1). Appropriate management and analysis of such data requires specialized tools and expertise in health information technology, data science and high-performance computing. For example, efforts to generate clinical evidence using real-world data are being limited by challenges such as capturing clinically relevant variables from vast volumes of unstructured content (such as physician notes) in electronic health records and organizing various structured data elements that are primarily designed to support billing rather than clinical research. So, new standards and quality-control mechanisms are needed to ensure the validity of the design and analysis of studies based on electronic health records.

Figure 1: Conceptual map of technical and organizational capacity for biomedical big data.

Big data can be defined as having four dimensions: volume (data size), variety (data type), veracity (data noise and uncertainty) and velocity (data flow and processing). Currently, FDA approval decisions are generally based on data of limited variety, mainly from clinical trials and preclinical studies (1) that are mostly structured (2), in data sets usually no more than a few gigabytes in size (3), that are processed intermittently as part of regulatory submissions (4). The expansion of big data in the four dimensions (grey lines) calls for increasing organizational and technical capacity. This could transform big data into smart data by enabling a holistic approach to personalization of therapies that takes patient, disease and environmental characteristics into account….(More)”
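One concrete bottleneck the authors mention is capturing clinically relevant variables from unstructured physician notes. The following is a minimal sketch of that extraction step, assuming a made-up note and a single regex-detectable variable; real-world pipelines rely on NLP well beyond a regular expression.

```python
# Minimal illustration (not FDA tooling) of turning free text into a structured variable.
# The note text and the hypothetical ECOG performance-status target are assumptions.
import re

NOTE = "Pt with metastatic NSCLC, EGFR exon 19 del. ECOG performance status 1. Started erlotinib."

def extract_ecog(note: str):
    """Return the ECOG performance status (0-4) if the note states one, else None."""
    match = re.search(r"ECOG (?:performance status|PS)\s*([0-4])", note, flags=re.IGNORECASE)
    return int(match.group(1)) if match else None

print(extract_ecog(NOTE))  # -> 1
```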

Curating Research Data: Practical Strategies for Your Digital Repository


Two books edited by Lisa R. Johnston: “Data are becoming the proverbial coin of the digital realm: a research commodity that might purchase reputation credit in a disciplinary culture of data sharing, or buy transparency when faced with funding agency mandates or publisher scrutiny. Unlike most monetary systems, however, digital data can flow in all too great an abundance. Not only does this currency actually “grow” on trees, but it comes from animals, books, thoughts, and each of us! And that is what makes data curation so essential. The abundance of digital research data challenges library and information science professionals to harness this flow of information streaming from research discovery and scholarly pursuit and preserve the unique evidence for future use.

In two volumes—Practical Strategies for Your Digital Repository and A Handbook of Current Practice—Curating Research Data presents those tasked with long-term stewardship of digital research data a blueprint for how to curate those data for eventual reuse. Volume One explores the concepts of research data and the types and drivers for establishing digital data repositories. Volume Two guides you across the data lifecycle through the practical strategies and techniques for curating research data in a digital repository setting. Data curators, archivists, research data management specialists, subject librarians, institutional repository managers, and digital library staff will benefit from these current and practical approaches to data curation.

Digital data is ubiquitous and rapidly reshaping how scholarship progresses now and into the future. The information expertise of librarians can help ensure the resiliency of digital data, and the information it represents, by addressing how the meaning, integrity, and provenance of digital data generated by researchers today will be captured and conveyed to future researchers….(More)”
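As a small illustration of the stewardship described above, the sketch below records fixity and minimal provenance metadata for a deposited file so future users can verify its integrity. It is not an example drawn from the books; the file, field names, and values are assumptions.

```python
# Illustrative curation step: a fixity hash plus minimal provenance for a deposit.
# All names and values here are invented for the sketch.
import datetime
import hashlib
import json
import pathlib

# Stand-in data file so the sketch is self-contained and runnable
pathlib.Path("survey_responses.csv").write_text("id,answer\n1,yes\n2,no\n")

def describe_deposit(path: str, creator: str, description: str) -> dict:
    """Build a minimal curation record: who, what, when, and a SHA-256 fixity hash."""
    data = pathlib.Path(path).read_bytes()
    return {
        "file": path,
        "creator": creator,
        "description": description,
        "sha256": hashlib.sha256(data).hexdigest(),      # fixity: lets future users detect corruption
        "deposited": datetime.date.today().isoformat(),  # provenance: when it entered the repository
    }

record = describe_deposit("survey_responses.csv", "J. Researcher", "Anonymized 2016 field survey")
print(json.dumps(record, indent=2))
```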

Big and open data are prompting a reform of scientific governance


Sabina Leonelli in Times Higher Education: “Big data are widely seen as a game-changer in scientific research, promising new and efficient ways to produce knowledge. And yet, large and diverse data collections are nothing new – they have long existed in fields such as meteorology, astronomy and natural history.

What, then, is all the fuss about? In my recent book, I argue that the true revolution is in the status accorded to data as research outputs in their own right. Along with this has come an emphasis on open data as crucial to excellent and reliable science.

Previously – ever since scientific journals emerged in the 17th century – data were private tools, owned by the scientists who produced them and scrutinised by a small circle of experts. Their usefulness lay in their function as evidence for a given hypothesis. This perception has shifted dramatically in the past decade. Increasingly, data are research components that can and should be made publicly available and usable.

Rather than the birth of a data-driven epistemology, we are thus witnessing the rise of a data-centric approach in which efforts to mobilise, integrate and visualise data become contributions to discovery, not just a by-product of hypothesis testing.

The rise of data-centrism highlights the challenges involved in gathering, classifying and interpreting data, and the concepts, technologies and social structures that surround these processes. This has implications for how research is conducted, organised, governed and assessed.

Data-centric science requires shifts in the rewards and incentives provided to those who produce, curate and analyse data. This challenges established hierarchies: laboratory technicians, librarians and database managers turn out to have crucial skills, subverting the common view of their jobs as marginal to knowledge production. Ideas of research excellence are also being challenged. Data management is increasingly recognised as crucial to the sustainability and impact of research, and national funders are moving away from citation counts and impact factors in evaluations.

New uses of data are forcing publishers to re-assess their business models and dissemination procedures, and research institutions are struggling to adapt their management and administration.

Data-centric science is emerging in concert with calls for increased openness in research….(More)”

Data in public health


Jeremy Berg in Science: “In 1854, physician John Snow helped curtail a cholera outbreak in a London neighborhood by mapping cases and identifying a central public water pump as the potential source. This event is considered by many to represent the founding of modern epidemiology. Data and analysis play an increasingly important role in public health today. This can be illustrated by examining the rise in the prevalence of autism spectrum disorders (ASDs), where data from varied sources highlight potential factors while ruling out others, such as childhood vaccines, facilitating wise policy choices…. A collaboration between the research community, a patient advocacy group, and a technology company (www.mss.ng) seeks to sequence the genomes of 10,000 well-phenotyped individuals from families affected by ASD, making the data freely available to researchers. Studies to date have confirmed that the genetics of autism are extremely complicated—a small number of genomic variations are closely associated with ASD, but many other variations have much lower predictive power. More than half of siblings, each of whom has ASD, have different ASD-associated variations. Future studies, facilitated by an open data approach, will no doubt help advance our understanding of this complex disorder….

A new data collection strategy was reported in 2013 to examine contagious diseases across the United States, including the impact of vaccines. Researchers digitized all available city and state notifiable disease data from 1888 to 2011, mostly from hard-copy sources. Information corresponding to nearly 88 million cases has been stored in a database that is open to interested parties without restriction (www.tycho.pitt.edu). Analyses of these data revealed that vaccine development and systematic vaccination programs have led to dramatic reductions in the number of cases. Overall, it is estimated that ∼100 million cases of serious childhood diseases have been prevented through these vaccination programs.
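As a hedged sketch of the kind of before-and-after comparison such open case-count data make possible: the numbers below are rough illustrative stand-ins, not the Project Tycho data themselves, and 1963 (the year the first U.S. measles vaccine was licensed) is used as the cut point.

```python
# Toy before/after comparison in the spirit of the analyses described above.
# The annual counts are rough stand-ins for U.S. reported measles cases, not Tycho data.
import pandas as pd

annual = pd.Series(
    {1950: 320_000, 1955: 555_000, 1960: 442_000, 1963: 385_000,
     1968: 22_000, 1975: 24_000, 1985: 2_800, 2000: 86},
    name="reported_measles_cases",
)

VACCINE_YEAR = 1963                                   # U.S. measles vaccine licensure
pre = annual[annual.index < VACCINE_YEAR].mean()
post = annual[annual.index >= VACCINE_YEAR + 5].mean()  # allow a roll-out window

print(f"mean annual cases before: {pre:,.0f}  after: {post:,.0f}  reduction: {1 - post/pre:.0%}")
```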

These examples illustrate how data collection and sharing through publication and other innovative means can drive research progress on major public health challenges. Such evidence, particularly on large populations, can help researchers and policy-makers move beyond anecdotes—which can be personally compelling, but often misleading—for the good of individuals and society….(More)”

How a Political Scientist Knows What Our Enemies Will Do (Often Before They Do)


Political scientists have now added rigorous mathematical techniques to their social-science toolbox, creating methods to explain—and even predict—the actions of adversaries, thus making society safer as well as smarter. Such techniques allowed the U.S. government to predict the fall of President Ferdinand Marcos of the Philippines in 1986, helping hatch a strategy to ease him out of office and avoid political chaos in that nation. And at Los Angeles International Airport a computer system predicts the tactical calculations of criminals and terrorists, making sure that patrols and checkpoints are placed in ways that adversaries can’t exploit.

The advances in solving the puzzle of human behavior represent a dramatic turnaround for the field of political science, notes Bruce Bueno de Mesquita, a professor of politics at New York University. “In the mid-1960s, I took a statistics course,” he recalls, “and my undergraduate advisor was appalled. He told me that I was wasting my time.” It took researchers many years of patient work, putting piece after piece of the puzzle of human behavior together, to arrive at today’s new knowledge. The result has been dramatic progress in the nation’s ability to protect its interests at home and abroad.

Social scientists have not abandoned the proven tools that Bueno de Mesquita and generations of other scholars acquired as they mastered their discipline. Rather, adding the rigor of mathematical analysis has allowed them to solve more of the puzzle. Mathematical models of human behavior let social scientists assemble a picture of the previously unnoticed forces that drive behavior—forces common to all situations, operating below the emotions, drama, and history that make each conflict unique….(More)”
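To make the patrol-randomization idea concrete, here is a toy zero-sum version, not the airport's deployed system: a small game solved as a linear program so that no checkpoint placement is predictable enough for an adversary to exploit. The target values and penalty are invented for illustration.

```python
# Toy security game: choose a randomized patrol mix that minimizes the best
# payoff an adversary can guarantee. All payoffs below are made-up numbers.
import numpy as np
from scipy.optimize import linprog

values = np.array([10.0, 6.0, 4.0])   # hypothetical attacker payoff if a target is unguarded
penalty = 5.0                          # hypothetical attacker loss if caught at a patrolled target
n = len(values)

# Attacker's expected payoff matrix: rows = patrolled target, cols = attacked target
A = np.tile(values, (n, 1))
np.fill_diagonal(A, -penalty)

# LP variables: patrol probabilities x_1..x_n plus the attacker's best payoff u.
# Minimize u subject to A^T x <= u (no attack does better than u) and sum(x) = 1.
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.hstack([A.T, -np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
patrol_probs, attacker_value = res.x[:n], res.x[n]
print("patrol mix:", patrol_probs.round(3), "| attacker's best expected payoff:", round(attacker_value, 3))
```

The key design point is that the output is a probability distribution over patrols rather than a fixed schedule, which is what keeps an observing adversary from exploiting a pattern.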

Crowdsourced Science: Sociotechnical Epistemology in the e-Research Paradigm


Paper by David Watson and Luciano Floridi: “Recent years have seen a surge in online collaboration between experts and amateurs on scientific research. In this article, we analyse the epistemological implications of these crowdsourced projects, with a focus on Zooniverse, the world’s largest citizen science web portal. We use quantitative methods to evaluate the platform’s success in producing large volumes of observation statements and high impact scientific discoveries relative to more conventional means of data processing. Through empirical evidence, Bayesian reasoning, and conceptual analysis, we show how information and communication technologies enhance the reliability, scalability, and connectivity of crowdsourced e-research, giving online citizen science projects powerful epistemic advantages over more traditional modes of scientific investigation. These results highlight the essential role played by technologically mediated social interaction in contemporary knowledge production. We conclude by calling for an explicitly sociotechnical turn in the philosophy of science that combines insights from statistics and logic to analyse the latest developments in scientific research….(More)”
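A toy calculation, not drawn from the paper, of why aggregating many amateur classifications can be epistemically powerful: under a simple independence assumption, Bayes' rule shows that even modestly accurate volunteers drive the posterior toward certainty as votes accumulate.

```python
# Toy Bayesian aggregation of volunteer classifications, assuming independent
# volunteers who are each correct with the same probability (an idealization).
from math import comb

def posterior(prior, accuracy, yes_votes, total_votes):
    """P(object is positive | vote counts) under the independent-volunteer model."""
    like_pos = comb(total_votes, yes_votes) * accuracy**yes_votes * (1 - accuracy)**(total_votes - yes_votes)
    like_neg = comb(total_votes, yes_votes) * (1 - accuracy)**yes_votes * accuracy**(total_votes - yes_votes)
    return prior * like_pos / (prior * like_pos + (1 - prior) * like_neg)

# Ten volunteers at 70% accuracy, eight voting "yes", push a 50% prior to about 0.99
print(posterior(0.5, 0.7, yes_votes=8, total_votes=10))
```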

Big data may be reinforcing racial bias in the criminal justice system


Laurel Eckhouse at the Washington Post: “Big data has expanded to the criminal justice system. In Los Angeles, police use computerized “predictive policing” to anticipate crimes and allocate officers. In Fort Lauderdale, Fla., machine-learning algorithms are used to set bond amounts. In states across the country, data-driven estimates of the risk of recidivism are being used to set jail sentences.

Advocates say these data-driven tools remove human bias from the system, making it more fair as well as more effective. But even as they have become widespread, we have little information about exactly how they work. Few of the organizations producing them have released the data and algorithms they use to determine risk.

We need to know more, because it’s clear that such systems face a fundamental problem: The data they rely on are collected by a criminal justice system in which race makes a big difference in the probability of arrest — even for people who behave identically. Inputs derived from biased policing will inevitably make black and Latino defendants look riskier than white defendants to a computer. As a result, data-driven decision-making risks exacerbating, rather than eliminating, racial bias in criminal justice.

Consider a judge tasked with making a decision about bail for two defendants, one black and one white. Our two defendants have behaved in exactly the same way prior to their arrest: They used drugs in the same amount, have committed the same traffic offenses, owned similar homes and took their two children to the same school every morning. But the criminal justice algorithms do not rely on all of a defendant’s prior actions to reach a bail assessment — just those actions for which he or she has been previously arrested and convicted. Because of racial biases in arrest and conviction rates, the black defendant is more likely to have a prior conviction than the white one, despite identical conduct. A risk assessment relying on racially compromised criminal-history data will unfairly rate the black defendant as riskier than the white defendant.

To make matters worse, risk-assessment tools typically evaluate their success in predicting a defendant’s dangerousness on rearrests — not on defendants’ overall behavior after release. If our two defendants return to the same neighborhood and continue their identical lives, the black defendant is more likely to be arrested. Thus, the tool will falsely appear to predict dangerousness effectively, because the entire process is circular: Racial disparities in arrests bias both the predictions and the justification for those predictions.
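The circularity described above is easy to reproduce in a toy simulation (all rates below are invented for illustration): two groups behave identically, one is arrested more often, and a score built on arrest records both rates that group as riskier and appears validated when checked against rearrest.

```python
# Toy simulation of the feedback loop described in the article. The offending and
# arrest rates are made-up numbers; the point is the structure, not the magnitudes.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                     # two groups with identical underlying behavior
offends = rng.random(n) < 0.30                    # same true offending rate in both groups
arrest_rate = np.where(group == 1, 0.60, 0.30)    # group 1 is arrested twice as often when offending
prior_arrest = offends & (rng.random(n) < arrest_rate)

# "Risk score" stand-in: observed prior-arrest rate per group
for g in (0, 1):
    print(f"group {g}: prior-arrest rate {prior_arrest[group == g].mean():.2f}")

# Validation against rearrest reproduces the same disparity, so the biased score looks predictive
reoffends = rng.random(n) < 0.30
rearrest = reoffends & (rng.random(n) < arrest_rate)
for g in (0, 1):
    print(f"group {g}: rearrest rate {rearrest[group == g].mean():.2f}")
```

With these made-up rates the score sees roughly twice the "risk" (about 0.18 versus 0.09) in the more heavily policed group, and the rearrest check reproduces the same gap even though true behavior is identical.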

We know that a black person and a white person are not equally likely to be stopped by police: Evidence on New York’s stop-and-frisk policy, investigatory stops, vehicle searches and drug arrests show that black and Latino civilians are more likely to be stopped, searched and arrested than whites. In 2012, a white attorney spent days trying to get himself arrested in Brooklyn for carrying graffiti stencils and spray paint, a Class B misdemeanor. Even when police saw him tagging the City Hall gateposts, they sped past him, ignoring a crime for which 3,598 people were arrested by the New York Police Department the following year.

Before adopting risk-assessment tools in the judicial decision-making process, jurisdictions should demand that any tool being implemented undergo a thorough and independent peer-review process. We need more transparency and better data to learn whether these risk assessments have disparate impacts on defendants of different races. Foundations and organizations developing risk-assessment tools should be willing to release the data used to build these tools to researchers to evaluate their techniques for internal racial bias and problems of statistical interpretation. Even better, with multiple sources of data, researchers could identify biases in data generated by the criminal justice system before the data is used to make decisions about liberty. Unfortunately, producers of risk-assessment tools — even nonprofit organizations — have not voluntarily released anonymized data and computational details to other researchers, as is now standard in quantitative social science research….(More)”.