Kyle Jahner at the Army Times: “You wouldn’t think that moving the salad bar to the front of the chow hall and moving the dessert bar back 10 feet would make the Army healthier. But at Fort Campbell, Kentucky, those changes bumped salad sales up about 24 percent and pushed dessert sales down 10 percent, a nudge toward the goals of soldiers eating, exercising and sleeping healthier.
That’s just an example of the kind of change Army Medical Command hopes to inspire and successes it hopes to share across installations through its first annual Health of the Force report.
“I’m pretty proud of what we’ve been able to accomplish with this inaugural report,” said Col. Deydre Teyhen during a recent roundtable at Defense Health Agency headquarters in Falls Church, Virginia. “I think we can’t get to a better state of health unless we inform people of what’s working out there in the field.”
The Army hopes to shrink the 17 percent of soldiers who are not medically deployable within 72 hours. …The overarching philosophy of these recent MEDCOM efforts is to improve overall health rather than play whack-a-mole with problems as they arise. Teyhen pointed out that the average soldier spends about 100 minutes per year as a patient in a health care facility; the trick is to influence soldier health choices during the other 525,500 minutes of the year, extending influence outside of brick-and-mortar health facilities. It dovetails with the Army’s Performance Triad, the plan to improve readiness through sleep, nutrition and exercise….(More)”
Giulio Quaggiotto at Nesta: “Over the past decade we’ve seen an explosion in the amount of data we create, with more being captured about our lives than ever before. As an industry, the public sector creates an enormous amount of information – from census data to tax data to health data. When it comes to using the data, however, despite many initiatives trying to promote open and big data for public policy as well as evidence-based policymaking, we feel there is still a long way to go.
Why is that? Data initiatives are often created under the assumption that if data is available, people (whether citizens or governments) will use it. But this hasn’t necessarily proven to be the case, and this approach neglects analysis of power and an understanding of the political dynamics at play around data (particularly when data is seen as an output rather than input).
Many data activities are also informed by the ‘extractive industry’ paradigm: citizens and frontline workers are seen as passive ‘data producers’ who hand over their information for it to be analysed and mined behind closed doors by ‘the experts’.
Given the budget constraints facing many local and central governments, even well-intentioned initiatives often take an incremental, passive transparency approach (i.e. let’s open the data first, then see what happens), or they adopt a ‘supply/demand’ metaphor for data provision and usage…
As a response to these issues, this blog series will explore the hypothesis that putting the question of citizen and government agency – rather than openness, volume or availability – at the centre of data initiatives has the potential to unleash greater, potentially more disruptive innovation and to focus efforts (ultimately leading to cost savings).
Our argument will be that data innovation initiatives should be informed by the principles that:
People closer to the problem are the best positioned to provide additional context to the data and potentially act on solutions (hence the importance of “thick data”).
Citizens are active agents rather than passive providers of ‘digital traces’.
Governments are both users and providers of data.
We should ask at every step of the way how can we empower communities and frontline workers to take better decisions over time, and how can we use data to enhance the decision making of every actor in the system (from government to the private sector, from private citizens to social enterprises) in their role of changing things for the better… (More)
Jesse Dunietz at Nautilus: “…A feverish push for “big data” analysis has swept through biology, linguistics, finance, and every field in between. Although no one can quite agree how to define it, the general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry. The data are often generated by millions of real-world user actions, such as tweets or credit-card purchases, and they can take thousands of computers to collect, store, and analyze. To many companies and researchers, though, the investment is worth it because the patterns can unlock information about anything from genetic disorders to tomorrow’s stock prices.
But there’s a problem: It’s tempting to think that with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. But the bigness of the data can imbue the results with a false sense of certainty. Many of them are probably bogus—and the reasons why should give us pause about any research that blindly trusts big data.
In the case of language and culture, big data showed up in a big way in 2011, when Google released its Ngrams tool. Announced with fanfare in the journal Science, Google Ngrams allowed users to search for short phrases in Google’s database of scanned books—about 4 percent of all books ever published!—and see how the frequency of those phrases has shifted over time. The paper’s authors heralded the advent of “culturomics,” the study of culture based on reams of data and, since then, Google Ngrams has been, well, largely an endless source of entertainment—but also a goldmine for linguists, psychologists, and sociologists. They’ve scoured its millions of books to show that, for instance, yes, Americans are becoming more individualistic; that we’re “forgetting our past faster with each passing year”; and that moral ideals are disappearing from our cultural consciousness.
WE’RE LOSING HOPE: An Ngrams chart for the word “hope,” one of many intriguing plots found by xkcd author Randall Munroe. If Ngrams really does reflect our culture, we may be headed for a dark place.
The problems start with the way the Ngrams corpus was constructed. In a study published last October, three University of Vermont researchers pointed out that, in general, Google Books includes one copy of every book. This makes perfect sense for its original purpose: to expose the contents of those books to Google’s powerful search technology. From the angle of sociological research, though, it makes the corpus dangerously skewed….
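The skew is easy to see with a toy sketch (all numbers below are invented for illustration): because each title counts exactly once, a word's corpus-wide frequency can diverge sharply from its frequency in what people actually read.

```python
# Toy illustration (all numbers invented): an Ngrams-style corpus keeps one
# copy of every book, so a bestseller and an obscure monograph contribute
# equally to word frequencies, regardless of how widely each was read.
books = [
    # ({word counts in the book}, copies sold)
    ({"hope": 50, "total": 10_000}, 1_000_000),  # popular novel
    ({"hope": 2,  "total": 10_000}, 200),        # obscure monograph
]

def freq_one_copy(books, word):
    """Frequency as a one-copy-per-title corpus sees it."""
    hits = sum(counts[word] for counts, _ in books)
    total = sum(counts["total"] for counts, _ in books)
    return hits / total

def freq_readership_weighted(books, word):
    """Frequency weighted by copies sold, a crude proxy for cultural exposure."""
    hits = sum(counts[word] * sold for counts, sold in books)
    total = sum(counts["total"] * sold for counts, sold in books)
    return hits / total

print(freq_one_copy(books, "hope"))            # 0.0026
print(freq_readership_weighted(books, "hope")) # ~0.005, nearly double
```

The gap between the two numbers is the kind of distortion the Vermont researchers warn about: a conclusion about "culture" drawn from the first figure silently assumes every book mattered equally.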
Even once you get past the data sources, there’s still the thorny issue of interpretation. Sure, words like “character” and “dignity” might decline over the decades. But does that mean that people care about morality less? Not so fast, cautions Ted Underwood, an English professor at the University of Illinois, Urbana-Champaign. Conceptions of morality at the turn of the last century likely differed sharply from ours, he argues, and “dignity” might have been popular for non-moral reasons. So any conclusions we draw by projecting current associations backward are suspect.
Of course, none of this is news to statisticians and linguists. Data and interpretation are their bread and butter. What’s different about Google Ngrams, though, is the temptation to let the sheer volume of data blind us to the ways we can be misled.
This temptation isn’t unique to Ngrams studies; similar errors undermine all sorts of big data projects. Consider, for instance, the case of Google Flu Trends (GFT). Released in 2008, GFT would count words like “fever” and “cough” in millions of Google search queries, using them to “nowcast” how many people had the flu. With those estimates, public health officials could act two weeks before the Centers for Disease Control could calculate the true numbers from doctors’ reports.
When big data isn’t seen as a panacea, it can be transformative.
Initially, GFT was claimed to be 97 percent accurate. But as a study out of Northeastern University documents, that accuracy was a fluke. First, GFT completely missed the “swine flu” pandemic in the spring and summer of 2009. (It turned out that GFT was largely predicting winter.) Then, the system began to overestimate flu cases. In fact, it overshot the peak 2013 numbers by a whopping 140 percent. Eventually, Google just retired the program altogether.
So what went wrong? As with Ngrams, people didn’t carefully consider the sources and interpretation of their data. The data source, Google searches, was not a static beast. When Google started auto-completing queries, users started just accepting the suggested keywords, distorting the searches GFT saw. On the interpretation side, GFT’s engineers initially let GFT take the data at face value; almost any search term was treated as a potential flu indicator. With millions of search terms, GFT was practically guaranteed to over-interpret seasonal words like “snow” as evidence of flu.
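The over-interpretation failure is easy to reproduce with synthetic data. The sketch below (my own illustration, not GFT's actual code) shows why a purely seasonal term like "snow" sails through a naive correlation screen: both series peak in winter, so they correlate strongly even though snow searches carry no information about flu.

```python
import math

# Two years of synthetic weekly data. Flu activity and "snow" searches are
# both rectified winter-peaking curves, offset by a couple of weeks.
weeks = range(104)
flu  = [max(0.0, math.cos(2 * math.pi * w / 52)) + 0.1 for w in weeks]
snow = [max(0.0, math.cos(2 * math.pi * (w - 2) / 52)) for w in weeks]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson(flu, snow))  # roughly 0.95: "snow" passes a naive flu screen
```

Run this screen over millions of candidate terms and some seasonal impostors are guaranteed to make the cut, which is exactly the trap the original GFT fell into.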
But when big data isn’t seen as a panacea, it can be transformative. Several groups, like Columbia University researcher Jeffrey Shaman’s, for example, have outperformed the flu predictions of both the CDC and GFT by using the former to compensate for the skew of the latter. “Shaman’s team tested their model against actual flu activity that had already occurred during the season,” according to the CDC. By taking the immediate past into consideration, Shaman and his team fine-tuned their mathematical model to better predict the future. All it takes is for teams to critically assess their assumptions about their data….(More)
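One simple way to use lagging ground truth to correct a skewed nowcast, a hedged sketch of the general idea rather than Shaman's actual model, is to fit a linear recalibration on the window where the nowcast and the official numbers overlap, then apply it to the newest estimate:

```python
# Hypothetical numbers: GFT-style nowcasts that systematically overshoot,
# alongside the true flu activity the CDC reports two weeks later.
recent_nowcasts = [24.0, 36.0, 48.0, 60.0]
recent_cdc      = [10.0, 15.0, 20.0, 25.0]

# Least-squares fit of cdc ~ a * nowcast + b on the overlap window.
n = len(recent_nowcasts)
mx = sum(recent_nowcasts) / n
my = sum(recent_cdc) / n
a = sum((x - mx) * (y - my) for x, y in zip(recent_nowcasts, recent_cdc)) \
    / sum((x - mx) ** 2 for x in recent_nowcasts)
b = my - a * mx

# Rescale the newest (still-biased) nowcast with the fitted correction.
latest_nowcast = 72.0
corrected = a * latest_nowcast + b
print(round(corrected, 6))  # 30.0: the 2.4x overshoot is rescaled away
```

The point is not the arithmetic but the posture: the nowcast is treated as a biased signal to be calibrated against trusted data, not as the truth.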
Jennifer L. Matjasko, et al in the American Journal of Preventive Medicine: “From the beginning, health has been recognized as a fertile area for applying nudges. The subtitle of the book Nudge is Improving Decisions about Health, Wealth, and Happiness. In their discussion of health behaviors, Thaler and Sunstein propose new nudges in health, such as simplifying decision making in Medicare. In fact, section 1511 of the Affordable Care Act requires large employers to automatically enroll workers into health insurance; similar to the previous example on organ donation, this switched from an opt-in to an opt-out system in order to harness the power of defaults. We will provide examples in which concepts from behavioral economics were applied to public health policy and led to improvements in health attitudes and behaviors. A summary of these applications is provided in Table 1.
Nudges can be effective because people are influenced by stimuli that are visible and new; thus, at least in theory, small changes can lead to behavior modification. Several studies have found that simply prompting (nudging) individuals to make a plan increases the probability of the subject eventually engaging in the prompted health behavior, such as immunizations, healthy eating, and cancer screening. For example, one study found that e-mailing patients appointment times and locations for their next influenza vaccination increased vaccination rates by 36%. Another intervention was even simpler. Rather than assigning a date and time for the patient to be vaccinated, patients were simply mailed a card that asked the patient to write down the day or day and time they planned to get the influenza vaccine (they were also sent the day and time of the free influenza vaccine clinics). Relative to a control condition (people who only received the information about the day and time of the clinics), those prompted to write down the day and time they planned to get the influenza vaccine were 4.2 percentage points (12.7%) more likely to receive the vaccine at those clinics. Those prompted to write down the date but not the time were not significantly more likely to be vaccinated at the clinics. Decision heuristics, such as highlighting consensus, may also help. Highlighting descriptive norms among a group of trusted experts, or priming (e.g., that 90% of doctors agree that vaccines are safe) can significantly reduce public concern about (childhood) vaccines and promote intentions to vaccinate.
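The parenthetical conversion is worth unpacking: a 4.2-percentage-point absolute gain that equals a 12.7 percent relative gain implies a control-group vaccination rate of roughly 33 percent. The sketch below is a back-calculation, not a figure reported in the study.

```python
# Percentage points measure absolute change; percent measures relative change.
# If a 4.2-point gain is a 12.7% relative gain, the baseline rate follows.
absolute_gain_pp = 4.2   # percentage points
relative_gain = 0.127    # 12.7%

implied_baseline = absolute_gain_pp / relative_gain
print(round(implied_baseline, 1))  # ~33.1% vaccination rate in the control group
```

Keeping the two units distinct matters when comparing nudge studies, since a "12.7% increase" sounds three times larger than the same result stated in percentage points.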
The significant influence of framing has been demonstrated in many public health domains, such as messaging about blood transfusion, smoking cessation, sunscreen use, and mammography utilization. In particular, gain-framed messages (i.e., emphasizing the health gains of a behavior or treatment) were more likely to have a positive impact on attitudes toward prevention activities (e.g., blood safety, sunscreen use, smoking cessation). Loss-framed messages may be more effective at encouraging screening behaviors, such as mammography screening. This points to the importance of testing messages for the uptake of preventive services among varying subgroups, many of which are now covered without cost-sharing as a result of the coverage of preventive services mandated in the Affordable Care Act.
David Lang on how citizen science bridges the gap between science and society: “It’s hard to find a silver lining in the water crisis in Flint, Michigan. The striking images of jugs of brown water being held high in protest are a symbol of institutional failure on a grand scale. It’s a disaster. But even as questions of accountability and remedy remain unanswered, there is already one lesson we can take away: Citizen science can be used as a powerful tool to build (or rebuild) the public’s trust in science.
Because the other striking image from Flint is this: Citizen-scientists sampling and testing their own water, from their homes and neighborhoods, and reporting the results as scientific data. Dr. Marc Edwards is the Virginia Tech civil engineering professor who led the investigation into the lead levels in Flint’s water supply, and in a February 2016 interview with The Chronicle of Higher Education, he gave an important answer about the methods his team used to obtain the data: “Normal people really appreciate good science that’s done in their interest. They stepped forward as citizen-scientists to explore what was happening to them and to their community, we provided some funding and the technical and analytical expertise, and they did all the work. I think that work speaks for itself.”
It’s a subtle but important message: The community is rising up and rallying by using science, not by reacting to it. Other scientists trying to highlight important issues and influence public opinion would do well to take note, because there’s a disconnect between what science reports and what the general public chooses to believe. For instance, 97 percent of scientists agree that the world’s climate is warming, likely due to human activities. Yet only 70 percent of Americans believe that global warming is real. Many of the most important issues of our time have the same, growing gap between scientific and societal consensus: genetically modified foods, evolution, and vaccines are often widely distrusted or disputed despite strong, positive scientific evidence…..
The good news is that we’re learning. Citizen science — the growing trend of involving non-professional scientists in the process of discovery — is proving to be a supremely effective tool. It now includes far more than birders and backyard astronomers, its first amateur champions. Over the past few years, the discipline has been gaining traction and popularity in academic circles too. Involving groups of amateur volunteers is now a proven strategy for collecting data over large geographic areas or over long periods of time. Online platforms like Zooniverse have shown that even an untrained human eye can spot anomalies in everything from wildebeest migrations to Martian surfaces. For certain types of research, citizen science just works.
While a long list of peer-reviewed papers now backs up the efficacy of citizen science, and a series of papers has shown its positive impact on students’ view of science, we’re just beginning to understand the impact of that participation on the wider perception of science. Truthfully, for now, most of what we know so far about its public impact is anecdotal, as in the work in Flint, or even on our online platform for explorers, OpenExplorer….
It makes sense that citizen science should affect public perception of science. The difference between “here are the results of a study” and “please help us in the process of discovery” is profound. It’s the difference between a rote learning moment and an immersive experience. And even if not everyone is getting involved, the fact that this is possible and that some members of a community are engaging makes science instantly more relatable. It creates what Tim O’Reilly calls an “architecture of participation.” Citizen scientists create the best interface for convincing the rest of the populace.
A recent article in Nature argued that the DIY biology community was, in fact, ahead of the scientific establishment in terms of proactively thinking about the safety and ethics of rapidly advancing biotechnology tools. They had to be. For those people opening up community labs so that anyone can come and participate, public health issues can’t be pushed aside or dealt with later. After all, they are the public that will be affected….(More)”
USA Gov: “There’s an app for everything in this digital age, including hundreds developed by the federal government. Here are six apps that we found especially useful.
Smart Traveler – Planning a trip out of the country this year? Smart Traveler by the State Department is great for all your trips abroad. Get the latest travel alerts and information on every country, including how to find and contact each U.S. Embassy.
FoodKeeper – Ever wonder how long you should cook chicken or how long food can sit in the fridge before it goes bad? The U.S. Department of Agriculture’s FoodKeeper is the tool for you. Not only can you find resources on food safety and post reminders of how long food will remain safe to eat, you can also ask a food safety specialist questions 24/7.
FEMA App – The FEMA app helps you learn how to prepare for and respond to disasters. It includes weather alerts, tips for building a basic emergency supply kit, and contact information for applying for assistance and finding local shelters and disaster recovery centers. Stay safe and know what to do when disasters happen.
IRS2GO – Tax season is here. This IRS app can help you track the status of your refund, make a payment, or find tax preparation assistance, sometimes for free.
CDC Influenza App – Stay on top of the flu this season and get the latest updates from this official Centers for Disease Control and Prevention app. It’s great for health practitioners, teachers, and parents, and includes tips for avoiding the flu and maps of influenza activity.
Dwellr – Have you ever wondered what U.S. city might best suit you? Then the Dwellr app is just for you. When you first open the app, you’re guided through an interactive survey, to better understand your ideal places to live based on data gathered by the Census Bureau….(More)”
Paper by Zachary F. Meisel, Lauren A. Houdek VonHoltz, and Raina M. Merchant in Healthcare: “Efforts to improve health care price transparency have garnered significant attention from patients, policy makers, and health insurers. In response to increasing consumer demand, state governments, insurance plans, and health care providers are reporting health care prices. However, such data often do not provide consumers with the most salient information: their own actual out-of-pocket cost for medical care. Although untested, crowdsourcing, a mechanism for the public to help answer complex questions, represents a potential solution to the problem of opaque hospital costs. This article explores the challenges and potential opportunities for crowdsourcing out-of-pocket costs for healthcare consumers….(More)”.
Kaveh Waddell in the Atlantic: “Big data can help solve problems that are too big for one person to wrap their head around. It’s helped businesses cut costs, cities plan new developments, intelligence agencies discover connections between terrorists, health officials predict outbreaks, and police forces get ahead of crime. Decision-makers are increasingly told to “listen to the data,” and make choices informed by the outputs of complex algorithms.
But when the data is about humans—especially those who lack a strong voice—those algorithms can become oppressive rather than liberating. For many poor people in the U.S., the data that’s gathered about them at every turn can obstruct attempts to escape poverty.
Low-income communities are among the most surveilled communities in America. And it’s not just the police that are watching, says Michele Gilman, a law professor at the University of Baltimore and a former civil-rights attorney at the Department of Justice. Public-benefits programs, child-welfare systems, and monitoring programs for domestic-abuse offenders all gather large amounts of data on their users, who are disproportionately poor.
In certain places, in order to qualify for public benefits like food stamps, applicants have to undergo fingerprinting and drug testing. Once people start receiving the benefits, officials regularly monitor them to see how they spend the money, and sometimes check in on them in their homes.
Data gathered from those sources can end up feeding back into police systems, leading to a cycle of surveillance. “It becomes part of these big-data information flows that most people aren’t aware they’re captured in, but that can have really concrete impacts on opportunities,” Gilman says.
Once an arrest crops up on a person’s record, for example, it becomes much more difficult for that person to find a job, secure a loan, or rent a home. And that’s not necessarily because loan officers or hiring managers pass over applicants with arrest records—computer systems that whittle down tall stacks of resumes or loan applications will often weed some out based on run-ins with the police.
When big-data systems make predictions that cut people off from meaningful opportunities like these, they can violate the legal principle of presumed innocence, according to Ian Kerr, a professor and researcher of ethics, law, and technology at the University of Ottawa.
Outside the court system, “innocent until proven guilty” is upheld by people’s due-process rights, Kerr says: “A right to be heard, a right to participate in one’s hearing, a right to know what information is collected about me, and a right to challenge that information.” But when opaque data-driven decision-making takes over—what Kerr calls “algorithmic justice”—some of those rights begin to erode….(More)”
Book by Calestous Juma: “The rise of artificial intelligence has rekindled a long-standing debate regarding the impact of technology on employment. This is just one of many areas where exponential advances in technology signal both hope and fear, leading to public controversy. This book shows that many debates over new technologies are framed in the context of risks to moral values, human health, and environmental safety. But it argues that behind these legitimate concerns often lie deeper, but unacknowledged, socioeconomic considerations. Technological tensions are often heightened by perceptions that the benefits of new technologies will accrue only to small sections of society while the risks will be more widely distributed. Similarly, innovations that threaten to alter cultural identities tend to generate intense social concern. As such, societies that exhibit great economic and political inequities are likely to experience heightened technological controversies.
Drawing from nearly 600 years of technology history, Innovation and Its Enemies identifies the tension between the need for innovation and the pressure to maintain continuity, social order, and stability as one of today’s biggest policy challenges. It reveals the extent to which modern technological controversies grow out of distrust in public and private institutions. Using detailed case studies of coffee, the printing press, margarine, farm mechanization, electricity, mechanical refrigeration, recorded music, transgenic crops, and transgenic animals, it shows how new technologies emerge, take root, and create new institutional ecologies that favor their establishment in the marketplace. The book uses these lessons from history to contextualize contemporary debates surrounding technologies such as artificial intelligence, online learning, 3D printing, gene editing, robotics, drones, and renewable energy. It ultimately makes the case for shifting greater responsibility to public leaders to work with scientists, engineers, and entrepreneurs to manage technological change, make associated institutional adjustments, and expand public engagement on scientific and technological matters….(More)”
John Wilbanks & Stephen H Friend in Nature Biotechnology: “To upend current barriers to sharing clinical data and insights, we need a framework that not only accounts for choices made by trial participants but also qualifies researchers wishing to access and analyze the data.
This March, Sage Bionetworks (Seattle) began sharing curated data collected from >9,000 participants of mPower, a smartphone-enabled health research study for Parkinson’s disease. The mPower study is notable as one of the first observational assessments of human health to rapidly achieve scale as a result of its design and execution purely through a smartphone interface. To support this unique study design, we developed a novel electronic informed consent process that includes participant-determined data-sharing preferences. It is through these preferences that the new data—including self-reported outcomes and quantitative sensor data—are shared broadly for secondary analysis. Our hope is that by sharing these data immediately, prior even to our own complete analysis, we will shorten the time to harnessing any utility that this study’s data may hold to improve the condition of patients who suffer from this disease.
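Mechanically, participant-determined sharing preferences amount to filtering each data release on a consent field. A minimal sketch of the idea (field names and values are hypothetical, not Sage Bionetworks' actual schema):

```python
# Hypothetical mPower-style records. Each participant's consent choice is
# stored alongside their data; only those who opted into broad sharing are
# included in the public release.
records = [
    {"participant": "p1", "sharing": "broad",      "tremor_score": 0.42},
    {"participant": "p2", "sharing": "study_only", "tremor_score": 0.77},
    {"participant": "p3", "sharing": "broad",      "tremor_score": 0.31},
]

shareable = [r for r in records if r["sharing"] == "broad"]
print([r["participant"] for r in shareable])  # ['p1', 'p3']
```

The design choice worth noting is that the preference travels with the record, so every downstream release can re-derive the shareable subset rather than relying on a one-time export.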
Turbulent times for data sharing
Our release of mPower comes at a turbulent time in data sharing. The power of data for secondary research is top of mind for many these days. Vice President Joe Biden, in heading President Barack Obama’s ambitious cancer ‘moonshot’, describes data sharing as second only to funding for the success of the effort. However, this powerful support for data sharing stands in opposition to the opinions of many within the research establishment. To wit: the august New England Journal of Medicine (NEJM)’s recent editorial suggesting that those who wish to reuse clinical trial data without the direct participation and approval of the original study team are “research parasites”. In the wake of colliding perspectives on data sharing, we must not lose sight of the scientific and societal ends served by such efforts.
It is important to acknowledge that meaningful data sharing is a nontrivial process that can require substantial investment to ensure that data are shared with sufficient context to guide data users. When data analysis is narrowly targeted to answer a specific and straightforward question—as with many clinical trials—this added effort might not result in improved insights. However, many areas of science, such as genomics, astronomy and high-energy physics, have moved to data collection methods in which large amounts of raw data are potentially of relevance to a wide variety of research questions, but the methodology of moving from raw data to interpretation is itself a subject of active research….(More)”