Examining Civic Engagement Links to Health


Findings from the Literature and Implications for a Culture of Health by the RAND Corporation: “The Robert Wood Johnson Foundation (RWJF) is leading a pioneering effort to advance a culture of health that “enables all in our diverse society to lead healthier lives, now and for generations to come.” The RWJF Culture of Health Action Framework is divided into four Action Areas, and civic engagement (which RWJF defines broadly as participating in activities that advance the public good) is identified as one of the three drivers for the Action Area, Making Health a Shared Value, along with mindset and expectations, and sense of community. Civic engagement can serve as a mechanism for translating changes in a health-related mindset and sense of community into tangible actions that could lead to new health-promoting partnerships, improvements in community health conditions, and greater integration among health services and systems for better health outcomes.

The authors of this report seek a closer focus on the causal relationship between civic engagement and health and well-being — that is, whether better health and well-being might promote more civic engagement, whether civic engagement might promote health or well-being, or perhaps both.

In this report, the authors conduct a structured review to understand what the scientific literature shows about the empirical relationship between health and civic engagement. They specifically examine whether health is a cause of civic engagement, a consequence of it, or both; what causal mechanisms underlie this link; and where there are gaps in knowledge in the field…(More)”.

The Upside of Deep Fakes


Paper by Jessica M. Silbey and Woodrow Hartzog: “It’s bad. We know. The dawn of “deep fakes” — convincing videos and images of people doing things they never did or said — puts us all in jeopardy in several different ways. Professors Bobby Chesney and Danielle Citron have noted that now “false claims — even preposterous ones — can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization.” The scholars identify a host of harms from deep fakes, ranging from people being exploited, extorted, and sabotaged, to societal harms like the erosion of democratic discourse and trust in social institutions, undermining public safety, national security, journalism, and diplomacy, deepening social divisions, and manipulation of elections. But it might not be all bad. Even beyond purported beneficial uses of deep-fake technology for education, art, and science, the looming deep-fake disaster might have a silver lining. Hear us out. We think deep fakes have an upside.

Crucial to our argument is the idea that deep fakes don’t create new problems so much as make existing problems worse. Cracks in systems, frameworks, strategies, and institutions that have been widening for years now threaten to spring open. Journalism, education, individual rights, democratic systems, and voting protocols have long been vulnerable. Deep fakes might just be the straw that breaks them. And therein lies opportunity for repair. Below we briefly address some deep problems and how finally addressing them may also neutralize the destructive force of deep fakes. We describe only three cultural institutions (education, journalism, and representative democracy) with deep problems that could be strengthened as a response to deep fakes for greater societal gains. But we encourage readers to think up more. We have a hunch that once we harness the upside of deep fakes, we may unlock creative solutions to other sticky social and political problems…(More)”.

The Social Afterlife


Paper by Andrew Gilden: “Death is not what it used to be. With the rise of social media and advances in digital technology, postmortem decision-making increasingly involves difficult questions about the ongoing social presence of the deceased. Should a Twitter account keep tweeting? Should a YouTube singer keep singing? Should Tinder photos be swiped left for the very last time? The traditional touchstones of effective estate planning — reducing transaction costs and maximizing estate value — do little to guide this new social afterlife. Managing a person’s legacy has shifted away from questions of financial investment and asset management to questions of emotional and cultural stewardship. This Article brings together the diverse areas of law that shape a person’s legacy and develops a new framework for addressing the evolving challenges of legacy stewardship.

This Article makes two main contributions. First, it identifies and critically examines the four models of stewardship that currently structure the laws of legacy: (1) the “freedom of disposition” model dominant in the laws of wills and trusts, (2) the “family inheritance” model dominant in copyright law, (3) the “public domain” model dominant in many states’ publicity rights laws, and (4) the “consumer contract” model dominant in over forty states’ new digital assets laws. Second, this Article develops a new stewardship model, which it calls the “decentered decedent.” The decentered decedent model recognizes that individuals occupy heterogeneous social contexts, and it channels postmortem decision-making into each of those contexts. Unlike existing stewardship models, this new model does not try to centralize stewardship decisions in any one stakeholder — the family, the public, the market, or even the decedent themselves. Instead, the decentered decedent model distributes stewardship across the diverse, dispersed communities that we all leave behind….(More)”.

Real-time flu tracking. By monitoring social media, scientists can track outbreaks as they happen.


Charles Schmidt at Nature: “Conventional influenza surveillance describes outbreaks of flu that have already happened. It is based on reports from doctors, and produces data that take weeks to process — often leaving the health authorities to chase the virus around, rather than get on top of it.

But every day, thousands of unwell people pour details of their symptoms and, perhaps unknowingly, their locations into search engines and social media, creating a trove of real-time flu data. If such data could be used to monitor flu outbreaks as they happen and to make accurate predictions about the virus’s spread, that could transform public-health surveillance.

Powerful computational tools such as machine learning and a growing diversity of data streams — not just search queries and social media, but also cloud-based electronic health records and human mobility patterns inferred from census information — are making it increasingly possible to monitor the spread of flu through the population by following its digital signal. Now, models that track flu in real time and forecast flu trends are making inroads into public-health practice.
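To make the approach concrete, below is a minimal sketch of the kind of model described here: a regression that nowcasts weekly influenza-like-illness (ILI) rates from search-query volumes. It is an illustration only, not the CDC's or any team's actual methodology, and the file names and column names are hypothetical.

```python
# A minimal sketch (not the CDC's actual models): nowcasting weekly
# influenza-like-illness (ILI) rates from search-query volumes with a
# ridge regression. File and column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import Ridge

# Hypothetical inputs, one row per week:
#   queries.csv: volumes for terms like "flu symptoms", "fever", "cough"
#   ili.csv: "ili_rate", the reported % of doctor visits for ILI
queries = pd.read_csv("queries.csv", index_col="week")
ili = pd.read_csv("ili.csv", index_col="week")["ili_rate"]

# Train on past weeks, then "nowcast" the most recent four weeks,
# for which official surveillance reports are not yet available.
train, recent = queries.index[:-4], queries.index[-4:]

model = Ridge(alpha=1.0)  # regularization guards against noisy terms
model.fit(queries.loc[train], ili.loc[train])

for week, estimate in zip(recent, model.predict(queries.loc[recent])):
    print(f"{week}: estimated ILI rate {estimate:.2f}%")
```

A real system would also validate against whole held-out seasons, since search behavior drifts over time and can quietly erode a model's accuracy.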

“We’re becoming much more comfortable with how these models perform,” says Matthew Biggerstaff, an epidemiologist who works on flu preparedness at the US Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.

In 2013–14, the CDC launched the FluSight Network, a website informed by digital modelling that predicts the timing, peak and short-term intensity of the flu season in ten regions of the United States and across the whole country. According to Biggerstaff, flu forecasting helps responders to plan ahead, so they can be ready with vaccinations and communication strategies to limit the effects of the virus. Encouraged by progress in the field, the CDC announced in January 2019 that it will spend US$17.5 million to create a network of influenza-forecasting centres of excellence, each tasked with improving the accuracy and communication of real-time forecasts.

The CDC is leading the way on digital flu surveillance, but health agencies elsewhere are following suit. “We’ve been working to develop and apply these models with collaborators using a range of data sources,” says Richard Pebody, a consultant epidemiologist at Public Health England in London. The capacity to predict flu trajectories two to three weeks in advance, Pebody says, “will be very valuable for health-service planning.”…(More)”.

The Internet Relies on People Working for Free


Owen Williams at OneZero: “When you buy a product like Philips Hue’s smart lights or an iPhone, you probably assume the people who wrote their code are being paid. While that’s true for those who directly author a product’s software, virtually every tech company also relies on thousands of bits of free code, made available through “open-source” projects on sites like GitHub and GitLab.

Often these developers are happy to work for free. Writing open-source software allows them to sharpen their skills, gain perspectives from the community, or simply help the industry by making innovations available at no cost. According to Google, which maintains hundreds of open-source projects, open source “enables and encourages collaboration and the development of technology, solving real-world problems.”

But when software used by millions of people is maintained by a community of volunteers, or by a single person, things can sometimes go horribly wrong. The catastrophic Heartbleed bug of 2014, which compromised the security of hundreds of millions of sites, was caused by a flaw in an open-source library called OpenSSL, which depended on a single full-time developer never making a mistake while updating and changing code used by millions. Other times, developers grow bored and abandon their projects, which can then be breached while no one is paying attention.

It’s hard to demand that programmers who are working for free troubleshoot problems or continue to maintain software that they’ve lost interest in for whatever reason — though some companies certainly try. Not adequately maintaining these projects, on the other hand, makes the entire tech ecosystem weaker. So some open-source programmers are asking companies to pay, not for their code, but for their support services….(More)”.

The business case for integrating claims and clinical data


Claudia Williams at MedCityNews: “The path to value-based care is arduous. For health plans, the ability to manage care, assess quality, lower costs, and streamline reporting is directly impacted by access to clinical data. For providers, the same is true of access to claims data.

Providers and health plans are increasingly demanding integrated claims and clinical data to drive and support value-based care programs. These organizations know that clinical and claims information from more than a single organization is the only way to get a true picture of patient care. From avoiding medication errors to enabling an evidence-based approach to treatment or identifying at-risk patients, the value of integrated claims and clinical data is immense — and will have far-reaching influence on both health outcomes and costs of care over time.

On July 30, Medicare announced the Data at the Point of Care pilot to share valuable claims data with Medicare providers in order to “fill in information gaps for clinicians, giving them a more structured and complete patient history with information like previous diagnoses, past procedures, and medication lists.” But that’s not the only example. To transition from fee-for-service to value-based care, providers and health plans have begun to partner with health data networks to access integrated clinical and claims data: 

Health plan adoption of integrated data strategy

A California health plan is partnering with one of the largest nonprofit health data networks in California to better integrate clinical and claims data. …

Providers leveraging claims data to understand patient medication patterns 

Doctors using advanced health data networks typically see a full list of patients’ medications, derived from claims, when they treat them. With this information available, doctors can avoid dangerous drug-to-drug interactions when they prescribe new medications. After a visit, they can also follow up and see whether a patient actually filled a prescription and is still taking it….(More)”.
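The workflow described above, checking a new prescription against a claims-derived medication list, can be sketched in a few lines. Everything below is illustrative: the interaction table, the fill records, and the helper names are invented for this example, not drawn from any real clinical knowledge base or vendor API.

```python
# Illustrative sketch of a point-of-care interaction check against a
# medication list inferred from pharmacy claims. The interaction pairs
# and fill records below are invented, not clinical data.

# Hypothetical known-interaction pairs (a real system would query a
# curated drug-interaction knowledge base).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def active_medications(claims, as_of_day):
    """Infer currently active drugs from claims fill records."""
    return {
        c["drug"]
        for c in claims
        if c["fill_day"] <= as_of_day < c["fill_day"] + c["days_supply"]
    }

def check_new_prescription(claims, as_of_day, new_drug):
    """Flag known interactions between a proposed drug and active meds."""
    return [
        f"{new_drug} + {drug}: {INTERACTIONS[frozenset({drug, new_drug})]}"
        for drug in active_medications(claims, as_of_day)
        if frozenset({drug, new_drug}) in INTERACTIONS
    ]

# Hypothetical claims-derived fill history (days as day numbers).
claims = [
    {"drug": "warfarin", "fill_day": 0, "days_supply": 90},
    {"drug": "metformin", "fill_day": 30, "days_supply": 90},
]
print(check_new_prescription(claims, as_of_day=45, new_drug="aspirin"))
# -> ['aspirin + warfarin: increased bleeding risk']
```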

Gender Gaps in Urban Mobility


Brief of the Data2X Big Data and Gender Brief Series by The GovLab, UNICEF, Universidad del Desarrollo, Telefónica R&D Center, ISI Foundation, and DigitalGlobe: “Mobility is gendered. For example, the household division of labor in many societies leads women and girls to take more multi-purpose, multi-stop trips than men. Women-headed households also tend to work more in the informal sector, with limited access to transportation subsidies, and their use of public transit is further reduced by the risk of violence in public spaces.

This brief summarizes a recent analysis of gendered urban mobility in 51 (out of 52) neighborhoods of Santiago, Chile, relying on the call detail records (CDRs) of a large sample of mobile phone users over a period of three months. We found that: 1) women move less overall than men; 2) women have a smaller radius of movement; and 3) women tend to concentrate their time in a smaller set of locations. These mobility gaps are linked to lower average incomes and fewer public and private transportation options. These insights, drawn from large volumes of passively generated, inexpensive data streaming in real time, can help policymakers design more gender-inclusive urban transit systems….(More)”.
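For readers curious about the mechanics, the two mobility measures reported in the brief, radius of movement and concentration of time in a small set of locations, can be computed from CDR-like records roughly as follows. This is a simplified sketch with invented coordinates, not the study's actual pipeline.

```python
# Simplified sketch of two CDR-based mobility measures: radius of
# gyration ("radius of movement") and count of distinct locations.
# Coordinates below are invented, not the study's data.
import math

def radius_of_gyration(points):
    """Root-mean-square distance of visits from their center of mass.
    Points are (x, y) positions in kilometers (projected coordinates)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / n)

# Hypothetical CDRs: each call or text is logged at a tower position.
user_a = [(0.0, 0.0), (0.5, 0.2), (0.3, 0.1), (12.0, 8.0)]  # ranges far
user_b = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.0), (0.3, 0.2)]   # stays local

for name, records in [("user_a", user_a), ("user_b", user_b)]:
    print(f"{name}: radius of gyration "
          f"{radius_of_gyration(records):.2f} km, "
          f"{len(set(records))} distinct locations")
```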

Weaponized Interdependence: How Global Economic Networks Shape State Coercion


Henry Farrell and Abraham L. Newman in International Security: “Liberals claim that globalization has led to fragmentation and decentralized networks of power relations. This does not explain how states increasingly “weaponize interdependence” by leveraging global networks of informational and financial exchange for strategic advantage. The theoretical literature on network topography shows how standard models predict that many networks grow asymmetrically so that some nodes are far more connected than others. This model nicely describes several key global economic networks, centering on the United States and a few other states. Highly asymmetric networks allow states with (1) effective jurisdiction over the central economic nodes and (2) appropriate domestic institutions and norms to weaponize these structural advantages for coercive ends. In particular, two mechanisms can be identified. First, states can employ the “panopticon effect” to gather strategically valuable information. Second, they can employ the “chokepoint effect” to deny network access to adversaries. Tests of the plausibility of these arguments across two extended case studies that provide variation both in the extent of U.S. jurisdiction and in the presence of domestic institutions—the SWIFT financial messaging system and the internet—confirm the framework’s expectations. A better understanding of the policy implications of the use and potential overuse of these tools, as well as the response strategies of targeted states, will recast scholarly debates on the relationship between economic globalization and state coercion….(More)”
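The abstract's point about network topography, that standard growth models yield a few nodes far more connected than the rest, can be illustrated with a small preferential-attachment simulation. The sketch below shows that general result only; it is not the authors' analysis.

```python
# Small preferential-attachment simulation: new nodes attach to existing
# nodes with probability proportional to current degree, producing an
# asymmetric network dominated by a few hubs. Illustration only.
import random

random.seed(42)

edges = [(0, 1)]
endpoints = [0, 1]  # each node appears once per incident edge,
                    # so sampling this list is degree-proportional
for new_node in range(2, 1000):
    target = random.choice(endpoints)
    edges.append((new_node, target))
    endpoints.extend([new_node, target])

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

top = sorted(degree.values(), reverse=True)[:5]
print("Five most-connected nodes' degrees:", top)
print(f"Their share of all connections: {sum(top) / (2 * len(edges)):.0%}")
```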

Community Colleges Boost STEM Student Success Through Behavioral Nudging


Press Release: “JFF, a national nonprofit driving transformation in the American workforce and education systems, and Persistence Plus, which pairs behavioral insights with intelligent text messaging to improve student success, today released the findings from an analysis that examined the effects of personalized nudging on nearly 10,000 community college students. The study, conducted over two years at four community colleges, found that behavioral nudging had a significant impact on student persistence rates—with strong improvements among students of color and older adult learners, who are often underrepresented among graduates of STEM (science, technology, engineering, and math) programs.

“These results offer powerful evidence on the potential, and imperative, of using technology to support students during the most in-demand, and often most challenging, courses and majors,” said Maria Flynn, president and CEO of JFF. “With millions of STEM jobs going unfilled, closing the gap in STEM achievement has profound economic—and equity—implications.” 

In a multiyear initiative called “Nudging to STEM Success,” which was funded by the Helmsley Charitable Trust, JFF and Persistence Plus selected four colleges to implement the nudging initiative campuswide: Lakeland Community College in Kirtland, Ohio; Lorain County Community College in Elyria, Ohio; Stark State College in North Canton, Ohio; and John Tyler Community College in Chester, Virginia.

A randomized control trial in the summer of 2017 showed that the nudges increased first-to-second-year persistence for STEM students by 10 percentage points. The results of that trial will be presented in an upcoming peer-reviewed paper titled “A Summer Nudge Campaign to Motivate Community College STEM Students to Reenroll.” The paper will be published in AERA Open, an open-access journal published by the American Educational Research Association. 

Following the 2017 trial, the four colleges scaled the support to nearly 10,000 students, and over the next two years, JFF and Persistence Plus found that the nudging support had a particularly strong impact on students of color and students over the age of 25—two groups that have historically had lower persistence rates than other students….(More)”.

To Regain Policy Competence: The Software of American Public Problem-Solving


Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.

Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon or norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.

My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.

I am not alone in my observations. Francis Fukuyama recently concluded that “[t]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”1 This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.2 The 2003 National Commission on Public Service observed,

The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.3

And they wrote that as the American occupation of Iraq was just beginning.

In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.

I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.