We Need a PBS for Social Media


Mark Coatney at the New York Times: “Social media is an opportunity wrapped in a problem. YouTube spreads propaganda and is toxic to children. Twitter spreads propaganda and is toxic to racial relations. Facebook spreads propaganda and is toxic to democracy itself.

Such problems aren’t surprising when you consider that all these companies operate on the same basic model: Create a product that maximizes the attention you can command from a person, collect as much data as you can about that person, and sell it.

Proposed solutions like breaking up companies and imposing regulation have been met with resistance: The platforms, understandably, worry that their profits might be reduced from staggering to merely amazing. And this may not be the best course of action anyway.

What if the problem is something that can’t be solved by existing for-profit media platforms? Maybe the answer to fixing social media isn’t to try to change companies whose business models are built around products that hijack our attention, but instead to work to create a less toxic alternative.

Nonprofit public media is part of the answer. More than 50 years ago, President Lyndon Johnson signed the Public Broadcasting Act, committing federal funds to create public television and radio that would “be responsive to the interests of people.”

It isn’t a big leap to expand “public media” to include not just television and radio but also social media. In 2019, the definition of “media” is considerably larger than it was in 1967. Commentary on Twitter, memes on Instagram and performances on TikTok are all as much a part of the media landscape today as newspapers and television news.

Public media came out of a recognition that the broadcasting spectrum is a finite resource. TV broadcasters given licenses to use the spectrum were expected to provide programming like news and educational shows in return. But that was not enough. To make sure that some of that finite resource would always be used in the public interest, Congress established public media.

Today, the limited resource isn’t the spectrum — it’s our attention….(More)”.

A fairer way forward for AI in health care


Linda Nordling at Nature: “When data scientists in Chicago, Illinois, set out to test whether a machine-learning algorithm could predict how long people would stay in hospital, they thought that they were doing everyone a favour. Keeping people in hospital is expensive, and if managers knew which patients were most likely to be eligible for discharge, they could move them to the top of doctors’ priority lists to avoid unnecessary delays. It would be a win–win situation: the hospital would save money and people could leave as soon as possible.

Starting their work at the end of 2017, the scientists trained their algorithm on patient data from the University of Chicago academic hospital system. Taking data from the previous three years, they crunched the numbers to see what combination of factors best predicted length of stay. At first they only looked at clinical data. But when they expanded their analysis to other patient information, they discovered that one of the best predictors for length of stay was the person’s postal code. This was puzzling. What did the duration of a person’s stay in hospital have to do with where they lived?

As the researchers dug deeper, they became increasingly concerned. The postal codes that correlated to longer hospital stays were in poor and predominantly African American neighbourhoods. People from these areas stayed in hospitals longer than did those from more affluent, predominantly white areas. The reason for this disparity evaded the team. Perhaps people from the poorer areas were admitted with more severe conditions. Or perhaps they were less likely to be prescribed the drugs they needed.

The finding threw up an ethical conundrum. If optimizing hospital resources was the sole aim of their programme, people’s postal codes would clearly be a powerful predictor for length of hospital stay. But using them would, in practice, divert hospital resources away from poor, black people towards wealthy white people, exacerbating existing biases in the system.
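
The mechanism is easy to reproduce in miniature. The toy simulation below (my own illustration, not the Chicago team’s actual model; the group labels and numbers are invented) shows how a predictor trained on historical averages inherits a disparity and, when used to prioritize discharges, routes attention toward the already-advantaged group:

```python
import random

random.seed(1)

# Hypothetical data: two postal-code groups with historically different
# lengths of stay (the disparity itself may stem from severity at admission
# or unequal access to medication, as the article notes).
def simulate_patient():
    area = random.choice(["affluent", "poor"])
    base = 3.0 if area == "affluent" else 5.0   # historical mean stay, days
    stay = max(0.5, random.gauss(base, 1.0))
    return area, stay

history = [simulate_patient() for _ in range(10_000)]

# A naive "model": predict length of stay as the historical mean for the area.
def mean_stay(area):
    stays = [s for a, s in history if a == area]
    return sum(stays) / len(stays)

prediction = {area: mean_stay(area) for area in ("affluent", "poor")}

# Prioritizing discharge support for the *shortest* predicted stays routes
# hospital attention toward affluent areas: the optimization inherits the bias.
priority = sorted(prediction, key=prediction.get)
print(prediction)
print(priority[0])  # the affluent group is served first
```

Nothing in the code mentions race or income, yet the postal-code proxy is enough to reproduce the skew the researchers observed.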

“The initial goal was efficiency, which in isolation is a worthy goal,” says Marshall Chin, who studies health-care ethics at University of Chicago Medicine and was one of the scientists who worked on the project. But fairness is also important, he says, and this was not explicitly considered in the algorithm’s design….(More)”.

The Church of Techno-Optimism


Margaret O’Mara at the New York Times: “…But Silicon Valley does have a politics. It is neither liberal nor conservative. Nor is it libertarian, despite the dog-eared copies of Ayn Rand’s novels that you might find strewn about the cubicles of a start-up in Palo Alto.

It is techno-optimism: the belief that technology and technologists are building the future and that the rest of the world, including government, needs to catch up. And this creed burns brightly, undimmed by the anti-tech backlash. “It’s now up to all of us together to harness this tremendous energy to benefit all humanity,” the venture capitalist Frank Chen said in a November 2018 speech about artificial intelligence. “We are going to build a road to space,” Jeff Bezos declared as he unveiled plans for a lunar lander last spring. And as Elon Musk recently asked his Tesla shareholders, “Would I be doing this if I weren’t optimistic?”

But this is about more than just Silicon Valley. Techno-optimism has deep roots in American political culture, and its belief in American ingenuity and technological progress. Reckoning with that history is crucial to the discussion about how to rein in Big Tech’s seemingly limitless power.

The language of techno-optimism first appears in the rhetoric of American politics after World War II. “Science, the Endless Frontier” was the title of the soaringly techno-optimistic 1945 report by Vannevar Bush, the chief science adviser to Franklin Roosevelt and Harry Truman, which set in motion the American government’s unprecedented postwar spending on research and development. That wave of money transformed the Santa Clara Valley and turned Stanford University into an engineering powerhouse. Dwight Eisenhower filled the White House with advisers whom he called “my scientists.” John Kennedy, announcing America’s moon shot in 1962, declared that “man, in his quest for knowledge and progress, is determined and cannot be deterred.”

In a 1963 speech, a founder of Hewlett-Packard, David Packard, looked back on his life during the Depression and marveled at the world that he lived in, giving much of the credit to technological innovation unhindered by bureaucratic interference: “Radio, television, Teletype, the vast array of publications of all types bring to a majority of the people everywhere in the world information in considerable detail, about what is going on everywhere else. Horizons are opened up, new aspirations are generated.”…(More)”

The Future of Political Philosophy


Katrina Forrester in Boston Review: “Since the upheavals of the financial crisis of 2008 and the political turbulence of 2016, it has become clear to many that liberalism is, in some sense, failing. The turmoil has given pause to economists, some of whom responded by renewing their study of inequality, and to political scientists, who have since turned to problems of democracy, authoritarianism, and populism in droves. But Anglo-American liberal political philosophers have had less to say than they might have.

The silence is due in part to the nature of political philosophy today—the questions it considers worth asking and those it sidelines. Since Plato, philosophers have always asked about the nature of justice. But for the last five decades, political philosophy in the English-speaking world has been preoccupied with a particular answer to that question developed by the American philosopher John Rawls.

Rawls’s work in the mid-twentieth century ushered in a paradigm shift in political philosophy. In his wake, philosophers began exploring what justice and equality meant in the context of modern capitalist welfare states, using those concepts to describe, in impressive and painstaking detail, the ideal structure of a just society—one that turned out to closely resemble a version of postwar social democracy. Working within this framework, they have since elaborated a body of abstract moral principles that provide the philosophical backbone of modern liberalism. These ideas are designed to help us see what justice and equality demand—of our society, of our institutions, and of ourselves.

This is a story of triumph: Rawls’s philosophical project was a major success. It is not that political philosophers after Rawls didn’t disagree; fine-grained and heated arguments are what philosophers do best. But over the last few decades they built a robust consensus about the fundamental rules of the game, conceiving of themselves as engaged in a common intellectual project with a shared conceptual framework. The governing concepts and aims of political philosophy have, for generations, been more or less taken for granted.

But if modern political philosophy is bound up with modern liberalism, and liberalism is failing, it may well be time to ask whether these apparently timeless ideas outlived their usefulness….(More)”.

The Art of Values-Based Innovation for Humanitarian Action


Chris Earney & Aarathi Krishnan at SSIR: “Contrary to popular belief, innovation isn’t new to the humanitarian sector. Organizations like the Red Cross and Red Crescent have a long history of innovating in communities around the world. Humanitarians have worked both on a global scale—for example, to innovate financing and develop the Humanitarian Code of Conduct—and on a local level—to reduce urban fire risks in informal settlements in Kenya, for instance, and improve waste management to reduce flood risks in Indonesia.

Even in its more-bureaucratic image more than 50 years ago, the United Nations commissioned a report to better understand the role that innovation, science, and technology could play in advancing human rights and development. Titled the “Sussex Manifesto,” the report outlined how to reshape and reorganize the role of innovation and technology so that it was more relevant, equitable, and accessible to the humanitarian and development sectors. Although those who commissioned the manifesto ultimately deemed it too ambitious for its era, the effort nevertheless reflects the UN’s longstanding interest in understanding how far-reaching ideas can elicit fundamental and needed progress. It challenged the humanitarian system to be explicit about its values and understand how those values could lead to radical actions for the betterment of humanity.

Since then, 27 UN organizations have formed teams dedicated to supporting innovation. Today, the aspiration to innovate extends to NGOs and donor communities, and has led to myriad approaches to brainstorming, design thinking, co-creation, and other activities developed to support novelty.

However, in the face of a more-globalized, -connected, and -complex world, we need, more than ever, to position innovation as a bold and courageous way of doing things. It’s common for people to dismiss innovation as a process that tinkers around the edges of organizations, but we need to think of innovation as a tool for changing the way our systems and practices work so that they better serve communities. This matters, because humanitarian needs are only going to grow, and the resources available to us likely won’t match that need. When the values that underpin our attitudes and behaviors as humanitarians drive innovation, we can better focus our efforts and create more impact with less—and we’re going to have to…(More)”.

Citizens need to know numbers


David Spiegelhalter at Aeon: “…Many criticised the Leave campaign for its claim that Britain sends the EU £350 million a week. When Boris Johnson repeated it in 2017 – by which time he was Foreign Secretary – the chair of the UK Statistics Authority (the official statistical watchdog) rebuked him, noting it was a ‘clear misuse of official statistics’. A private criminal prosecution was even made against Johnson for ‘misconduct in a public office’, but it was halted by the High Court.

The message on the bus had a strong emotional resonance with millions of people, even though it was essentially misinformation. The episode demonstrates both the power and weakness of statistics: they can be used to amplify an entire worldview, and yet they often do not stand up to scrutiny. This is why statistical literacy is so important – in an age in which data plays an ever-more prominent role in society, the ability to spot ways in which numbers can be misused, and to be able to deconstruct claims based on statistics, should be a standard civic skill.

Statistics are not cold hard facts – as Nate Silver writes in The Signal and the Noise (2012): ‘The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.’ Not only has someone used extensive judgment in choosing what to measure, how to define crucial ideas, and how to analyse them, but the manner in which they are communicated can utterly change their emotional impact. Let’s assume that £350 million is the actual weekly contribution to the EU. I often ask audiences to suggest what they would put on the side of the bus if they were on the Remain side. A standard option for making an apparently big number look small is to consider it as a proportion of an even bigger number: for example, the UK’s GDP is currently around £2.3 trillion, and so this contribution would comprise less than 1 per cent of GDP, around six months’ typical growth. An alternative device is to break down expenditure into smaller, more easily grasped units: for example, as there are 66 million people in the UK, £350 million a week is equivalent to around 75p a day, less than $1, say about the cost of a small packet of crisps (potato chips). If the bus had said ‘We each send the EU the price of a packet of crisps each day’, the campaign might not have been so successful.
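
The two reframings are a few lines of arithmetic (the £350 million, 66 million population, and £2.3 trillion GDP figures come from the article; the calculation is a back-of-the-envelope check, not from the source):

```python
weekly_contribution = 350_000_000      # £ per week, the figure on the bus
population = 66_000_000                # UK population
gdp = 2_300_000_000_000                # UK GDP, roughly £2.3 trillion

# Reframing 1: as a share of a much bigger number.
annual = weekly_contribution * 52              # ~£18.2 billion per year
share_of_gdp = annual / gdp                    # ≈ 0.008, i.e. under 1% of GDP

# Reframing 2: broken down into small, graspable units.
per_person_per_day = weekly_contribution / population / 7   # ≈ £0.76 a day

print(share_of_gdp)
print(per_person_per_day)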

Numbers are often used to persuade rather than inform, and statistical literacy needs to be improved, so surely we need more statistics courses in schools and universities? Well, yes, but this should not mean more of the same. After years of researching and teaching statistical methods, I am not alone in concluding that the way in which we teach statistics can be counterproductive, with an overemphasis on mathematical foundations through probability theory, long lists of tests and formulae to apply, and toy problems involving, say, calculating the standard deviation of the weights of cod. The American Statistical Association’s Guidelines for Assessment and Instruction in Statistics Education (2016) strongly recommended changing the pedagogy of statistics into one based on problem-solving, real-world examples, and with an emphasis on communication….(More)”.

The business case for integrating claims and clinical data


Claudia Williams at MedCityNews: “The path to value-based care is arduous. For health plans, their ability to manage care, assess quality, lower costs, and streamline reporting is directly impacted by access to clinical data. For providers, the same can be said due to their lack of access to claims data. 

Providers and health plans are increasingly demanding integrated claims and clinical data to drive and support value-based care programs. These organizations know that clinical and claims information from more than a single organization is the only way to get a true picture of patient care. From avoiding medication errors to enabling an evidence-based approach to treatment or identifying at-risk patients, the value of integrated claims and clinical data is immense — and will have far-reaching influence on both health outcomes and costs of care over time.

On July 30, Medicare announced the Data at the Point of Care pilot to share valuable claims data with Medicare providers in order to “fill in information gaps for clinicians, giving them a more structured and complete patient history with information like previous diagnoses, past procedures, and medication lists.” But that’s not the only example. To transition from fee-for-service to value-based care, providers and health plans have begun to partner with health data networks to access integrated clinical and claims data: 

Health plan adoption of integrated data strategy

A California health plan is partnering with one of the largest nonprofit health data networks in California, to better integrate clinical and claims data. …

Providers leveraging claims data to understand patient medication patterns 

Doctors using advanced health data networks typically see a full list of patients’ medications, derived from claims, when they treat them. With this information available, doctors can avoid dangerous drug-to-drug interactions when they prescribe new medications. After a visit, they can also follow up and see if a patient actually filled a prescription and is still taking it….(More)”.

Complex Systems Change Starts with Those Who Use the Systems


Madeleine Clarke & John Healy at Stanford Social Innovation Review: “Philanthropy, especially in the United States and Europe, is increasingly espousing the idea that transformative shifts in social care, education, and health systems are needed. Yet successful examples of systems-level reform are rare. Concepts such as collective impact (funder-driven, cross-sector collaboration), implementation science (methods to promote the systematic uptake of research findings), and catalytic philanthropy (funders playing a powerful role in mobilizing fundamental reforms) have gained prominence as pathways to this kind of change. These approaches tend to characterize philanthropy—usually foundations—as the central, heroic actor. Meanwhile, research on change within social and health services continues to indicate that deeply ingrained beliefs and practices, such as overly medicalized models of care for people with intellectual disabilities, and existing resource distribution, which often maintains the pay and conditions of professional groups, inhibit the introduction of reform into complex systems. A recent report by RAND, for example, showed that a $1 billion, seven-year initiative to improve teacher performance failed, and cited the complexity of the system and practitioners’ resistance to change as possible explanations. 

We believe the most effective way to promote systems-level social change is to place the voices of people who use social services—the people for whom change matters most—at the center of change processes. But while many philanthropic organizations tout the importance of listening to the “end beneficiaries” or “service users,” the practice nevertheless remains an underutilized methodology for countering systemic obstacles to change and, ultimately, reforming complex systems….(More)”.

The Why of the World


Book review by Tim Maudlin of The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie: “Correlation is not causation.” Though true and important, the warning has hardened into the familiarity of a cliché. Stock examples of so-called spurious correlations are now a dime a dozen. As one example goes, a Pacific island tribe believed flea infestations to be good for one’s health because they observed that healthy people had fleas while sick people did not. The correlation is real and robust, but fleas do not cause health, of course: they merely indicate it. Fleas on a fevered body abandon ship and seek a healthier host. One should not seek out and encourage fleas in the quest to ward off sickness.

The rub lies in another observation: that the evidence for causation seems to lie entirely in correlations. But for seeing correlations, we would have no clue about causation. The only reason we discovered that smoking causes lung cancer, for example, is that we observed correlations in that particular circumstance. And thus a puzzle arises: if causation cannot be reduced to correlation, how can correlation serve as evidence of causation?
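
The fleas example can be made concrete with a toy simulation (my own sketch, not from the book; the probabilities are invented). Health is the common cause: it drives both recovery and flea presence, so conditioning on fleas shows a strong association, while intervening on fleas changes nothing:

```python
import random

random.seed(0)

N = 100_000
people = []
for _ in range(N):
    # Health is the common cause of both outcomes.
    healthy = random.random() < 0.7
    # Fleas track health closely: they abandon fevered hosts.
    fleas = random.random() < (0.9 if healthy else 0.1)
    people.append((healthy, fleas))

# Observation: among people WITH fleas, what fraction are healthy?
with_fleas = [h for h, f in people if f]
p_healthy_given_fleas = sum(with_fleas) / len(with_fleas)

# Intervention: put fleas on everyone, do(fleas = 1). Health is untouched,
# because in this model fleas are a symptom, not a cause.
p_healthy_given_do_fleas = sum(h for h, _ in people) / N

print(p_healthy_given_fleas)      # ≈ 0.95: strong observed association
print(p_healthy_given_do_fleas)   # ≈ 0.70: intervening recovers the base rate
```

The gap between the two numbers is exactly the gap between seeing and doing that Pearl’s calculus is built to formalize: P(healthy | fleas) is not P(healthy | do(fleas)).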

The Book of Why, co-authored by the computer scientist Judea Pearl and the science writer Dana Mackenzie, sets out to give a new answer to this old question, which has been around—in some form or another, posed by scientists and philosophers alike—at least since the Enlightenment. In 2011 Pearl won the Turing Award, computer science’s highest honor, for “fundamental contributions to artificial intelligence through the development of a calculus of probabilistic and causal reasoning,” and this book sets out to explain what all that means for a general audience, updating his more technical book on the same subject, Causality, published nearly two decades ago. Written in the first person, the new volume mixes theory, history, and memoir, detailing both the technical tools of causal reasoning Pearl has developed as well as the tortuous path by which he arrived at them—all along bucking a scientific establishment that, in his telling, had long ago contented itself with data-crunching analysis of correlations at the expense of investigation of causes. There are nuggets of wisdom and cautionary tales in both these aspects of the book, the scientific as well as the sociological…(More)”.

How to Build Artificial Intelligence We Can Trust


Gary Marcus and Ernest Davis at the New York Times: “Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality….

We face a choice. We can stick with today’s approach to A.I. and greatly restrict what the machines are allowed to do (lest we end up with autonomous-vehicle crashes and machines that perpetuate bias rather than reduce it). Or we can shift our approach to A.I. in the hope of developing machines that have a rich enough conceptual understanding of the world that we need not fear their operation. Anything else would be too risky….(More)”.