The ambitious effort to piece together America’s fragmented health data


Nicole Wetsman at The Verge: “From the early days of the COVID-19 pandemic, epidemiologist Melissa Haendel knew that the United States was going to have a data problem. There didn’t seem to be a national strategy to control the virus, and cases were springing up in sporadic hotspots around the country. With such a patchwork response, nationwide information about the people who got sick would probably be hard to come by.

Other researchers around the country were pinpointing similar problems. In Seattle, Adam Wilcox, the chief analytics officer at UW Medicine, was reaching out to colleagues. The city was the first US COVID-19 hotspot. “We had 10 times the data, in terms of just raw testing, than other areas,” he says. He wanted to share that data with other hospitals, so they would have that information on hand before COVID-19 cases started to climb in their area. Everyone wanted to get as much data as possible in the hands of as many people as possible, so they could start to understand the virus.

Haendel was in a good position to help make that happen. She’s the chair of the National Center for Data to Health (CD2H), a National Institutes of Health program that works to improve collaboration and data sharing within the medical research community. So one week in March, just after she’d started working from home and pulled her 10th grader out of school, she started trying to figure out how to use existing data-sharing projects to help fight this new disease.

The solution Haendel and CD2H landed on sounds simple: a centralized, anonymous database of health records from people who tested positive for COVID-19. Researchers could use the data to figure out why some people get very sick and others don’t, how conditions like cancer and asthma interact with the disease, and which treatments end up being effective.

But in the United States, building that type of resource isn’t easy. “The US healthcare system is very fragmented,” Haendel says. “And because we have no centralized healthcare, that makes it also the case that we have no centralized healthcare data.” Hospitals, citing privacy concerns, don’t like to give out their patients’ health data. Even if hospitals agree to share, they all use different ways of storing information. At one institution, the classification “female” could go into a record as a one, and “male” could go in as a two — and at the next, they’d be reversed….(More)”.
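
To make the encoding problem concrete, here is a minimal sketch of the kind of harmonization step a pooled research database would need before combining records from different hospitals; the site names, code mappings, and record fields are hypothetical illustrations, not drawn from the article or any real system.

```python
# Minimal sketch of harmonizing inconsistent site-level codes before pooling
# records into a shared research database. Site names, code mappings, and
# record fields are hypothetical; real harmonization efforts (e.g., mapping
# to a common data model) are far more involved.

# Each contributing hospital documents how it encodes sex in its own records.
SITE_SEX_CODES = {
    "hospital_a": {1: "female", 2: "male"},
    "hospital_b": {1: "male", 2: "female"},  # same numeric codes, reversed meaning
}

def harmonize_record(site: str, record: dict) -> dict:
    """Translate a site-specific record into the shared vocabulary."""
    mapping = SITE_SEX_CODES[site]
    return {
        "site": site,
        "patient_id": record["patient_id"],
        "sex": mapping[record["sex_code"]],
        "covid_test_result": record["covid_test_result"],
    }

if __name__ == "__main__":
    raw_records = [
        ("hospital_a", {"patient_id": "A-001", "sex_code": 1, "covid_test_result": "positive"}),
        ("hospital_b", {"patient_id": "B-017", "sex_code": 2, "covid_test_result": "positive"}),
    ]
    pooled = [harmonize_record(site, rec) for site, rec in raw_records]
    for row in pooled:
        print(row)  # both rows now report sex in the same shared vocabulary
```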

High-Stakes AI Decisions Need to Be Automatically Audited


Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?
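
As a rough illustration of the kind of external, query-based audit described above, the sketch below probes a stand-in scoring function with paired hypothetical cases that differ only in one attribute and compares the outputs; the model, feature names, and weights are invented for the example and do not describe any real deployed system.

```python
# Minimal sketch of an external counterfactual audit: query a black-box model
# with hypothetical cases that differ only in one sensitive attribute and
# compare the outputs. The scoring function below is a toy stand-in; a real
# audit would call the deployed system's prediction interface instead.

from typing import Callable, Dict, List

def toy_risk_model(case: Dict) -> float:
    """Stand-in for a proprietary risk-scoring system (hypothetical)."""
    score = 0.3 * case["prior_offenses"] + 0.1 * case["age_under_25"]
    if case["gender"] == "male":
        score += 0.05  # an (undesirable) dependence the audit should surface
    return round(score, 3)

def counterfactual_audit(model: Callable[[Dict], float],
                         cases: List[Dict],
                         attribute: str,
                         values: List[str]) -> List[Dict]:
    """Score each case under every value of the audited attribute."""
    results = []
    for case in cases:
        scores = {}
        for value in values:
            probe = dict(case, **{attribute: value})  # synthetic hypothetical case
            scores[value] = model(probe)
        results.append({
            "case": case,
            "scores": scores,
            "max_gap": max(scores.values()) - min(scores.values()),
        })
    return results

if __name__ == "__main__":
    synthetic_cases = [
        {"prior_offenses": 1, "age_under_25": 1, "gender": "female"},
        {"prior_offenses": 0, "age_under_25": 0, "gender": "female"},
    ]
    for result in counterfactual_audit(toy_risk_model, synthetic_cases,
                                       "gender", ["female", "male"]):
        print(result["scores"], "gap:", result["max_gap"])
```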

Auditable AI has several advantages over explainable AI. First, having a neutral third party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, because audits only query the system from the outside, the producers of the software do not have to expose the trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.

Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?

Early examples of auditable AI are already having a positive impact. The ACLU’s recent audit revealed that Amazon’s facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.

In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.

Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.

How smart cities are boosting citizen engagement


Article by Joe Appleton: “…many governments are implementing new and exciting ideas to try to boost citizen engagement and overcome the obstacles that prevent citizen involvement. Here are a few examples of how cities are engaging with citizens in the 21st century.

REVOLUTIONIZING CITY HALL

The city of San Francisco has been working hard to improve resident participation. To help solve city-wide problems, it created a program called Civic Bridge, a platform that brings residents and volunteers from the private sector together with city staff, allowing city hall to work closely with private-sector professionals on public challenges.

By enlisting the help of hundreds of otherwise unreachable residents, the program allowed the city to produce fast and effective solutions to problems such as homelessness, access to healthcare, and other social issues.

Civy is another program designed to put city officials directly in touch with residents. It is a cloud-based platform that gives citizens a confidential space to share their thoughts and opinions on citywide projects, helping officials make better-informed decisions.

REMOVING BARRIERS

Physically traveling to a city hall can be an immense barrier to citizen participation. However, some innovative cities are taking steps to bring city hall into residents’ homes. To do this, they are enlisting help from platforms such as CitizenLab. CitizenLab was first launched in 2016, and it has proven itself to be a practical medium for many European cities. The platform boosts citizen engagement by sending data directly to members of the public via a user-friendly mobile interface. Officials can see the results from surveys and questionnaires in real time and use the data collected to make decisions based on real citizen insights.

Civocracy is a similar digital platform that has been designed to promote citizen participation, champion collaborative governance projects, and improve city hall efficiency. It focuses on direct communication between residents and officials, giving citizens a platform to discuss projects and allowing officials to gather ideas from the public. This service is currently being used in Amsterdam, Nice, Potsdam, Brussels, Lyon, and many other European cities.

Platforms like these are essential for removing the obstacles that many citizens face when interacting with city governments. As a result, cities can enjoy a more citizen-centric form of smart government.

BOOSTING PARTICIPATION

There’s more to citizen engagement than giving and receiving feedback for ideas and projects. To boost participation, some cities have really embraced 21st century trends. 

For example, two cities in the UK (London and Plymouth) have been experimenting with crowdfunding for potential city projects. Proposals for urban projects are listed on popular crowdfunding websites, in an open and transparent manner, allowing residents and investors to directly contribute funds to projects and initiatives that they’re interested in. In some cases, the local authorities will support winning proposals by matching the raised funds.

Crowdfunding can be used as a platform for citizens to show off their own ideas and initiatives, and to highlight potential problems in the community. The service can be used for a wide range of applications, from restoring derelict buildings to launching social health programs.

Allowing citizens to show their approval with their own money is one way to boost participation; however, cities also need other ways to attract attention and let citizens voice their opinions. Maptionnaire is one such tool.

Maptionnaire is an online tool that creates a virtual map of a city, where residents can freely offer their advice, opinions, and feelings about areas of the city or specific projects. Users simply leave comments that tell city officials directly how they feel about a place or plan.

This is a great tool that can provide widely representative data about city plans. The platform can also collect votes on certain projects and deliver fast results. Since it can be accessed remotely, it also allows citizens to say what they want without feeling intimidated by a crowd or swayed by popular opinion.

NURTURING IDEAS

Encouraging public feedback is one way to boost participation, but some local authorities are going a step further by directly asking citizens for solutions. By letting citizens formulate their own solutions and giving them the tools to realize those solutions, cities can substantially grow interest in city governance.

For example, Lublin is the first city in Poland to adopt an initiative called the Green Citizen’s Budget. This participatory budgeting scheme invited residents to put forward ideas for improving urban greenery, allocated a budget of PLN 2 million (about €450,000), and teamed residents up with technical experts to help realize those plans.

Turning to citizens for inspiration is a popular way of generating new ideas and seeing fresh perspectives. The city of Sydney and the New South Wales government in Australia have recently launched an innovative competition that gives citizens the opportunity to submit daring proposals to solve public space problems….(More)”.

The New Net Delusion


Geoff Shullenberger at the New Atlantis: “…The old net delusion was naïve but internally consistent. The new net delusion is fragmented and self-contradictory. It vacillates between radical pessimism about the effects of digital platforms and boosterism when new online happenings seem to revive the old cyber-utopian dreams.

One day, democracy is irreversibly poisoned by social media, which empowers the radical right, authoritarians, and racist, misogynist trolls. The next day, the very same platforms are giving rise to a thrilling resurgence of grassroots activism. The new net delusion more closely resembles a psychotic delusion in the clinical meaning of the word, in which the sufferer often swings between megalomaniacal fantasies of control and panicked sensations of loss of control.

The shift toward a subtle endorsement of manipulation and propaganda — itself an expression of a desire for control — is a result of the fracture of our information ecosystem. The earlier cyber-utopian consensus overrated the value of information in itself and underrated the importance of narratives that bestow meaning on information. The openness of the media system to an endless stream of new users, channels, and data has overwhelmed shared stable narratives, bringing about what L. M. Sacasas calls “narrative collapse.”

But sustaining ideological projects and achieving political ends still requires narratives to extract some meaning from the noise. In the oversaturated attention economy, the most extreme narratives generally stand out. As a result, open networks, which were supposed to counteract propaganda, have instead caused its proliferation — sometimes top-down and state-directed, sometimes crowdsourced, often both.

This helps to explain why the democratization of information channels has been less inimical to authoritarian governments than was anticipated ten years ago. Much like extremists and conspiracy theorists, states with aggressive propaganda arms offer oversimplified messages to keep bewildered online users from having to navigate a swelling tide of data on their own.

Conversely, legacy media, if it remains committed to some degree of neutrality, offers fewer definitive explanatory frameworks, and its messages are accordingly more likely to get lost in the noise. It should not surprise us, then, that news organizations are pivoting toward more overt ideological commitments. Adopting forceful narratives, however well they actually make sense of the world, attracts more eyeballs.

Those who celebrated Twitter and Facebook as vehicles of global liberalization and those who now denounce them as gateways into dangerous extremism (often the same people) have erred in seeing the platforms as causally linked to specific politics, rather than to a particular range of styles of politics. Their deeper mistake, however, is to view freedom and control as opposed, rather than as complementary elements of a system. The expansion of freedom through open networks generates informational chaos that, in turn, feeds a demand for reinvigorated control. We can see the demand for control in the new appeal of extreme, even bizarre views that impose an organizing principle on the chaos.

And we can also see the demand for control in the nostalgia for the old gatekeepers, whose demise was once celebrated. Ironically, the only way for these gatekeepers to stay relevant may be to follow the lead of the authoritarians and activists — to abandon any stance of being neutral and above the fray and instead furnish a cohering narrative of their own….(More)”.

When Do We Trust AI’s Recommendations More Than People’s?


Chiara Longoni and Luca Cian in Harvard Business Review: “More and more companies are leveraging technological advances in machine learning, natural language processing, and other forms of artificial intelligence to provide relevant and instant recommendations to consumers. From Amazon to Netflix to REX Real Estate, firms are using AI recommenders to enhance the customer experience. AI recommenders are also increasingly used in the public sector to guide people to essential services. For example, the New York City Department of Social Services uses AI to give citizens recommendations on disability benefits, food assistance, and health insurance.

However, simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental. When do consumers trust the word of a machine, and when do they resist it? Our research suggests that the key factor is whether consumers are focused on the functional and practical aspects of a product (its utilitarian value) or focused on the experiential and sensory aspects of a product (its hedonic value).

In an article in the Journal of Marketing — based on data from over 3,000 people who took part in 10 experiments — we provide evidence for what we call a word-of-machine effect: the circumstances in which people prefer AI recommenders to human ones.

The word-of-machine effect.

The word-of-machine effect stems from a widespread belief that AI systems are more competent than humans in dispensing advice when utilitarian qualities are desired, and less competent when hedonic qualities are desired. Importantly, the word-of-machine effect is based on a lay belief that does not necessarily correspond to reality. In fact, humans are not necessarily less competent than AI at assessing and evaluating utilitarian attributes, and, conversely, AI is not necessarily less competent than humans at assessing and evaluating hedonic attributes….(More)”.

To Fight Polarization, Ask, “How Does That Policy Work?”


Article by Alex Chesterfield and Kate Coombs: “…One reason for this effect, and for the polarizing outcome, is we often overestimate our understanding of how political policies work. In this case, the more omniscient we think we are, the easier it is to ignore alternative facts or ideas. This phenomenon has a name—the illusion of explanatory depth (IOED). Unless explicitly tested, individuals can remain largely unaware of the shallowness of their own understanding of the things they think they understand—such as the mechanics of a bicycle, or how the policy they support or despise will actually work.

Researchers have started to explore what happens to political attitudes when you explicitly test people on how much they actually know about a policy. When people discover that they don’t know as much as they thought they did, something interesting happens: their political attitudes become less extreme….

Some countries and institutions are already using these insights to improve decision-making on divisive topics. Deliberative democracy, which plays out in the form of citizens’ assemblies and juries, where a small group of people (12–24) comes together to deliberate on an issue, provides time and information to encourage participants to generate explanations—rather than justifications based on values, hearsay, or feelings—for their positions. Participants also tend to be representative of the general population; research suggests that increasing contact between diverse individuals could also help diminish affective polarization by shrinking the prejudices we form when we base our assumptions about the “other” on reductive stereotypes rather than on real, complex people.

Outside of juries and citizens’ assemblies, countries like Ireland have used deliberative democracy to address a range of complex and highly polarized issues, including same-sex marriage, access to abortion, and climate change. U.K. politicians from both sides of the aisle have called for a Brexit assembly to try to break the U.K. political deadlock. Will it work? We don’t know yet, and we’d encourage researchers to continue to study this topic. In the meantime, we can each begin by confronting our own ignorance. Before committing to a position or policy, ask yourself to explain mechanistically how you think it will bring about the intended outcome. Do you really understand it?

Test your own mechanistic reasoning. Pick a topic you feel strongly about: climate change, Brexit, immigration, gun laws, assisted suicide/legal euthanasia. Instead of justifying why you support a particular position so strongly, try to explain how it might lead to a particular outcome….(More)”

Why Doubt Is Essential to Science


Liv Grjebine at Scientific American: “The confidence people place in science is frequently based not on what it really is, but on what people would like it to be. When I asked students at the beginning of the year how they would define science, many of them replied that it is an objective way of discovering certainties about the world. But science cannot provide certainties. For example, a majority of Americans trust science as long as it does not challenge their existing beliefs. To the question “When science disagrees with the teachings of your religion, which one do you believe?,” 58 percent of North Americans favor religion; 33 percent science; and 6 percent say “it depends.”

But doubt in science is a feature, not a bug. Indeed, the paradox is that science, when properly functioning, questions accepted facts and yields both new knowledge and new questions—not certainty. Doubt does not create trust, nor does it help public understanding. So why should people trust a process that seems to require a troublesome state of uncertainty without always providing solid solutions?


As a historian of science, I would argue that it’s the responsibility of scientists and historians of science to show that the real power of science lies precisely in what is often perceived as its weakness: its drive to question and challenge a hypothesis. Indeed, the scientific approach requires changing our understanding of the natural world whenever new evidence emerges from either experimentation or observation. Scientific findings are hypotheses that encompass the state of knowledge at a given moment. In the long run, many of them are challenged and even overturned. Doubt might be troubling, but it impels us towards a better understanding; certainties, as reassuring as they may seem, in fact undermine the scientific process….(More)”.

Science as Scorekeeping



Brendan Foht at New Atlantis: “If there is one thing about the coronavirus pandemic that both sides of the political spectrum seem to agree on, it’s that the science that bears on it should never be “politicized.” From the left, former CDC directors of the Obama and Clinton administrations warn of how the Trump administration has politicized the agency’s science: “The only valid reason to change released guidelines is new information and new science — not politics.” From the right, the Wall Street Journal frets about the scientific journal Nature publishing a politically charged editorial about why China shouldn’t be blamed for the coronavirus: “Political pressure has distorted scientific judgment.” What both sides assume is that political authorities should defer to scientists on important decisions about the pandemic, but only insofar as science itself is somehow kept free from politics.

But politicization, and even polarization, are not always bad for science. There is much about how we can use science to respond to the pandemic that is inescapably political, and that we cannot simply leave to scientists to decide.

There is, however, a real problem with how political institutions in the United States have engaged with science. Too much of the debate over coronavirus science has centered on how bad the disease really is, with the administration downplaying its risks and the opposition insisting on its danger. One side sees the scientists warning of peril as a political obstacle that must be overcome. The other side sees them as authorities to whom we must defer, not as servants of the public who could be directed toward solving the problem. The false choice between these two perspectives on how science relates to politics obscures a wide range of political choices the country faces about how we can make use of our scientific resources in responding to the pandemic….(More)”.

If data is 21st century oil, could foundations be the right owners?


Felix Oldenburg at Alliance: “What are the best investments for a foundation? This important question is one many foundation professionals are revisiting in light of low interest rates, high market volatility, and fears of deep economic trouble ahead. While stories of success certainly exist and are worth learning from, even the notorious lack of data cannot obscure the inconvenient truth that the idea of traditional endowments is in trouble.

I would argue that in order to unleash the potential of foundations, we should turn the question around, perhaps back on its feet: For which assets are foundations the best owners?

In the still dawning digital age, one fascinating answer may stare you right in the face as you read this. How much is your personal data worth? Your social media information and your search and purchase history are the source of much of the market value of the fastest growing sector of our time. A rough estimate of the market valuation of the major social platforms divided by their active users arrives at more than $1,000 per user, without differentiating by location or other factors. This sum is more than the median per capita wealth in about half the world’s countries. And if the trend continues, this value may continue to grow – and with it the big question of how to put one of the most valuable resources of our time to use for the good of all.
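
The arithmetic behind that per-user figure is simple division; the sketch below works through it with placeholder numbers (the market-capitalization and user counts are illustrative assumptions, not figures from the article).

```python
# Back-of-envelope sketch of the "value per user" estimate described above.
# The market-capitalization and user figures are illustrative placeholders,
# not data from the article.

platform_market_cap_usd = 1.5e12   # assumed combined valuation of major social platforms
monthly_active_users = 1.2e9       # assumed combined active users

value_per_user = platform_market_cap_usd / monthly_active_users
print(f"Implied market value per active user: ${value_per_user:,.0f}")
# With these placeholder inputs, the estimate lands above $1,000 per user,
# consistent with the rough figure quoted in the article.
```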

Foundation ownership in the data sector may sound like a wild idea at first. Yet foundations and their predecessors have played the role of purpose-driven owners of critical assets and infrastructures throughout history. Monasteries (called ‘Stifte’ in German, the root of the German word for foundations) have protected knowledge and education in libraries and secured health care in hospitals. Trusts created much of the affordable social housing in the exploding cities of the 19th century. The German Marshall Plan created an endowment for economic recovery that is still in existence today.

The proposition is simple: independent ownership for the good of all, beyond the commercial or national interests of individual corporations or governments, in perpetuity. Acting as guardians of digital commons, data-endowed foundations could negotiate conditions for the commercial use of their assets and invest the income to create equal digital opportunities, power 21st century education, and fight climate change. An ideal model of ownership would also include a form of governance exercised by the users themselves through digital participation and elections. A foundation really only relies on one thing: a stable frame of rights in its legal home country. This is far from a trivial condition, but again history shows how many foundations have survived depressions, wars, and revolutions….(More)”

COVID-19 Is Challenging Medical and Scientific Publishing


Article by Vilas Dhar, Amy Brand & Stefano Bertozzi: “We need a transformation in how early data is shared. But the urgent need for peer-reviewed science, coupled with the potential harms of unreviewed publication, has set the stage for a public discussion on the future of academic publishing. It’s clear that we need rapid, transparent peer review that allows reviewers, authors, and readers to engage with one another, and for dynamic use of technology to accelerate publishing timelines without reducing academic rigor or researcher accountability. However, the field of academic publishing will need significant financial support to catalyze these changes.

Philanthropic organizations, as longtime supporters of scientific research, must be at the vanguard of the effort to fund improvements in how science is curated, reviewed, and published. When the MIT Press first began to address the need for the rapid dissemination of COVID-19-related research and scholarship—by making a selection of relevant e-books and journal articles freely available, as well as developing a new, rapid publication model for books under the imprint First Reads—senior staff were interested in undertaking bolder efforts to address the specific problems engendered by the pandemic. The proliferation of preprints related to COVID-19 was already apparent, as was the danger of unvetted science seeding mainstream media stories with deleterious results.

Rapid Reviews: COVID-19 (RR:C19) is an innovation in open publishing that allows for rigorous, transparent peer review that is publicly shared in advance of publication. We believe that pushing the peer review process further upstream—so that it occurs at the preprint stage—will benefit a wide variety of stakeholders: journalists, clinicians, researchers, and the public at large.  …

With this and future efforts, we’ve identified five key opportunities to align academic publishing priorities with the public good:

  1. Transparency: Redesign and incentivize the peer review process to publish all peer reviews alongside primary research, reducing duplicate reviews and allowing readers and authors to understand and engage with the critiques.
  2. Accountability: The roles of various authors on any given manuscript should be clearly defined and presented for the readers. When datasets are used, one or more of the authors should have explicit responsibility for verifying the integrity of the data and should document that verification process within the paper’s methodology section.
  3. Urgency: Scientific research can be slow moving and time consuming. Publishing data does not have to be. Publishing houses should build networks of experts who are able to dedicate time to scrutinizing papers in a timely manner with the goal of rapid review with rigor.
  4. Digital-First Publishing: While science is a dynamic process of continued learning and exploration, much of scientific publishing conforms to outdated print models. Academic journals should explore opportunities to deploy AI-powered tools to identify peer-reviewers or preprint scholarship and digital publishing platforms to enable more visible communication and collaboration about research findings. Not only can reviews be closer to real-time, but authors can easily respond and modify their work for continuous quality improvement.
  5. Funding: Pioneering new solutions in academic publishing will require significant trial and error, at a time when traditional business models such as library subscriptions are in decline. Philanthropies should step forward to provide catalytic risk financing, testing new models and driving social good outcomes….(More)”.