How Elvis Got Americans to Accept the Polio Vaccine


Hal Hershfield and Ilana Brody at Scientific American: “Campaigns to change behavior thrive on three factors: social influence, social norms and vivid examples…In late 1956, Elvis Presley was on the precipice of global stardom. “Heartbreak Hotel” had reached number one on the charts earlier that year and Love Me Tender, his debut film, would be released in November. In the midst of this trajectory, he was booked as a guest on the most popular TV show at the time, The Ed Sullivan Show. But he wasn’t only there to perform his hits. Before the show started, and in front of the press and Ed Sullivan himself, Presley flashed his swoon-worthy smile, rolled up his sleeves and let a New York state official stick a needle loaded up with the polio vaccine in his arm.

At that point, the polio virus had been ravaging the American landscape for years, and approximately 60,000 children were infected annually. In 1955, hope famously arrived in the form of Jonas Salk’s vaccine. But despite the literally crippling effects of the virus and the promising results of the vaccine, many Americans simply weren’t getting vaccinated. In fact, when Presley appeared on the Sullivan show, immunization levels among American teens were at an abysmal 0.6 percent.

You might think that threats to children’s health and life expectancy would be enough to motivate people to get vaccinated. Yet convincing people to get a vaccine is a challenging endeavor. Intuitively, it seems like it would be wise to have doctors and other health officials communicate the need to receive the vaccine. Or, failing that, we might just need to give people more information about the effectiveness of the vaccine itself…(More)”.

Scholarly publishing needs regulation


Essay by Jean-Claude Burgelman: “The world of scientific communication has changed significantly over the past 12 months. Understandably, the amazing mobilisation of research and scholarly publishing in an effort to mitigate the effects of Covid-19 and find a vaccine has overshadowed everything else. But two other less-noticed events could also have profound implications for the industry and the researchers who rely on it.

On 10 January 2020, Taylor and Francis announced its acquisition of one of the most innovative small open-access publishers, F1000 Research. A year later, on 5 January 2021, another of the big commercial scholarly publishers, Wiley, paid nearly $300 million for Hindawi, a significant open-access publisher in London.

These acquisitions come alongside rapid change in publishers’ functions and business models. Scientific publishing is no longer only about publishing articles. It’s a knowledge industry—and it’s increasingly clear it needs to be regulated like one.

The two giant incumbents, Springer Nature and Elsevier, are already a long way down the road to open access, and have built up impressive in-house capacity. But Wiley, and Taylor and Francis, had not. That’s why they decided to buy young open-access publishers. Buying up a smaller, innovative competitor is a well-established way for an incumbent in any industry to expand its reach, gain the ability to do new things and reinvent its business model—it’s why Facebook bought WhatsApp and Instagram, for example.

New regulatory approach

To understand why this dynamic demands a new regulatory approach in scientific publishing, we need to set such acquisitions alongside a broader perspective of the business’s transformation into a knowledge industry. 

Monopolies, cartels and oligopolies in any industry are a cause for concern. By reducing competition, they stifle innovation and push up prices. But for science, the implications of such a course are particularly worrying. 

Science is a common good. Its products—and especially its spillovers, the insights and applications that cannot be monopolised—are vital to our knowledge societies. This means that having four companies control the worldwide production of car tyres, as they do, has very different implications from an oligopoly in the distribution of scientific outputs. The latter situation would give the incumbents a tight grip on the supply of knowledge.

Scientific publishing is not yet a monopoly, but Europe at least is witnessing the emergence of an oligopoly, in the shape of Elsevier, Springer Nature, Wiley, and Taylor and Francis. The past year’s acquisitions have left only two significant independent players in open-access publishing—Frontiers and MDPI, both based in Switzerland….(More)”.

Privacy and digital ethics after the pandemic


Carissa Véliz at Nature: “The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet [1], and Facebook has shadow profiles on netizens even when they have never had an account on the platform [2].

Citizens have responded to the countless data abuses in the past few years with what has been described as a ‘techlash’ [3]. Tech companies whose business model is based on surveillance ceased to be perceived as good guys in hoodies who offered services to make our lives better. They were instead data predators jeopardizing not only users’ privacy and security but also democracy itself. During lockdown, communication apps became necessary for any and all social interaction beyond our homes. People have had to use online tools to work, get an education, receive medical attention, and enjoy much-needed entertainment. Gratitude for having technology that allows us to stay in contact during such circumstances has thus watered down the general techlash. Big tech’s stocks have been consistently on the rise during the pandemic, in line with its accumulating power.

As a result of the pandemic, however, any lingering illusion of voluntariness in the use of technology has disappeared. It is not only citizens who rely on big tech to perform their jobs: businesses, universities, health services, and governments need the platforms to carry out their everyday functions. All over the world, governmental and diplomatic meetings are being carried out on platforms such as Zoom and Teams. Since governments do not have full control over the platforms they use, confidentiality is uncertain.

Enhanced power asymmetries have also worsened the vulnerability of ordinary citizens in areas that range from interacting with government to ordering food online, and almost everything in between. The pandemic has, for example, led to an increase in the surveillance of employees as they work from home [4]. Students are likewise being subjected to more scrutiny: by their schools and teachers, and above all, by the companies on which they depend [5]. Surveillance for public health purposes has also increased. Privacy losses disempower citizens and often lead to further abuses of power. In the UK, for example, companies collecting data for pubs and restaurants for contact-tracing purposes have sold on that information [6].

Such abuses are not isolated events. For the past two decades, we have allowed an unethical business model that depends on the systematic violation of the right to privacy to run amok. As long as we treat personal data as a commodity, there will be a high risk of it being misused — by being stolen in a hack or by being sold to the highest bidder (which often includes nefarious agents)….(More)”.

These crowdsourced maps will show exactly where surveillance cameras are watching


Mark Sullivan at FastCompany: “Amnesty International is producing a map of all the places in New York City where surveillance cameras are scanning residents’ faces.

The project will enlist volunteers to use their smartphones to identify, photograph, and locate government-owned surveillance cameras capable of shooting video that could be matched against people’s faces in a database through AI-powered facial recognition.

The map that will eventually result is meant to give New Yorkers the power of information against an invasive technology whose use and purpose are often not fully disclosed to the public. It’s also meant to put pressure on the New York City Council to write and pass a law restricting or banning it. Other U.S. cities, such as Boston, Portland, and San Francisco, have already passed such laws.

Facial recognition technology can be developed by scraping millions of images from social media profiles and driver’s licenses without people’s consent, Amnesty says. Software from companies like Clearview AI can then use computer vision algorithms to match those images against facial images captured by closed-circuit television (CCTV) or other video surveillance cameras and stored in a database.
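
To make the matching step concrete, the sketch below shows, in a few lines of Python, how an embedding-based face search can work in principle. It is purely illustrative: the database, the similarity threshold and the toy data are assumptions, not a description of Clearview AI’s or any vendor’s actual system.

```python
# Illustrative sketch only: a toy embedding-based face search, not any vendor's
# actual pipeline. It assumes some pretrained network has already mapped each face
# image (scraped reference photos and CCTV frames alike) to a fixed-length vector,
# so that "matching" reduces to a nearest-neighbour search over those vectors.
import numpy as np

def find_matches(probe: np.ndarray, database: dict[str, np.ndarray], threshold: float = 0.6):
    """Return (identity, similarity) pairs whose stored embedding is close to the probe."""
    matches = []
    for identity, stored in database.items():
        # Cosine similarity between the CCTV-derived probe and a stored reference embedding.
        sim = float(probe @ stored / (np.linalg.norm(probe) * np.linalg.norm(stored)))
        if sim >= threshold:
            matches.append((identity, sim))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
database = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = database["person_a"] + 0.1 * rng.normal(size=128)  # a noisy view of person_a
print(find_matches(probe, database))
```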

Starting in May, volunteers will be able to use a software tool to identify all the facial recognition cameras within their view—like at an intersection where numerous cameras can often be found. The tool, which runs on a phone’s browser, lets users place a square around any cameras they see. The software integrates Google Street View and Google Earth to help volunteers label and attach geolocation data to the cameras they spot.
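
For illustration, a single crowdsourced sighting of this kind might reduce to a small structured record combining a description of the camera with the geolocation the volunteer attaches to it. The sketch below is a guess at what such a record could look like; the field names are hypothetical rather than taken from Amnesty’s actual tool.

```python
# Hypothetical sketch of the kind of record such a crowdsourcing tool might submit:
# a labelled camera sighting plus the geolocation the volunteer attaches to it.
# The field names are invented for illustration, not Amnesty's actual schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraObservation:
    latitude: float        # geolocation attached with help from map imagery
    longitude: float
    camera_type: str       # e.g. "dome", "bullet", "unknown"
    mounted_on: str        # e.g. "lamppost", "building facade", "traffic signal"
    notes: str = ""

obs = CameraObservation(
    latitude=40.7580,
    longitude=-73.9855,
    camera_type="dome",
    mounted_on="lamppost",
    notes="intersection, facing south",
)
print(json.dumps(asdict(obs), indent=2))  # the payload a volunteer's phone might send
```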

The map is part of a larger campaign called “Ban the Scan” that’s meant to educate people around the world on the civil rights dangers of facial recognition. Research has shown that facial recognition systems aren’t as accurate when it comes to analyzing dark-skinned faces, putting Black people at risk of being misidentified. Even when accurate, the technology exacerbates systemic racism because it is disproportionately used to identify people of color, who are already subject to discrimination by law enforcement officials. The campaign is sponsored by Amnesty in partnership with a number of other tech advocacy, privacy, and civil liberties groups.

In the initial phase of the project, which was announced last Thursday, Amnesty and its partners launched a website that New Yorkers can use to generate public comments on the New York Police Department’s (NYPD’s) use of facial recognition….(More)”.

Consensus or chaos? Pandemic response hinges on trust, experts say


Article by Catherine Cheney: “Trust is a key reason for the wide variance in how countries have fared during the COVID-19 pandemic, determining why some have succeeded in containing the virus while others have failed, according to new research on responses across 23 countries.

The work, supported by Schmidt Futures and the National Science Foundation and carried out by teams at Columbia, Harvard, and Cornell Universities, studied national responses to COVID-19 based on public health, economy, and politics.

It organizes countries into three categories: control, consensus, and chaos. The researchers call the United States the leading example of high levels of polarization, decentralized decision-making, and distrust in expertise leading to policy chaos. The category also includes Brazil, India, Italy, and the United Kingdom.

To prepare for future pandemics, countries must build trust in public health, government institutions, and expert advice, according to a range of speakers at last week’s Futures Forum on Preparedness. Schmidt Futures, which co-hosted the event, announced that it is launching a new challenge to source the best ideas from around the world for developing trust in public health interventions. This request for proposals is likely just the beginning as funders explore how to learn from the pandemic and build trust moving forward….(More)”.

A Premature Eulogy for Privacy


Review Article by Evan Selinger: “Every so often a sound critical thinker and superb writer asks the wrong questions of a wheel-spinning topic like privacy and then draws the wrong conclusions. This is the case with Firmin DeBrabander in his Life After Privacy: Reclaiming Democracy in a Surveillance Society. Professor of Philosophy at the Maryland Institute College of Art, DeBrabander has a gift for clearly expressing complex ideas and explaining why underappreciated moments in the history of ideas have contemporary relevance. In this book, he aims “to understand the prospects and future of democracy without privacy, or very little of it.” That attempt necessarily leads him both to undervalue privacy and make a case for accepting a severely weakened democracy.

To be sure, DeBrabander doesn’t dismiss privacy with any sort of enthusiasm. On the contrary, he loves his privacy, depicting himself as someone who has to block his “beloved” but overly disclosive students on Facebook. “If I had my druthers, my personal data would be sacrosanct,” he writes. But he is convinced privacy is a lost cause. He makes what are essentially six claims about privacy — some of them seemingly obvious and others more startling — to buttress his eulogy.

Prosecuting Privacy

It’s worth listing DeBrabander’s six propositions here before rebutting, or at least complicating, them. The first privacy proposition repeats what has become a seeming truism: we’re living in a “confessional culture” that normalizes oversharing. His second has two components, both much discussed in the last several years: companies participating in the “surveillance economy” have an insatiable appetite for our personal information, and consumers don’t fully comprehend just how much value these companies are able to extract from it through data analytics. He emphasizes Charles Duhigg’s much-discussed 2012 New York Times article about Target using predictive analytics on big data to identify pregnant customers and present them with relevant coupons. Consumers, he contends, will continue giving away massive amounts of personal information to make their cars, homes, cities, and even bodies smarter; it’s likely they won’t be any more equipped in the future to assess tradeoffs and determine when they’re being exploited….(More)”.

We need a new era of international data diplomacy


Rohinton P. Medhora at the Financial Times: “From contact-tracing apps to telemedicine, digital health innovations that can help tackle coronavirus have been adopted swiftly during the pandemic. Lagging far behind, however, are any investigations of their reliability and the implications for privacy and human rights.

In the wake of this surge in “techno-solutionism”, the world needs a new era of data diplomacy to catch up.

Big data holds great promise in improving health outcomes. But it requires norms and standards to govern collection, storage and use, for which there is no global consensus. 

The world broadly comprises four data zones — China, the US, the EU and the remainder. The state-centric China zone, where individuals have no control over their personal data, is often portrayed as the poster child of the long-threatened Orwellian society.

Yet the corporation-centric US zone is also disempowering. The “consent” that users provide to companies is meaningless. Most consumers do not read the endless pages of fine print before “agreeing”, while not consenting means opting out of the digital world, which is seldom a realistic option.

The EU’s General Data Protection Regulation goes furthest in entrenching the rights of EU citizens to safeguard their privacy and provide a measure of control over personal data.

But it is not without drawbacks. Costs of compliance are high, with small and medium-sized companies facing a disproportionately large bill that strengthens the large companies that the regulation was designed to rein in. There are also varying interpretations of the rules by different national data protection authorities.

The rest of the world does not have the capacity to create meaningful data governance. Governments are either de facto observers of others’ rules or stumble along with a non-regime. One-fifth of countries have no data protection and privacy legislation, according to figures from Unctad, the UN’s trade and development agency.

Global diplomacy is needed to bring some harmony in norms and practices between these four zones, but the task is not easy. Data straddles our prosperity, health, commerce, quality of democracy, security and safety.

A starting point could be a technology charter of principles, such as the Universal Declaration of Human Rights. It may not be fully applied everywhere, but it could serve as a beacon of hope — particularly for citizens in countries with oppressive regimes — and could guide the drafting of national and subnational legislation.

A second focus should be the equitable taxation of multinational digital platforms that use canny accounting practices to cut their tax bill. While the largest share of users — and one that is growing fast — is in populous poorer parts of the world, the value created from their data goes to richer countries.

This imbalance, coupled with widespread use of tax havens by multinational technology companies, is exacerbating government funding gaps already under pressure because of the pandemic.

A third priority is to revisit statistics. Just as the UN System of National Accounts was introduced in the 1950s, today we need a set of universally accepted definitions and practices to categorise data.

That would allow us to measure and understand the nature of the new data-driven economy. National statistical agencies must be strengthened to gather information and to act as stewards of ever greater quantities of personal data.

Finally, just as the financial crisis of 2007-08 led to the creation of the Financial Stability Forum (a global panel of regulators now called the Financial Stability Board), the Covid-19 crisis is an opportunity to galvanise action through a digital stability board….(More)”

Switzerland to Hold Referendum on Covid-19 Lockdown


James Hookway at the Wall Street Journal: “Switzerland’s system of direct democracy will be put to the test again later this year, this time with a referendum on whether to roll back the government’s powers to impose lockdowns and other measures to slow the Covid-19 pandemic.

The landlocked Alpine nation of 8.5 million people is unusual in providing its people a say on important policy moves by offering referendums if enough people sign a petition for a vote. Last year, the Swiss voted on increasing the stock of low-cost housing, tax allowances for children and hunting wolves.

The idea is to provide citizens a check on the power of the federal government, and it is a throwback to the fiercely independent patchwork of cantons, or districts, that were meshed in the medieval period.

Now, the country is set for a referendum on whether to remove the government’s legal authority to order lockdowns and other pandemic restrictions after campaigners submitted a petition of some 86,000 signatures this week—more than the 50,000 required—triggering a nationwide vote to repeal last year’s Covid-19 Act….(More)”.

The pandemic has pushed citizen panels online


Article by Claudia Chwalisz: “…Until 2020, most assemblies took place in person. We know what they require to produce useful recommendations and gain public trust: time (usually many days over many months), access to broad and varied information, facilitated discussion, and transparency. Successful assemblies take on a pressing public issue, secure politicians’ commitment to respond, have mechanisms to ensure independence, and provide facilities such as stipends and childcare, so all can participate. The diversity of people in the room is what delivers the magic of collective intelligence.

However, the pandemic has forced new approaches. Online discussions might be in real time or asynchronous; facilitators and participants might be identifiable or anonymous. My team at the OECD is exploring how virtual deliberation works best. We have noticed a shift: from text-based interactions to video; from an emphasis on openness to one on representativeness; and from individual to group deliberation.

Some argue that online deliberation is less expensive than in-person processes, but the costs are similar when designed to be as democratic as possible. The new wave pays much more attention to inclusivity. For many online citizens’ assemblies this year (for example, in Belgium, Canada and parts of the United Kingdom), participants without equipment were given computers or smartphones, along with training and support to use them. A digital mediator is now essential for any plans to conduct online deliberation inclusively.

Experiments have also started to transcend national borders. Last October, the German Bertelsmann Stiftung, a private foundation for political reform, and the European Commission ran a Citizens’ Dialogue with 100 randomly selected citizens from Denmark, Germany, Ireland, Italy and Lithuania. They spent three days discussing Europe’s democratic, digital and green future. The Global Citizens’ Assembly on Genome Editing will take place in 2021–22, as will the Global Citizens’ Assembly for the United Nations Climate Change Conference.

However, virtual meetings do not replace in-person interactions. Practitioners adapting assemblies to the virtual world warn that online processes could push people into more linear and binary thinking through voting tools, rather than seeking a nuanced understanding of other people’s reasoning and values….(More)”.

The Protein and the Social Worker: How machine learning can help us tackle society’s biggest challenges


Article by Juan Mateos-Garcia: “The potential for machine learning (ML) to address our toughest health, education and sustainability issues remains unfulfilled. What lessons about what to do – and what not to do – can we learn from other sectors where ML has been applied at scale?

Last year, the UK research lab DeepMind announced that its AI system, AlphaFold 2, can predict a protein’s 3D structure with an unprecedented level of accuracy. This breakthrough could enable rapid advances in drug discovery and environmental applications.

Like almost all AI systems today, AlphaFold 2 is based on ML techniques that learn from data to make predictions. These ‘prediction machines’ are at the heart of internet products and services we use every day, from search engines and social networks to personal assistants and online stores. In years to come, ML is expected to transform other sectors including transportation (through self-driving vehicles), biomedical research (through precision medicine) and manufacturing (through robotics).

But what about fields such as healthy living, early years development or sustainability, where our societies face some of their greatest challenges? Predictive ML techniques could also play an important role there – by helping identify pupils at risk of falling behind, or by personalising interventions to encourage healthier behaviours. However, its potential in these areas is still far from being realised….(More)”.
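
As a toy illustration of what such a ‘prediction machine’ could look like in an education setting, the sketch below trains a simple classifier to flag pupils at risk of falling behind. Every feature, data point and threshold here is hypothetical, and a real deployment would demand far more care over data quality, fairness and consent than a few lines of code can convey.

```python
# Toy illustration of a "prediction machine" in an education setting: a simple
# classifier that flags pupils at risk of falling behind. The features, data and
# risk threshold are entirely hypothetical, invented purely for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per pupil: [attendance rate, prior test score, homework completion]
X_train = np.array([
    [0.95, 0.80, 0.90],
    [0.60, 0.55, 0.40],
    [0.88, 0.70, 0.75],
    [0.50, 0.45, 0.30],
])
y_train = np.array([0, 1, 0, 1])  # 1 = fell behind in an earlier cohort, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

# Score current pupils and flag those above a chosen risk threshold, so support
# can be offered early rather than after grades have already slipped.
X_current = np.array([[0.70, 0.60, 0.50]])
risk = model.predict_proba(X_current)[:, 1]
print(risk, risk > 0.5)
```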