Public Value Science


Barry Bozeman in Issues in Science and Technology: “Why should the United States government support science? That question was apparently settled 75 years ago by Vannevar Bush in Science, the Endless Frontier: “Since health, well-being, and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.”

Having dispensed with the question of why, all that remained was for policy-makers to decide, how much? Even at the dawn of modern science policy, costs and funding needs were at the center of deliberations. Though rarely discussed anymore, Endless Frontier did give specific attention to the question of how much. The proposed amounts seem, by today’s standards, modest: “It is estimated that an adequate program for Federal support of basic research in the colleges, universities, and research institutes and for financing important applied research in the public interest, will cost about 10 million dollars at the outset and may rise to about 50 million dollars annually when fully underway at the end of perhaps 5 years.”

In today’s dollars, $50 million translates to about $535 million, or less than 2% of what the federal government actually spent for basic research in 2018. One way to look at the legacy of Endless Frontier is that by answering the why question so convincingly, it logically followed that the how much question could always be answered simply by “more.”

In practice, however, the why question continues to seem so self-evident because it fails to consider a third question, who? As in, who benefits from this massive federal investment in research, and who does not? The question of who was also seemingly answered by Endless Frontier, which not only offered full employment as a major goal for expanded research but also embraced “the sound democratic principle that there should be no favored classes or special privilege.”

But I argue that this principle has now been soundly falsified. In an economic environment characterized by growth but also by extreme inequality, science and technology not only reinforce inequality but also, in some instances, help widen the gap. Science and technology can be a regressive factor in the economy. Thus, it is time to rethink the economic equation justifying government support for science not just in terms of why and how much, but also in terms of who.

What logic supports my claim that under conditions of conspicuous inequality, science and technology research is often a regressive force? Simple: except in the case of the most basic of basic research (such as exploration of other galaxies), effects are never randomly distributed. Both the direct and indirect effects of science and technology tend to differentially affect citizens according to their socioeconomic power and purchasing power….(More)”.

Intentional and Unintentional Sludge


Essay by Crawford Hollingworth and Liz Barker: “…Both of these stories are illustrations of what many mums and gymgoers may have experienced across the United Kingdom and United States as they tried to cope with the pandemic. We, along with other behavioral scientists, would label both as sludge—when users face high levels of friction obstructing their efforts to achieve something that is in their best interest, or are misled or encouraged to take action that is not in their best interest.

We can think of what the English mum goes through as unintentional sludge—friction due to factors like rushed design, poor infrastructure, and inadequate oversight. The mother is trying to access a benefit that will help her, that she has a right to claim, and that the government genuinely wants her to access. Yet multiple barriers prevent her from obtaining the voucher that would help feed her children. Millions of parents found themselves in this situation as schools closed in England earlier this year. All over the country, schools ended up paying for food parcels and gift vouchers out of their own budgets to help families who were going hungry.

What the New York gym-goer faces is different. It is intentional sludge—friction put in place knowingly to benefit an organization at the expense of the user. The gym doesn’t want him to cancel the membership, which would mean lost revenue. Even absent the pandemic, the cancellation process would be considered unnecessarily difficult. The gym’s hope is that people forget, give up, or don’t bother canceling in person or over the phone, or that it takes them longer to do so. This translates into revenue for them, without any of the costs of providing a service. Stories like this have resulted in class-action lawsuits against companies that make it overly difficult or impossible to cancel gym memberships. One lawsuit alleged that one large gym company was stealing over $30 million per month from customers….(More)”.

Homo informaticus


Essay by Luc de Brabandere: “The history of computer science did not begin eighty years ago with the creation of the first electronic computer. To program a computer to process information – or in other words, to simulate thought – we need to be able to understand, dismantle and disassemble thoughts. In IT-speak, in order to encrypt a thought, we must first be able to decrypt it! And this willingness to analyse thought already existed in ancient times. So the principles, laws, and concepts that underlie computer science today originated in an era when the principles of mathematics and logic each started on their own paths, around their respective iconic thinkers, such as Plato and Aristotle. Indeed, the history of computer science could be described as fulfilling the dream of bringing mathematics and logic together. This dream was highlighted for the first time during the thirteenth century by Raymond Lulle, a theologian and missionary from Majorca, but it became the dream of Gottfried Leibniz in particular. This German philosopher wondered why these two fields had evolved side by side separately since ancient times, when both seemed to strive for the same goal. Mathematicians and logicians both wish to establish undeniable truths by fighting against errors of reasoning and implementing precise laws of correct thinking. The Hungarian journalist, essayist and Nobel laureate Arthur Koestler called this shock (because it always is a shock) of an original pairing of two apparently very separate things a bisociation.

We know today that the true and the demonstrable will always remain distinct, so to that extent, logic and mathematics will always remain fundamentally irreconcilable. In this sense Leibniz’s dream will never come true. But three other bisociations, admittedly less ambitious, have proved to be very fruitful, and they structure this short history. Famous Frenchman René Descartes reconciled algebra and geometry; the British logician George Boole brought algebra and logic together; and an American engineer from MIT, Claude Shannon, bisociated binary calculation with electronic relays.

Presented as such, the history of computer science resembles an unexpected remake of Four Weddings and a Funeral! Let’s take a closer look….(More)”.

Data Disappeared


Essay by Samanth Subramanian: “Whenever President Donald Trump is questioned about why the United States has nearly three times more coronavirus cases than the entire European Union, or why hundreds of Americans are still dying every day, he whips out one standard comment. We find so many cases, he contends, because we test so many people. The remark typifies Trump’s deep distrust of data: his wariness of what it will reveal, and his eagerness to distort it. In March, when he refused to allow coronavirus-stricken passengers off the Grand Princess cruise liner and onto American soil for medical treatment, he explained: “I like the numbers where they are. I don’t need to have the numbers double because of one ship.” Unable—or unwilling—to fix the problem, Trump’s instinct is to fix the numbers instead.

The administration has failed on so many different fronts in its handling of the coronavirus, creating the overall impression of sheer mayhem. But there is a common thread that runs through these government malfunctions. Precise, transparent data is crucial in the fight against a pandemic—yet through a combination of ineptness and active manipulation, the government has depleted and corrupted the key statistics that public health officials rely on to protect us.

In mid-July, just when the U.S. was breaking and rebreaking its own records for daily counts of new coronavirus cases, the Centers for Disease Control and Prevention found itself abruptly relieved of its customary duty of collating national numbers on COVID-19 patients. Instead, the Department of Health and Human Services instructed hospitals to funnel their information to the government via TeleTracking, a small Tennessee firm started by a real estate entrepreneur who has frequently donated to the Republican Party. For a while, past data disappeared from the CDC’s website entirely, and although it reappeared after an outcry, it was never updated thereafter. The TeleTracking system was riddled with errors, and the newest statistics sometimes appeared after delays. This has severely limited the ability of public health officials to determine where new clusters of COVID-19 are blooming, to notice demographic patterns in the spread of the disease, or to allocate ICU beds to those who need them most.

To make matters more confusing still, Jared Kushner moved to start a separate coronavirus surveillance system run out of the White House and built by health technology giants—burdening already-overwhelmed officials and health care experts with a needless stream of queries. Kushner’s assessments often contradicted those of agencies working on the ground. When Andrew Cuomo, New York’s governor, asked for 30,000 ventilators, Kushner claimed the state didn’t need them: “I’m doing my own projections, and I’ve gotten a lot smarter about this.”…(More)”.

Consumer Bureau To Decide Who Owns Your Financial Data


Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.

The Consumer Financial Protection Bureau (CFPB) is looking to implement the provision of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to a consumer’s rights to his or her own financial data, detailed in Section 1033.

The agency has been laying the groundwork on this move for years, from requesting information in 2016 from financial institutions to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.

Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.

With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices that ensure consumers have access to their financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.

Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.

Covid-19 Data Is a Mess. We Need a Way to Make Sense of It.


Beth Blauer and Jennifer Nuzzo in the New York Times: “The United States is more than eight months into the pandemic and people are back waiting in long lines to be tested as coronavirus infections surge again. And yet there is still no federal standard to ensure testing results are being uniformly reported. Without uniform results, it is impossible to track cases accurately or respond effectively.

We test to identify coronavirus infections in communities. We can tell if we are casting a wide enough net by looking at test positivity — the percentage of people whose results are positive for the virus. The metric tells us whether we are testing enough or if the transmission of the virus is outpacing our efforts to slow it.

If the percentage of tests coming back positive is low, it gives us more confidence that we are not missing a lot of infections. It can also tell us whether a recent surge in cases may be a result of increased testing, as President Trump has asserted, or that cases are rising faster than the rate at which communities are able to test.

But to interpret these results properly, we need a national standard for how these results are reported publicly by each state. And although the Centers for Disease Control and Prevention issue protocols for how to report new cases and deaths, there is no uniform guideline for states to report testing results, which would tell us about the universe of people tested so we know we are doing enough testing to track the disease. (Even the C.D.C. was found in May to be reporting states’ results in a way that presented a misleading picture of the pandemic.)

Without a standard, states are deciding how to calculate positivity rates on their own — and their approaches are very different.

Some states include results from positive antigen-based tests, some states don’t. Some report the number of people tested, while others report only the number of tests administered, which can skew the overall results when people are tested repeatedly (as, say, at colleges and nursing homes)….(More)”
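The denominator problem the authors describe can be made concrete with a short sketch. The function names and the sample data below are our own illustration, not any state’s actual methodology: it shows how repeated testing of the same (mostly negative) people — as at a college or nursing home — drags down a per-test positivity rate relative to a per-person one.

```python
def positivity_by_tests(results):
    """Positivity with tests administered as the denominator.
    results: list of (person_id, is_positive) tuples, one per test."""
    positives = sum(1 for _, pos in results if pos)
    return positives / len(results)

def positivity_by_people(results):
    """Positivity with people tested as the denominator.
    A person counts once, and counts as positive if any test was positive."""
    by_person = {}
    for pid, pos in results:
        by_person[pid] = by_person.get(pid, False) or pos
    return sum(by_person.values()) / len(by_person)

# Hypothetical data: three students are screened weekly (all negative),
# while two symptomatic people are each tested once and test positive.
tests = [("s1", False)] * 4 + [("s2", False)] * 4 + [("s3", False)] * 4 \
      + [("p1", True), ("p2", True)]

print(round(positivity_by_tests(tests), 3))   # 2 positives / 14 tests  -> 0.143
print(round(positivity_by_people(tests), 3))  # 2 positives / 5 people  -> 0.4
```

Both numbers are defensible, but they answer different questions — which is why a national reporting standard matters when states’ figures are compared side by side.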

tl;dr: this AI sums up research papers in a sentence


Jeffrey M. Perkel & Richard Van Noorden at Nature: “The creators of a scientific search engine have unveiled software that automatically generates one-sentence summaries of research papers, which they say could help scientists to skim-read papers faster.

The free tool, which creates what the team calls TLDRs (the common Internet acronym for ‘Too long, didn’t read’), was activated this week for search results at Semantic Scholar, a search engine created by the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. For the moment, the software generates sentences only for the ten million computer-science papers covered by Semantic Scholar, but papers from other disciplines should be getting summaries in the next month or so, once the software has been fine-tuned, says Dan Weld, who manages the Semantic Scholar group at AI2…

Weld was inspired to create the TLDR software in part by the snappy sentences his colleagues share on Twitter to flag up articles. Like other language-generation software, the tool uses deep neural networks trained on vast amounts of text. The team first trained it on tens of thousands of research papers matched to their titles, so that the network could learn to generate concise sentences. The researchers then fine-tuned the software to summarize content by training it on a new data set of a few thousand computer-science papers with matching summaries, some written by the papers’ authors and some by a class of undergraduate students. The team has gathered training examples to improve the software’s performance in 16 other fields, with biomedicine likely to come first.

The TLDR software is not the only scientific summarizing tool: since 2018, the website Paper Digest has offered summaries of papers, but it seems to extract key sentences from text, rather than generate new ones, Weld notes. TLDR can generate a sentence from a paper’s abstract, introduction and conclusion. Its summaries tend to be built from key phrases in the article’s text, so are aimed squarely at experts who already understand a paper’s jargon. But Weld says the team is working on generating summaries for non-expert audiences….(More)”.
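The extractive-versus-abstractive distinction above can be illustrated with a toy key-sentence picker — our own sketch, bearing no relation to the actual TLDR or Paper Digest code. An extractive system selects an existing sentence (here, the one whose words best match the text’s overall word frequencies); an abstractive system like TLDR instead generates a new sentence with a trained neural network:

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
        "that", "we", "for", "on", "it", "with", "by"}

def extractive_one_liner(text):
    """Return the existing sentence whose words best represent the text:
    a crude stand-in for extractive summarization."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Word frequencies over the whole text, ignoring common stopwords.
    freq = Counter(w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP)

    def score(sentence):
        toks = [t for t in re.findall(r"[a-z']+", sentence.lower()) if t not in STOP]
        # Average frequency of a sentence's content words.
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    return max(sentences, key=score)

abstract = ("We introduce a neural model that writes one-sentence summaries "
            "of research papers. The model is trained on papers paired with "
            "their titles. The model produces concise summaries of papers.")
print(extractive_one_liner(abstract))
# -> The model produces concise summaries of papers.
```

Note that the output can only ever be a sentence already present in the input — which is why extractive summaries inherit the source’s jargon, and why generating genuinely new sentences for non-expert audiences is the harder problem Weld’s team is still working on.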

‘It gave me hope in democracy’: how French citizens are embracing people power


Peter Yeung at The Guardian: “Angela Brito was driving back to her home in the Parisian suburb of Seine-et-Marne one day in September 2019 when the phone rang. The 47-year-old caregiver, accustomed to emergency calls, pulled over in her old Renault Megane to answer. The voice on the other end of the line informed her she had been randomly selected to take part in a French citizens’ convention on climate. Would she, the caller asked, be interested?

“I thought it was a real prank,” says Brito, a single mother of four who was born in the south of Portugal. “I’d never heard anything about it before. But I said yes, without asking any details. I didn’t believe it.”

Brito received a letter confirming her participation but she still didn’t really take it seriously. On 4 October, the official launch day, she got up at 7am as usual and, while driving to meet her first patient of the day, heard a radio news item on how 150 ordinary citizens had been randomly chosen for this new climate convention. “I said to myself, ah, maybe it was true,” she recalls.

At the home of her second patient, a good-humoured old man in a wheelchair, the TV news was on. Images of the grand Art Déco-style Palais d’Iéna, home of the citizens’ gathering, filled the screen. “I looked at him and said, ‘I’m supposed to be one of those 150,’” says Brito. “He told me, ‘What are you doing here then? Leave, get out, go there!’”

Brito had two hours to get to the Palais d’Iéna. “I arrived a little late, but I arrived!” she says.

Over the next nine months, Brito would take part in the French citizens’ convention for the climate, touted by Emmanuel Macron as an “unprecedented democratic experiment”, which would bring together 150 people aged 16 upwards, from all over France and all walks of French life – to learn, debate and then propose measures to reduce greenhouse gas emissions by at least 40% by 2030. By the end of the process, Brito and her fellow participants had convinced Macron to pledge an additional €15bn (£13.4bn) to the climate cause and to accept all but three of the group’s 149 recommendations….(More)”.

Facial-recognition research needs an ethical reckoning


Editorial in Nature: “…As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology, warning of discrimination, and joining the campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider where and how it should be used. The technology might well have benefits, but these need to be assessed against the risks, which is why it needs to be properly and carefully regulated.

Responsible studies

Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that scientists should not be doing certain types of research. Many are angry about academic studies that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale.

Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality….One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on both the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge around classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as ‘fairness’ or ‘respect for autonomy’ actually mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be much more helpful to prevent ethics guidance becoming little more than window-dressing….(More)”.

Digital Democracy’s Road Ahead


Richard Hughes Gibson at the Hedgehog Review: “In the last decade of the twentieth century, as we’ve seen, Howard Rheingold and William J. Mitchell imagined the Web as an “electronic agora” where netizens would roam freely, mixing business, pleasure, and politics. Al Gore envisioned it as an “information superhighway” system for which any computer could offer an onramp. Our current condition, by contrast, has been likened to shuffling between “walled gardens,” each platform—be it Facebook, Apple, Amazon, or Google—being its own tightly controlled ecosystem. Yet even this metaphor is perhaps too benign. As the cultural critic Alan Jacobs has observed, “they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell.”

Harvard Business School professor Shoshana Zuboff has dubbed the business model underlying these factories “surveillance capitalism.” Surveillance capitalism works by collecting information about you (your Internet activity, call history, app usage, your voice, your location, even your fitness level), which creates profiles of what you like, where you go, who you know, and who you are. That shadowy portrait makes a powerful tool for predicting what kinds of products and services you might like to purchase, and other companies are happy to pay for such finely tuned targeted advertising. (Facebook alone generated $69 billion in ad revenue last year.)

The information-gathering can’t ever stop, however; the business model depends on a steady supply of new user data to inform the next round of predictions. This “extraction imperative,” as Zuboff calls it, is inherently monopolistic, rival companies being both a threat that must be eliminated and a potential gold mine from which more user data can be extracted (see Facebook’s acquisitions of competitors WhatsApp and Instagram). Equally worrying, the big tech companies have begun moving into other sectors of the economy, as seen, for example, in Google’s quiet entry last year into the medical records business (unbeknownst to the patients and physicians whose data was mined).

There is growing consensus among legal scholars and social scientists that these practices are hazardous to democracy. Commentators worry over the consequences of putting so much wealth in so few hands so quickly (Zuboff calls it a “new Gilded Age”). They note the number of tech executives who’ve gone on to high-ranking government posts and vice versa. They point to the fact that—contrary to Mark Zuckerberg’s 2010 declaration that privacy is no longer a “social norm”—users are indeed worried about privacy. Scholars note, furthermore, that these platforms are not a genuine reflection of public opinion, though they are often treated as such. Social media can operate as echo chambers, only showing you what people like you read, think, do. Paradoxically, they can also become pressure cookers. As is now widely documented, many algorithms reward—and thereby amplify—the most divisive and thus most attention-grabbing content. Keeping us dialed in—whether for the next round of affirmation or outrage—is essential to their success….(More)”.