More Than Nudges Are Needed to End the Pandemic


Richard Thaler in the New York Times: “…In the case of Covid vaccinations, society cannot afford to wait decades. Although vaccines are readily available and free for everyone over age 12 in the United States, there are many holdouts. About 40 percent of the adult population has not been fully vaccinated, and about a third has not yet gotten even one dose. It is time to get serious.

Of course, information campaigns must continue to stress the safety and efficacy of the vaccines, but it is important to target the messages at the most hesitant groups. It would help if the F.D.A. gave the vaccines its full approval rather than the current emergency use designation. Full approval for the Pfizer drug may come as soon as Labor Day, but the process for the other vaccines is much further behind.

One way to increase vaccine take-up would be to offer monetary incentives. For example, President Biden has recently advocated paying people $100 to get their shots.

Although this policy is well intended, I believe it is a mistake for a state or a country to offer to pay individuals to get vaccinated. First of all, the amount might be taken to be an indicator of how much — or little — the government thinks getting a jab is worth. Surely the value to society of increased vaccinations is well beyond $100 per person.

Second, it seems increasingly likely that one or more booster shots may be necessary for some populations in the United States to deal with the Delta variant of the coronavirus — and, perhaps, other variants as well. If that happens, we don’t want some people to procrastinate, hoping to get paid. Government-sponsored booster shots are already beginning in Israel and are at various stages of planning in several European countries.

An alternative model is being offered by the National Football League, which has stopped short of requiring players to be vaccinated but is offering plenty of incentives. Unvaccinated players have to be tested every day, must be masked and at a distance from teammates on flights, and must stay in their room until game day. Vaccinated players who test positive and are asymptomatic can return to duty after two negative tests 24 hours apart. But unvaccinated players must undergo a 10-day isolation period.

These incentives followed a long effort to educate the players about the benefits to themselves, their families and fellow players. It is hard to say which aspect of the N.F.L. plan is doing the work, but over 90 percent of the league’s players have received at least one jab. The fact that a team could lose a game because an unvaccinated player can’t play creates a powerful group dynamic…(More)”.

The A, B and C of Democracy: Or Cats in the Sack


Book by Luca Belgiorno-Nettis and Kyle Redman: “This is a learner’s guide to a better democracy. Sounds ambitious? It is. The catalyst for publishing this book is obvious. There’s no need to regurgitate the public’s disaffection with politics. Mired in the tawdry mechanics of political campaigning, and incapable of climbing out of cyclical electioneering contests, representative democracies are stuck in a rut.

As Dawn Nakagawa, Vice President of the Berggruen Institute, writes, ‘Democratic reform is hard. We are very attached to our constitutions and institutions, even to the point of romanticising it all.’

This handbook is an introduction to minipublics – otherwise known as citizens’ juries or assemblies – interspersed with a few travel anecdotes to share the momentum behind the basic methodology of deliberative democracy.

As the world accelerates into its digital future, with new modes of working, connecting and living, our parliaments remain relics from a primordial, ideological and adversarial age. Meanwhile, urgent challenges are stumbling to half-solutions in slow motion. Collaboration amongst us humans in the Anthropocene is no longer just a nice-to-have….(More)”.

Crowdsourced Sensor Data Powers Smoke Map


OpenGov: “The Environmental Protection Agency and the U.S. Forest Service (USFS) have released updates to the AirNow Fire and Smoke Map to help protect communities from the effects of wildfire smoke. Started as a pilot project last year, the map pulls data from three sources: temporary monitors such as those the Forest Service and other agencies have deployed near fires; crowdsourced data from nearly 10,000 low-cost sensors nationwide that measure fine particle pollution, the major harmful pollutant in smoke; and monitors that regularly report to AirNow, EPA’s one-stop source for air quality data.

The agencies announced improvements to the map, including a dashboard that gives users quick access to information that can help them plan their outdoor activities, the current Air Quality Index (AQI) category at the monitor or sensor location, data showing whether air quality is improving or worsening, and information about actions to consider taking based on the AQI.

EPA and USFS developed the map pilot to provide information on fire locations, smoke plumes and air quality in one place. It had more than 7.4 million views in its first three months. The map imports data from almost 10,000 sensors from an air quality sensor network that crowdsources data on particle pollution, providing real-time measurement of air quality on a public map. This was a logical addition to two other projects already underway.

The extra data points the sensors provided proved useful in characterising air quality during the 2020 fire season, and we had positive reception from state, local and tribal air agency partners, and from the public. The map is intended for individuals to use in making decisions about outdoor activities based on air quality, but the unique fire, smoke and concentration data can help increase awareness of the significant impacts of wildfires across all levels of government — federal, state, local and tribal — and provide a valuable tool to assist agencies as they work to protect public health from wildfire smoke during these emergencies….(More)”.
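As a rough illustration of what the map's AQI categories represent, here is a minimal Python sketch that converts a PM2.5 sensor reading into an AQI value and category using EPA's piecewise-linear formula. The breakpoint table below is the pre-2024 PM2.5 table and the function names are assumptions for illustration only; AirNow applies its own sensor corrections and current, authoritative tables.

```python
# Illustrative sketch: converting a PM2.5 concentration (µg/m³) into an AQI
# value and category with EPA's piecewise-linear formula. Breakpoints shown
# are the 2012 PM2.5 table, included here only as an example.

PM25_BREAKPOINTS = [
    # (C_low, C_high, AQI_low, AQI_high, category)
    (0.0,    12.0,    0,  50, "Good"),
    (12.1,   35.4,   51, 100, "Moderate"),
    (35.5,   55.4,  101, 150, "Unhealthy for Sensitive Groups"),
    (55.5,  150.4,  151, 200, "Unhealthy"),
    (150.5, 250.4,  201, 300, "Very Unhealthy"),
    (250.5, 500.4,  301, 500, "Hazardous"),
]

def pm25_to_aqi(concentration: float) -> tuple[int, str]:
    """Linear interpolation within the breakpoint row containing the reading."""
    c = round(concentration, 1)  # PM2.5 readings are reported to 0.1 µg/m³
    for c_lo, c_hi, i_lo, i_hi, category in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo
            return round(aqi), category
    raise ValueError(f"PM2.5 reading out of range: {concentration}")

print(pm25_to_aqi(9.6))    # (40, 'Good')
print(pm25_to_aqi(35.4))   # (100, 'Moderate')
```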

The Time Tax


Article by Annie Lowrey: “…In my decade-plus of social-policy reporting, I have mostly understood these stories as facts of life. Government programs exist. People have to navigate those programs. That is how it goes. But at some point, I started thinking about these kinds of administrative burdens as the “time tax”—a levy of paperwork, aggravation, and mental effort imposed on citizens in exchange for benefits that putatively exist to help them. This time tax is a public-policy cancer, mediating every American’s relationship with the government and wasting countless precious hours of people’s time.

The issue is not that modern life comes with paperwork hassles. The issue is that American benefit programs are, as a whole, difficult and sometimes impossible for everyday citizens to use. Our public policy is crafted from red tape, entangling millions of people who are struggling to find a job, failing to feed their kids, sliding into poverty, or managing a disabling health condition.

… the government needs to simplify. For safety-net programs, this means eliminating asset tests, work requirements, interviews, and other hassles. It means federalizing programs like unemployment insurance and Medicaid. It means cross-coordinating, so that applicants are automatically approved for everything for which they qualify.

Finally, it needs to take responsibility for the time tax. Congress needs to pump money into the civil service and into user-friendly, citizen-centered programmatic design. And the federal government needs to reward states and the executive agencies for increasing uptake and participation rates, while punishing them for long wait times and other bureaucratic snafus.

Such changes would eliminate poverty and encourage trust in government. They would make American lives easier and simpler. Yes, Washington should give Americans more money and more security. But most of all, it should give them back their time….(More)”.


A comprehensive study of technological change


Article by Scott Murray: “The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

“The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not accessed by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics….(More)”.
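To make concrete what "empirically estimating the rate of improvement" means in practice, here is a minimal sketch: fitting an exponential trend to performance measurements over time via log-linear least squares. The data points are invented for illustration, and this is only the kind of baseline measurement the MIT work extends (it predicts such rates from patent data where measurements like these are unavailable), not the paper's own method.

```python
# A toy sketch of empirically estimating a technology's yearly improvement
# rate: fit a log-linear trend to performance observations over time.
# The observations below are made up for illustration.

import math

# (year, performance in some fixed unit, e.g. operations per dollar)
observations = [
    (2000, 1.0), (2004, 2.3), (2008, 5.1), (2012, 11.8), (2016, 26.0),
]

years = [y for y, _ in observations]
log_perf = [math.log(p) for _, p in observations]

n = len(observations)
mean_y = sum(years) / n
mean_lp = sum(log_perf) / n

# Slope of ln(performance) vs. year; exp(slope) - 1 is the annual improvement rate.
slope = (
    sum((y - mean_y) * (lp - mean_lp) for y, lp in zip(years, log_perf))
    / sum((y - mean_y) ** 2 for y in years)
)
annual_rate = math.exp(slope) - 1
print(f"Estimated improvement rate: {annual_rate:.1%} per year")
# Prints roughly 22-23% per year for this toy series.
```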

Census Data Change to Protect Privacy Rattles Researchers, Minority Groups


Paul Overberg at the Wall Street Journal: “A plan to protect the confidentiality of Americans’ responses to the 2020 census by injecting small, calculated distortions into the results is raising concerns that it will erode their usability for research and distribution of state and federal funds.

The Census Bureau is due to release the first major results of the decennial count in mid-August. They will offer the first detailed look at the population and racial makeup of thousands of counties and cities, as well as tribal areas, neighborhoods, school districts and smaller areas that will be used to redraw congressional, legislative and local districts to balance their populations.

The bureau will adjust most of those statistics to prevent someone from recombining them in a way that would disclose information about an individual respondent. Testing by the bureau shows that improvements in data science, computing power and commercial databases make that feasible.

Last week the bureau’s acting director said the plan was a necessary update of older methods to protect confidentiality. Ron Jarmin said the agency searched for alternatives before settling on differential privacy, a systematic approach to add statistical noise to data, something it has done in some fashion for years.

“I’m pretty confident that it’s going to meet users’ expectations,” Mr. Jarmin said at a panel during an online conference of government data users. “We have to deal with the technology as it is and as it evolves.”…(More)”.
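For readers unfamiliar with the technique, here is a deliberately simplified sketch of the core idea behind differential privacy: adding calibrated random noise to a count so that any single respondent's presence or absence changes the published figure only within a privacy budget (epsilon). The Census Bureau's actual TopDown Algorithm is far more elaborate (discrete noise, post-processing to keep tables consistent), so the function below is only an illustration, not the bureau's method.

```python
# Simplified illustration of the Laplace mechanism for a counting query.
# A count changes by at most 1 when one person is added or removed
# (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.

import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_count(true_count: int, epsilon: float) -> float:
    """Return the count plus Laplace(0, 1/epsilon) noise.
    Smaller epsilon = stronger privacy guarantee = noisier output."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

block_population = 37  # a small-area count, e.g. one census block
print(noisy_count(block_population, epsilon=0.5))  # noisier: stronger privacy
print(noisy_count(block_population, epsilon=4.0))  # closer to 37: weaker privacy
```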

An Obsolete Paradigm


Blogpost by Paul Wormelli: “…Our national system of describing the extent of crime in the U.S. is broken beyond repair and deserves to be replaced by a totally new paradigm (system). 

Since 1930, we have relied on the metrics generated by the Uniform Crime Reporting (UCR) Program to describe crime in the U.S., but it simply does not do so, even with its evolution into the National Incident-Based Reporting System (NIBRS). Criminologists have long recognized the limited scope of the UCR summary crime data, leading to the creation of the National Crime Victimization Survey (NCVS) and other supplementary crime data measurement vehicles. However, despite these measures, the United States still has no comprehensive national data on the amount of crime that has occurred. Even after decades of collecting data, the 1967 Presidential Crime Commission report on the Challenge of Crime in a Free Society lamented the absence of sound and complete data on crime in the U.S., and called for the creation of a National Crime Survey (NCS) that eventually led to the creation of the NCVS. Since then, we have slowly attempted to make improvements that will lead to more robust data. Only in 2021 did the FBI end UCR summary-based crime data collection and move to NIBRS crime data collection on a national scale.

Admittedly, the shift to NIBRS will unleash a sea change in how we analyze crime data and use it for decision making. However, it still lacks the completeness of national crime reporting. In the landmark study of the National Academy of Sciences Committee on Statistics (funded by the FBI and the Bureau of Justice Statistics to make recommendations on modernizing crime statistics), the panel members grappled with this reality and called out the absence of national statistics on crime that would fully inform policymaking on this critical subject….(More)”

Household Financial Transaction Data


Paper by Scott R. Baker & Lorenz Kueng: “The growth of the availability and use of detailed household financial transaction microdata has dramatically expanded the ability of researchers to understand both household decision-making as well as aggregate fluctuations across a wide range of fields. This class of transaction data is derived from a myriad of sources including financial institutions, FinTech apps, and payment intermediaries. We review how these detailed data have been utilized in finance and economics research and the benefits they enable beyond more traditional measures of income, spending, and wealth. We discuss the future potential for this flexible class of data in firm-focused research, real-time policy analysis, and macro statistics….(More)”.

The Inevitable Weaponization of App Data Is Here


Joseph Cox at VICE: “…After years of warning from researchers, journalists, and even governments, someone used highly sensitive location data from a smartphone app to track and publicly harass a specific person. In this case, Catholic Substack publication The Pillar said it used location data ultimately tied to Grindr to trace the movements of a priest, and then outed him publicly as potentially gay without his consent. The Washington Post reported on Tuesday that the outing led to his resignation….

The data itself didn’t contain each mobile phone user’s real name, but The Pillar and its partner were able to pinpoint which device belonged to Burrill by observing one that appeared at the USCCB staff residence and headquarters, locations of meetings that he was in, as well as his family lake house and an apartment that has him listed as a resident. In other words, they managed to, as experts have long said is easy to do, unmask this specific person and their movements across time from a supposedly anonymous dataset.

A Grindr spokesperson told Motherboard in an emailed statement that “Grindr’s response is aligned with the editorial story published by the Washington Post which describes the original blog post from The Pillar as homophobic and full of unsubstantiated innuendo. The alleged activities listed in that unattributed blog post are infeasible from a technical standpoint and incredibly unlikely to occur. There is absolutely no evidence supporting the allegations of improper data collection or usage related to the Grindr app as purported.”…

“The research from The Pillar aligns to the reality that Grindr has historically treated user data with almost no care or concern, and dozens of potential ad tech vendors could have ingested the data that led to the doxxing,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “No one should be doxxed and outed for adult consenting relationships, but Grindr never treated their own users with the respect they deserve, and the Grindr app has shared user data to dozens of ad tech and analytics vendors for years.”…(More)”.

Foreign Policy by Canadians: a unique national experiment


Blogpost by James Fishkin: “…Foreign Policy by Canadians was a national field experiment (with a control group that was not invited to deliberate, but which answered the same questions before and after). The participants and the control group matched up almost perfectly before deliberation, but after deliberation, the participants had reached their considered judgments (while the control group had hardly changed at all). YouGov recruited and surveyed an excellent sample of deliberators, nationally representative in demographics and attitudes (as judged by comparison to the control groups). The project was an attempt to use social science to give an informed and representative input to policy. It was particularly challenging in that foreign policy is an area where most of the public is less engaged and informed even than it is on domestic issues (outside of times of war or severe international crises). Hence, we would argue that Deliberative Polling is particularly appropriate as a form of public input on these topics.

This project was also distinctive in some other ways. First, all the small group discussions by the 444 nationally representative deliberators were conducted via our new video-based automated moderator platform. Developed here at Stanford with Professor Ashish Goel and the “Crowdsourced Democracy Team” in Management Science and Engineering, it enables many small groups of ten or so to self-moderate their discussions. It controls access to the queue for the microphone (limiting each contribution to 45 seconds), it orchestrates the discussion to move from one policy proposal to the next on the list, it periodically asks the participants if they have covered the arguments both for and against the proposal, it intervenes if people are being uncivil (a rare occurrence in these dialogues) and it guides the group into formulating its questions for the plenary session experts. This was only the second national application of the online platform (the first was in Chile this past year) and it was the first as a controlled experiment.

A second distinctive aspect of Foreign Policy by Canadians is that the agenda was formulated in both a top-down and a bottom-up manner. While a distinguished advisory group offered input on what topics were worth exploring and on the balance and accuracy of the materials, those materials were also vetted by chapters of the Canadian International Council in different parts of the country. Those meetings deliberated about how the draft materials could be improved. What was left out? Were the most important arguments on either side presented? The meetings of CIC chapters agreed on recommendations for revision and those recommendations were reflected in the final documents and proposals for discussion. I think this is “deliberative crowdsourcing” because the groups had to agree on their most important recommendations based on shared discussion. These meetings were also conducted with our automated deliberation platform….(More)”.
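To make the excerpt's description of the automated moderator more concrete, here is a toy sketch of the kind of speaking-queue logic it describes: a queue for the microphone with a 45-second cap per turn, and a check that both the pro and con arguments have been covered before the group advances to the next proposal. The class, method names, and sample proposal are all assumptions for illustration; this is not the Stanford platform's code.

```python
# Toy sketch (assumed names and structure) of automated-moderator logic:
# a speaking queue with a 45-second cap per turn, advancing to the next
# proposal only after both sides of the argument have been covered.

from collections import deque
from dataclasses import dataclass, field

TURN_LIMIT_SECONDS = 45

@dataclass
class ProposalDiscussion:
    proposal: str
    queue: deque = field(default_factory=deque)  # participants waiting to speak
    covered_pro: bool = False
    covered_con: bool = False

    def request_mic(self, participant: str) -> None:
        """Add a participant to the microphone queue (once)."""
        if participant not in self.queue:
            self.queue.append(participant)

    def next_turn(self) -> str | None:
        """Grant the mic to the next participant in line, capped at 45 seconds."""
        if not self.queue:
            return None
        speaker = self.queue.popleft()
        return f"{speaker} has the floor for up to {TURN_LIMIT_SECONDS}s on: {self.proposal}"

    def ready_to_advance(self) -> bool:
        """Move on only once arguments for and against have been discussed."""
        return self.covered_pro and self.covered_con

# Usage sketch
d = ProposalDiscussion("Proposal 1 of 10")
d.request_mic("Participant A")
d.request_mic("Participant B")
print(d.next_turn())
d.covered_pro = True
d.covered_con = True
print("Advance to next proposal:", d.ready_to_advance())
```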