Using FOIA logs to develop news stories


Yilun Cheng at MuckRock: “In fiscal year 2020, federal agencies received a total of 790,772 Freedom of Information Act (FOIA) requests. There are also tens of thousands of state and local agencies taking in and processing public record requests on a daily basis. Since most agencies keep a log of requests received, FOIA-minded reporters can find interesting story ideas by asking for and digging through the history of what other people are looking to obtain.

Some FOIA logs are posted on the websites of agencies that proactively release these records. Those that are not can be obtained through a FOIA request. There are a number of online resources that collect and store these documents, including MuckRock, the Black Vault, Government Attic and FOIA Land.

Sorting through a FOIA log can be challenging since formats differ from agency to agency. A well-maintained log might include comprehensive information on the names of the requesters, the records being asked for, the dates of the requests’ receipt and the agency’s responses, as shown, for example, in a log released by the U.S. Department of Health and Human Services (https://www.documentcloud.org/documents/20508483/annotations/2024702).

But other departments –– the Cook County Department of Public Health, for instance –– might only send over a three-column spreadsheet with no descriptions of the nature of the requests (https://www.documentcloud.org/documents/20491259/annotations/2024703).
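Because no two agencies share a schema, a practical first step when working with several logs is to map each agency’s column names onto a common set of fields. Here is a minimal Python sketch of that idea; the column names, sample rows and helper names below are hypothetical, for illustration only:

```python
import csv
import io

# Hypothetical examples of two FOIA-log layouts. Real logs vary
# widely; these schemas are illustrative, not taken from any agency.
DETAILED_LOG = """Request ID,Requester,Records Requested,Date Received,Disposition
21-001,J. Doe,Inspection reports 2019-2020,2021-01-04,Granted in part
21-002,Acme News,Email correspondence re: contract,2021-01-06,Pending
"""

SPARSE_LOG = """ID,Received,Status
0042,01/11/2021,Closed
0043,01/12/2021,Open
"""

# Map each agency's column names onto a shared schema so logs can be
# compared side by side; fields an agency omits are left as None.
CANONICAL = {
    "Request ID": "id", "ID": "id",
    "Requester": "requester",
    "Records Requested": "subject",
    "Date Received": "received", "Received": "received",
    "Disposition": "status", "Status": "status",
}

def normalize(raw_csv):
    """Read one agency's CSV log and return rows in the shared schema."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        norm = {"id": None, "requester": None, "subject": None,
                "received": None, "status": None}
        for col, value in row.items():
            key = CANONICAL.get(col.strip())
            if key:
                norm[key] = value
        rows.append(norm)
    return rows
```

Once the rows share one shape, they can be sorted by date, filtered for recurring subjects, or compared across agencies, even when a sparse log leaves most fields empty.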

As a result, learning how to negotiate with agencies and how to interpret the content of their FOIA logs is crucial for journalists trying to understand the public record landscape. While some reporters only use FOIA logs to keep tabs on their competitors’ reporting interests, the potential of these documents goes far beyond this. Below are some tips for getting story inspiration from FOIA logs….(More)”.

What Data Can’t Do


Hannah Fry in The New Yorker: “Tony Blair was usually relaxed and charismatic in front of a crowd. But an encounter with a woman in the audience of a London television studio in April, 2005, left him visibly flustered. Blair, eight years into his tenure as Britain’s Prime Minister, had been on a mission to improve the National Health Service. The N.H.S. is a much loved, much mocked, and much neglected British institution, with all kinds of quirks and inefficiencies. At the time, it was notoriously difficult to get a doctor’s appointment within a reasonable period; ailing people were often told they’d have to wait weeks for the next available opening. Blair’s government, bustling with bright technocrats, decided to address this issue by setting a target: doctors would be given a financial incentive to see patients within forty-eight hours.

It seemed like a sensible plan. But audience members knew of a problem that Blair and his government did not. Live on national television, Diana Church calmly explained to the Prime Minister that her son’s doctor had asked to see him in a week’s time, and yet the clinic had refused to take any appointments more than forty-eight hours in advance. Otherwise, physicians would lose out on bonuses. If Church wanted her son to see the doctor in a week, she would have to wait until the day before, then call at 8 a.m. and stick it out on hold. Before the incentives had been established, doctors couldn’t give appointments soon enough; afterward, they wouldn’t give appointments late enough.

“Is this news to you?” the presenter asked.

“That is news to me,” Blair replied.

“Anybody else had this experience?” the presenter asked, turning to the audience.

Chaos descended. People started shouting, Blair started stammering, and a nation watched its leader come undone over a classic case of counting gone wrong.

Blair and his advisers are far from the first people to fall afoul of their own well-intentioned targets. Whenever you try to force the real world to do something that can be counted, unintended consequences abound. That’s the subject of two new books about data and statistics: “Counting: How We Use Numbers to Decide What Matters” (Liveright), by Deborah Stone, which warns of the risks of relying too heavily on numbers, and “The Data Detective” (Riverhead), by Tim Harford, which shows ways of avoiding the pitfalls of a world driven by data.

Both books come at a time when the phenomenal power of data has never been more evident. The covid-19 pandemic demonstrated just how vulnerable the world can be when you don’t have good statistics, and the Presidential election filled our newspapers with polls and projections, all meant to slake our thirst for insight. In a year of uncertainty, numbers have even come to serve as a source of comfort. Seduced by their seeming precision and objectivity, we can feel betrayed when the numbers fail to capture the unruliness of reality.

The particular mistake that Tony Blair and his policy mavens made is common enough to warrant its own adage: once a useful number becomes a measure of success, it ceases to be a useful number. This is known as Goodhart’s law, and it reminds us that the human world can move once you start to measure it….(More)”.

From Tech Critique to Ways of Living


Alan Jacobs at the New Atlantis: “Neil Postman was right. So what?… In the 1950s and 1960s, a series of thinkers, beginning with Jacques Ellul and Marshall McLuhan, began to describe the anatomy of our technological society. Then, starting in the 1970s, a generation emerged who articulated a detailed critique of that society. The critique produced by these figures I refer to in the singular because it shares core features, if not a common vocabulary. What Ivan Illich, Ursula Franklin, Albert Borgmann, and a few others have said about technology is powerful, incisive, and remarkably coherent. I am going to call the argument they share the Standard Critique of Technology, or SCT. The one problem with the SCT is that it has had no success in reversing, or even slowing, the momentum of our society’s move toward what one of their number, Neil Postman, called technopoly.

The basic argument of the SCT goes like this. We live in a technopoly, a society in which powerful technologies come to dominate the people they are supposed to serve, and reshape us in their image. These technologies, therefore, might be called prescriptive (to use Franklin’s term) or manipulatory (to use Illich’s). For example, social networks promise to forge connections — but they also encourage mob rule. Facial-recognition software helps to identify suspects — and to keep tabs on whole populations. Collectively, these technologies constitute the device paradigm (Borgmann), which in turn produces a culture of compliance (Franklin).

The proper response to this situation is not to shun technology itself, for human beings are intrinsically and necessarily users of tools. Rather, it is to find and use technologies that, instead of manipulating us, serve sound human ends and the focal practices (Borgmann) that embody those ends. A table becomes a center for family life; a musical instrument skillfully played enlivens those around it. Those healthier technologies might be referred to as holistic (Franklin) or convivial (Illich), because they fit within the human lifeworld and enhance our relations with one another. Our task, then, is to discern these tendencies or affordances of our technologies and, on both social and personal levels, choose the holistic, convivial ones.

The Standard Critique of Technology as thus described is cogent and correct. I have referred to it many times and applied it to many different situations. For instance, I have used the logic of the SCT to make a case for rejecting the “walled gardens” of the massive social media companies, and for replacing them with a cultivation of the “digital commons” of the open web.

But the number of people who are even open to following this logic is vanishingly small. For all its cogency, the SCT is utterly powerless to slow our technosocial momentum, much less to alter its direction. Since Postman and the rest made that critique, the social order has rushed ever faster toward a complete and uncritical embrace of the prescriptive, manipulatory technologies deceitfully presented to us as Liberation and Empowerment. So what next?…(More)”.

A Victory for Scientific Pragmatism


Essay by Arturo Casadevall, Michael J. Joyner and Nigel Paneth: “…The convalescent plasma controversy highlights the need to better educate physicians on the knowledge problem in medicine: How do we know what we know, and how do we acquire new knowledge? The usual practice guidelines doctors rely on for the treatment of disease were not available for the treatment of Covid-19 early in the pandemic, since these are usually issued by professional societies only after definitive information is available from RCTs, a luxury we did not have. The convalescent plasma experience supports Devorah Goldman’s plea to consider all available information when making therapeutic decisions.

Fortunately, the availability of rapid communication through pre-print studies, social media, and online conferences has allowed physicians to learn quickly. The experience suggests the value of providing more instruction in medical schools, postgraduate education, and continuing medical education on how best to evaluate evidence — especially preliminary and seemingly contradictory evidence. Just as physicians learn to use clinical judgment in treating individual patients, they must learn how to weigh evidence in treating populations of patients. We also need greater nimbleness and more flexibility from regulators and practice-guideline groups in emergency situations such as pandemics. They should issue interim recommendations that synthesize the best available evidence, as the American Association of Blood Banks has done for plasma, recognizing that these recommendations may change as new evidence accumulates. Similarly, we all need to make greater efforts to educate the public to understand that all knowledge in medicine and science is provisional, subject to change as new and better studies emerge. Updating and revising recommendations as knowledge advances is not a weakness but a foundational strength of good medicine….(More)”.

Hospitals Hide Pricing Data From Search Results


Tom McGinty, Anna Wilde Mathews and Melanie Evans at the Wall Street Journal: “Hospitals that have published their previously confidential prices to comply with a new federal rule have also blocked that information from web searches with special coding embedded on their websites, according to a Wall Street Journal examination.

The information must be disclosed under a federal rule aimed at making the $1 trillion sector more consumer friendly. But hundreds of hospitals embedded code in their websites that prevented Alphabet Inc.’s Google and other search engines from displaying pages with the price lists, according to the Journal examination of more than 3,100 sites.

The code keeps pages from appearing in searches, such as those related to a hospital’s name and prices, computer-science experts said. The prices are often accessible other ways, such as through links that can require clicking through multiple layers of pages.
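The Journal does not detail its methodology, but one common mechanism for keeping a page out of search results is a robots meta tag carrying a `noindex` directive. A minimal Python sketch of checking a page’s HTML for that tag, using the standard library’s `html.parser` (the class and function names are illustrative):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages carrying a <meta name="robots" content="noindex"> tag,
    one common way a page tells search engines not to list it."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        name = (a.get("name") or "").lower()
        content = (a.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True

def blocks_search_engines(html):
    """Return True if the page's HTML asks search engines not to index it."""
    detector = NoindexDetector()
    detector.feed(html)
    return detector.noindex
```

A page can also be blocked in other ways — a disallow rule in robots.txt or an `X-Robots-Tag` HTTP header, for example — so a thorough examination would need separate checks for each.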

“It’s technically there, but good luck finding it,” said Chirag Shah, an associate professor at the University of Washington who studies human interactions with computers. “It’s one thing not to optimize your site for searchability, it’s another thing to tag it so it can’t be searched. It’s a clear indication of intentionality.”…(More)”.

Negligence, Not Politics, Drives Most Misinformation Sharing


John Timmer at Wired: “…a small international team of researchers… decided to take a look at how a group of US residents decided on which news to share. Their results suggest that some of the standard factors that people point to when explaining the tsunami of misinformation—inability to evaluate information and partisan biases—aren’t having as much influence as most of us think. Instead, a lot of the blame gets directed at people just not paying careful attention.

The researchers ran a number of fairly similar experiments to get at the details of misinformation sharing. This involved panels of US-based participants recruited either through Mechanical Turk or via a survey population that provided a more representative sample of the US. Each panel had several hundred to over 1,000 individuals, and the results were consistent across different experiments, so there was a degree of reproducibility to the data.

To do the experiments, the researchers gathered a set of headlines and lead sentences from news stories that had been shared on social media. The set was evenly mixed between headlines that were clearly true and clearly false, and each of these categories was split again between those headlines that favored Democrats and those that favored Republicans.

One thing that was clear is that people are generally capable of judging the accuracy of the headlines. There was a 56 percentage point gap between how often an accurate headline was rated as true and how often a false headline was. People aren’t perfect—they still got things wrong fairly often—but they’re clearly quite a bit better at this than they’re given credit for.

The second thing is that ideology doesn’t really seem to be a major factor in driving judgments on whether a headline was accurate. People were more likely to rate headlines that agreed with their politics as true, but the difference here was only 10 percentage points. That’s significant (both societally and statistically), but it’s certainly not a large enough gap to explain the flood of misinformation.

But when the same people were asked about whether they’d share these same stories, politics played a big role, and the truth receded. The difference in intention to share between true and false headlines was only 6 percentage points. Meanwhile the gap between whether a headline agreed with a person’s politics or not saw a 20 percentage point gap. Putting it in concrete terms, the authors look at the false headline “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests.” Only 16 percent of conservatives in the survey population rated it as true. But over half of them were amenable to sharing it on social media….(More)”.

Mastercard, SoftBank and others call on G7 to create tech group


Siddharth Venkataramakrishnan at the Financial Times: “A group of leading companies including Mastercard, SoftBank and IBM have called on the G7 to create a new body to help co-ordinate how member states tackle issues ranging from artificial intelligence to cyber security.

The Data and Technology Forum, which would be modelled on the Financial Stability Board that was created after the 2008 financial crisis, would provide recommendations on how tech governance can be co-ordinated internationally, rather than proposing firm regulations.

“We believe a similar forum [to the FSB] is urgently needed to prevent fragmentation and strengthen international co-operation and consensus on digital governance issues,” said Michael Froman, vice-chair and president of strategic growth for Mastercard. “There is a window of opportunity — right now — to strengthen collaboration.”

The proposal comes as countries’ approaches to tech policy are becoming increasingly divergent, creating problems of international co-operation, while concerns grow globally over issues such as privacy and data security.

The 25 companies involved come from a broad range of sectors, including payment providers Visa and Nexi, carmakers Toyota and Mercedes and global healthcare company GlaxoSmithKline.

Like the Basel-based FSB, which was set up to identify and address systemic risks in the financial system, the new body would provide a forum for tackling major challenges in the tech sector such as cross-border data transfers and the regulation of artificial intelligence.

Froman said the forum was “essential” to promote trust in new technologies while avoiding diverging industry standards. The body would work with existing organisations such as the World Trade Organization, and professional standard-setting bodies.

Struggles over which government gets to set the rules of the internet of the future have intensified in recent years, with the US, EU and China all seeking to gain first-mover advantage.

The new body’s first three areas of focus would be co-operation on cyber security, the alignment of AI frameworks and the global interoperability of data….(More)”.

How ‘Good’ Social Movements Can Triumph over ‘Bad’ Ones


Essay by Gilda Zwerman and Michael Schwartz: “…How, then, can we judge which movement was the “good” one and which the “bad”?

The answer can be found in the sociological study of social movements. Over decades of focused research, the field has demonstrated that evaluating the moral compass of individual participants does little to advance our understanding of the morality or the actions of a large movement. Only by assessing the goals, tactics and outcomes of movements as collective phenomena can we begin to discern the distinction between “good” and “bad” movements.

Modern social movement theory developed from foundational studies by several generations of scholars, notably W.E.B. Du Bois, Ida B. Wells, C.L.R. James, E.P. Thompson, Eric Hobsbawm, Charles Tilly and Howard Zinn. Their works analyzing “large” historical processes provided later social scientists with three working propositions.

First, the morality of a movement is measured by the type of change it seeks. “Good” movements are emancipatory: they seek to pressure institutional authorities into reducing systemic inequality, extending democratic rights to previously excluded groups, and alleviating material, social, and political injustices. “Bad” movements tend to be reactionary. They arise in response to good movements and they seek to preserve or intensify the exclusionary structures, laws and policies that the emancipatory movements are challenging.

Second, large-scale institutional changes that broaden freedom or advance the cause of social justice are rarely initiated by institutional authorities or political elites. Rather, most social progress is the result of pressure exerted from the bottom up, by ordinary people who press for reform by engaging in collective and creative disorders outside the bounds of mainstream institutions.

And third, good intentions—aspiring to achieve emancipatory goals—by no means guarantee that a movement will succeed.

The highly popular and emancipatory protests of the 1960s, as well as the influence of groundbreaking works in social history mentioned above, inspired a renaissance in the study of social movements in subsequent decades. Focusing primarily on “good” movements, a new generation of social scientists sought to identify the environmental circumstances, organizational features and strategic choices that increased the likelihood that “good intentions” would translate into tangible change. This research has generated a rich trove of findings:…(More)”.

Coming wave of video games could build empathy on racism, environment and aftermath of war


Mike Snider at USA Today: “Some of the newest video games in development aren’t really games at all, but experiences that seek to build empathy for others.

Among the five such projects getting funding grants and support from 3D software engine maker Unity is “Our America,” in which the player takes the role of a Black man who is driving with his son when their car is pulled over by a police officer.

The father worries about getting his car registration from the glove compartment because the officer “might think it’s a gun or something,” the character says in the trailer.

On the project’s website, the developers describe “Our America” as “an autobiographical VR Experience” in which “the audience must make quick decisions, answer questions – but any wrong move is the difference between life and death.”…

The other Unity for Humanity winners include:

  • Ahi Kā Rangers: An ecological mobile game with development led by Māori creators. 
  • Dot’s Home: A game that explores historical housing injustices faced by Black and brown home buyers. 
  • Future Aleppo: A VR experience for children to rebuild homes and cities destroyed by war. 
  • Samudra: A children’s environmental puzzle game that takes the player across a polluted sea to learn about pollution and plastic waste.

While “Our America” may serve best as a VR experience, other projects such as “Dot’s Home” may be available on mobile devices to expand their accessibility….(More)”.

Who Is Making Sure the A.I. Machines Aren’t Racist?


Cade Metz at the New York Times: “Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google….(More)”.