In this small Va. town, citizens review police like Uber drivers


Article by Emily Davies: “Chris Ford stepped on the gas in his police cruiser and rolled down Gold Cup Drive to catch the SUV pushing 30 mph in a 15 mph zone. Eleven hours and 37 minutes into his shift, the corporal was ready for his first traffic stop of the day.

“Look at him being sneaky,” Ford said, his blue lights flashing on a quiet road in this small town where a busy day could mean animals escaped from a local slaughterhouse.

Ford parked, walked toward the SUV and greeted the man who had ignored the speed limit at exactly the wrong time.

“I was doing 15,” said the driver, a Black man in a mostly White neighborhood of a mostly White town.

The officer took his license and registration back to the cruiser.

“Every time I pull over someone of color, they’re standoffish with me. Like, ‘Here’s a White police officer, here we go again,’” Ford, 56, said. “So I just try to be nice.”

Ford knew the stop would be scrutinized — and not just by the reporter who was allowed to ride along on his shift.

After every significant encounter with residents, officers in Warrenton are required to hand out a QR code, which is on the back of their business card, asking for feedback on the interaction. Through a series of questions, citizens can use a star-based system to rate officers on their communication, listening skills and fairness. The responses are anonymous and can be completed any time after the interaction to encourage people to give honest assessments. The program, called Guardian Score, is supposed to give power to those stopped by police in a relationship that has historically felt one-sided — and to give police departments a tool to evaluate their force on more than arrests and tickets.

“If we started to measure how officers are treating community members, we realized we could actually infuse this into the overall evaluation process of individual officers,” said Burke Brownfeld, a founder of Guardian Score and a former police officer in Alexandria. “The definition of doing a good job could change. It would also include: How are your listening skills? How fairly are you treating people based on their perception?”…(More)”.
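
To make the mechanism concrete, here is a minimal Python sketch of how anonymous star ratings on communication, listening and fairness might be stored and averaged into a per-officer summary. The field names, the 1–5 scale and the summary logic are illustrative assumptions, not Guardian Score’s actual schema.

```python
# Hypothetical sketch of anonymous star-based feedback rolled up per officer.
# Field names and the 1-5 scale are assumptions, not Guardian Score's schema.
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Feedback:
    officer_id: str      # which officer the rating concerns
    communication: int   # 1-5 stars
    listening: int       # 1-5 stars
    fairness: int        # 1-5 stars
    # no respondent identifier is stored, keeping responses anonymous

def officer_summary(officer_id: str, responses: List[Feedback]) -> dict:
    """Average each rating dimension for one officer across anonymous responses."""
    own = [r for r in responses if r.officer_id == officer_id]
    if not own:
        return {"officer_id": officer_id, "responses": 0}
    return {
        "officer_id": officer_id,
        "responses": len(own),
        "communication": mean(r.communication for r in own),
        "listening": mean(r.listening for r in own),
        "fairness": mean(r.fairness for r in own),
    }
```

A department could fold averages like these into an officer’s evaluation alongside traditional measures such as arrests and tickets, which is the shift Brownfeld describes.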

How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.

How science could aid the US quest for environmental justice


Jeff Tollefson at Nature: “…The network of US monitoring stations that detect air pollution catches only broad trends across cities and regions, and isn’t equipped for assessing air quality at the level of streets and neighbourhoods. So environmental scientists are exploring ways to fill the gaps.

In one project funded by NASA, researchers are developing methods to assess street-level pollution using measurements of aerosols and other contaminants from space. When the team trained its tools on Washington DC, the scientists found [1] that sections in the city’s southeast, which have a larger share of Black residents, are exposed to much higher levels of fine-soot pollution than wealthier — and whiter — areas in the northwest of the city, primarily because of the presence of major roads and bus depots in the southeast.

[Chart] Cumulative burden: Air-pollution levels tend to be higher in poorer and predominantly Black neighbourhoods of Washington DC. (Source: Ref. 1)

The detailed pollution data painted a more accurate picture of the burden on a community that also lacks access to high-quality medical facilities and has high rates of cardiovascular disorders and other diseases. The results help to explain a more than 15-year difference in life expectancy between predominantly white neighbourhoods and some predominantly Black ones.

The analysis underscores the need to consider pollution and socio-economic data in parallel, says Susan Anenberg, director of the Climate and Health Institute at the George Washington University in Washington DC and co-leader of the project. “We can actually get neighbourhood-scale observations from space, which is quite incredible,” she says, “but if you don’t have the demographic, economic and health data as well, you’re missing a very important piece of the puzzle.”

Other projects, including one from technology company Aclima, in San Francisco, California, are focusing on ubiquitous, low-cost sensors that measure air pollution at the street level. Over the past few years, Aclima has deployed a fleet of vehicles to collect street-level data on air pollutants such as soot and greenhouse gases across 101 municipalities in the San Francisco Bay area. Their data have shown that air-pollution levels can vary as much as 800% from one neighbourhood block to the next.

Working directly with disadvantaged communities and environmental regulators in California, as well as with other states and localities, the company provides pollution monitoring on a subscription basis. It also offers the use of its screening tool, which integrates a suite of socio-economic data and can be used to assess cumulative impacts…(More)”.

I tried to read all my app privacy policies. It was 1 million words.


Article by Geoffrey A. Fowler: “…So here’s an idea: Let’s abolish the notion that we’re supposed to read privacy policies.

I’m not suggesting companies shouldn’t have to explain what they’re up to. Maybe we call them “data disclosures” for the regulators, lawyers, investigative journalists and curious consumers to pore over.

But to protect our privacy, the best place to start is for companies to simply collect less data. “Maybe don’t do things that need a million words of explanation? Do it differently,” said Slaughter. “You can’t abuse, misuse, leverage data that you haven’t collected in the first place.”

Apps and services should only collect the information they really need to provide that service — unless we opt in to let them collect more, and it’s truly an option.

I’m not holding my breath that companies will do that voluntarily, but a federal privacy law would help. While we wait for one, Slaughter said the FTC (where Democratic commissioners recently gained a majority) is thinking about how to use its existing authority “to pursue practices — including data collection, use and misuse — that are unfair to users.”

Second, we need to replace the theater of pressing “agree” with real choices about our privacy.

Today, when we do have choices to make, companies often present them in ways that pressure us into making the worst decisions for ourselves.

Apps and websites should give us the relevant information and our choices in the moment when it matters. Twitter actually does this just-in-time notice better than many other apps and websites: By default, it doesn’t collect your exact location, and only prompts you to do so when you ask to tag your location in a tweet.

Even better, technology could help us manage our choices. Cranor suggests that data disclosures could be coded to be read by machines. Companies already do this for financial information, and the TLDR Act would require consistent tags on privacy information, too. Then your computer could act kind of like a butler, interacting with apps and websites on your behalf.
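
As a rough illustration of what a machine-readable data disclosure and a software “butler” might look like, the Python sketch below encodes a few disclosure fields as structured tags and checks them against a user’s stated preferences. The tag names and preference fields are invented for illustration; they are not the TLDR Act’s actual labels or any company’s real disclosure format.

```python
# Hypothetical machine-readable privacy disclosure checked against user preferences.
# All tag and field names below are invented for illustration.
disclosure = {
    "app": "ExampleWeatherApp",
    "collects": ["precise_location", "device_id"],
    "shares_with_third_parties": True,
    "retention_days": 365,
}

user_preferences = {
    "never_collect": {"precise_location"},
    "allow_third_party_sharing": False,
}

def butler_review(disclosure: dict, prefs: dict) -> list[str]:
    """Flag conflicts between a disclosure and the user's preferences."""
    problems = []
    blocked = prefs["never_collect"] & set(disclosure["collects"])
    if blocked:
        problems.append("collects data you disallow: " + ", ".join(sorted(blocked)))
    if disclosure["shares_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        problems.append("shares data with third parties")
    return problems

print(butler_review(disclosure, user_preferences))
# ['collects data you disallow: precise_location', 'shares data with third parties']
```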

Picture Siri as a butler who quizzes you briefly about your preferences and then does your bidding. The privacy settings on an iPhone already let you tell all the different apps on your phone not to collect your location. For the past year, they’ve also allowed you to ask apps not to track you.

Web browsers could serve as privacy butlers, too. Mozilla’s Firefox already lets you block certain kinds of privacy invasions. Now a new technology called the Global Privacy Control is emerging that would interact with websites and instruct them not to “sell” our data. It’s grounded in California’s privacy law, which is among the toughest in the nation, though it remains to be seen how the state will enforce GPC…(More)”.
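
On the receiving end, Global Privacy Control is conveyed as a simple HTTP request header, Sec-GPC: 1, that participating browsers and extensions attach to requests. The Python (Flask) sketch below shows one way a site might detect the signal and treat the visitor as having opted out of the sale of their data; the route and the response text are assumptions for illustration, not a compliance recipe.

```python
# Minimal sketch of detecting the Global Privacy Control signal server-side.
# GPC is conveyed by the "Sec-GPC: 1" request header; the opt-out handling
# shown here is illustrative, not legal or compliance guidance.
from flask import Flask, request

app = Flask(__name__)

def gpc_opt_out(req) -> bool:
    """True if the browser sent the Global Privacy Control signal."""
    return req.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if gpc_opt_out(request):
        # Treat the visitor as having opted out of the sale/sharing of their data.
        return "GPC detected: your data will not be sold or shared."
    return "No GPC signal detected."
```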

Seeking data sovereignty, a First Nation introduces its own licence


Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers. 

The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.

LKFN says the current NWT-wide licensing system will still stand but a separate system addressing specific concerns was urgently required.

In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory. 

LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form. 

“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….

Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.
The ARI application form asks researchers to:

  • share how they plan to release data, to ensure confidentiality;
  • describe their methodology; and
  • indicate which communities they expect to be affected by their work.

The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:

  • explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
  • disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
  • explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.

Cazon says the new approach will work in tandem with ARI’s system…(More)”.

The Future of Open Data: Law, Technology and Media


Book edited by Pamela Robinson, and Teresa Scassa: “The Future of Open Data flows from a multi-year Social Sciences and Humanities Research Council (SSHRC) Partnership Grant project that set out to explore open government geospatial data from an interdisciplinary perspective. Researchers on the grant adopted a critical social science perspective grounded in the imperative that the research should be relevant to government and civil society partners in the field.

This book builds on the knowledge developed during the course of the grant and asks the question, “What is the future of open data?” The contributors’ insights into the future of open data combine observations from five years of research about the Canadian open data community with a critical perspective on what could and should happen as open data efforts evolve.

Each of the chapters in this book addresses different issues and each is grounded in distinct disciplinary or interdisciplinary perspectives. The opening chapter reflects on the origins of open data in Canada and how it has progressed to the present date, taking into account how the Indigenous data sovereignty movement intersects with open data. A series of chapters address some of the pitfalls and opportunities of open data and consider how the changing data context may impact sources of open data, limits on open data, and even liability for open data. Another group of chapters considers new landscapes for open data, including open data in the global South, the data priorities of local governments, and the emerging context for rural open data…(More)”.

Data for an Inclusive Economic Recovery


Report by the National Skills Coalition: “A truly inclusive economic recovery means that the workers and businesses who were most impacted by this pandemic, as well as workers who have been held back by structural barriers of discrimination or lack of opportunity, are empowered to equitably participate in and benefit from the economy’s expansion and restructuring. 

But we need data on how different workers and businesses are faring in the recovery, so we can hold policymakers accountable to equitable outcomes. Disparities and inequities in skills training programs can only be eliminated if there is high-quality information on program outcomes available to practitioners and policymakers to assess and address equity gaps. Once we have the data, we can use it to drive the change we need! 

 Data for an Inclusive Economic Recovery provides recommendations on how to measure and report on what really matters to help diminish structural inequities and to shape implementation of federal recovery investments as well as new state and federal workforce investments…  

Recommendations Include: 

  • Requiring that all education and skills training programs include collection of self-reported demographic characteristics of workers and learners so outcomes can be disaggregated by race, ethnicity, gender, English language proficiency, income, and geography;
  • Ensuring participants of skills training programs know what demographic characteristics are being collected about them, who will have access to personally identifiable information, and how their data will be used; 
  • Establishing common outcomes metrics across federal skills training programs;
  • Expanding outcomes to include those that allow policymakers to assess the quality of skills training programs and measure economic mobility along a career pathway; 
  • Ensuring equitable access to administrative data; 
  • Mandating public reporting on skills training and workforce investment outcomes; and
  • Providing sufficient funding for linked education and workforce data systems…(More)”.
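
As a minimal illustration of the disaggregated outcome reporting the report’s first recommendation calls for, the Python sketch below groups hypothetical training-program records by self-reported demographic fields and computes a completion rate for each group. The column names and records are invented for illustration, not drawn from any real program data.

```python
# Illustrative only: disaggregating a training-program outcome by self-reported
# demographics. Column names and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "race_ethnicity": ["Black", "White", "Latino", "Black", "White", "Latino"],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "completed": [1, 1, 0, 1, 0, 1],
})

# Completion rate and participant count for each demographic group.
summary = (
    records.groupby(["race_ethnicity", "gender"])["completed"]
    .agg(completion_rate="mean", participants="count")
    .reset_index()
)
print(summary)
```

Equity gaps show up as differences in the completion_rate column across groups; the same pattern extends to wage gains, credential attainment or any other outcome metric.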

The Labor Market Impacts of Technological Change: From Unbridled Enthusiasm to Qualified Optimism to Vast Uncertainty


NBER Working Paper by David Autor: “This review considers the evolution of economic thinking on the relationship between digital technology and inequality across four decades, encompassing four related but intellectually distinct paradigms, which I refer to as the education race, the task polarization model, the automation-reinstatement race, and the era of Artificial Intelligence uncertainty. The nuance of economic understanding has improved across these epochs. Yet, traditional economic optimism about the beneficent effects of technology for productivity and welfare has eroded as understanding has advanced. Given this intellectual trajectory, it would be natural to forecast an even darker horizon ahead. I refrain from doing so because forecasting the “consequences” of technological change treats the future as a fate to be divined rather than an expedition to be undertaken. I conclude by discussing opportunities and challenges that we collectively face in shaping this future….(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page…. The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.

Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication


Open Access book by Robert W. Gehl, and Sean T Lawson: “Manipulative communication—from early twentieth-century propaganda to today’s online con artistry—examined through the lens of social engineering. The United States is awash in manipulated information about everything from election results to the effectiveness of medical treatments. Corporate social media is an especially good channel for manipulative communication, with Facebook a particularly willing vehicle for it. In Social Engineering, Robert Gehl and Sean Lawson show that online misinformation has its roots in earlier techniques: mass social engineering of the early twentieth century and interpersonal hacker social engineering of the 1970s, converging today into what they call “masspersonal social engineering.” As Gehl and Lawson trace contemporary manipulative communication back to earlier forms of social engineering, possibilities for amelioration become clearer.

The authors show how specific manipulative communication practices are a mixture of information gathering, deception, and truth-indifferent statements, all with the instrumental goal of getting people to take actions the social engineer wants them to. Yet the term “fake news,” they claim, reduces everything to a true/false binary that fails to encompass the complexity of manipulative communication or to map onto many of its practices. They pay special attention to concepts and terms used by hacker social engineers, including the hacker concept of “bullshitting,” which the authors describe as a truth-indifferent mix of deception, accuracy, and sociability. They conclude with recommendations for how society can undermine masspersonal social engineering and move toward healthier democratic deliberation…(More)”.