Coding Democracy: How Hackers Are Disrupting Power, Surveillance, and Authoritarianism


Book by Maureen Webb: “Hackers have a bad reputation, as shady deployers of bots and destroyers of infrastructure. In Coding Democracy, Maureen Webb offers another view. Hackers, she argues, can be vital disruptors. Hacking is becoming a practice, an ethos, and a metaphor for a new wave of activism in which ordinary citizens are inventing new forms of distributed, decentralized democracy for a digital era. Confronted with concentrations of power, mass surveillance, and authoritarianism enabled by new technology, the hacking movement is trying to “build out” democracy into cyberspace.

Webb travels to Berlin, where she visits the Chaos Communication Camp, a flagship event in the hacker world; to Silicon Valley, where she reports on the Apple-FBI case, the significance of Russian troll farms, and the hacking of tractor software by desperate farmers; to Barcelona, to meet the hacker group XNet, which has helped bring nearly 100 prominent Spanish bankers and politicians to justice for their role in the 2008 financial crisis; and to Harvard and MIT, to investigate the institutionalization of hacking. Webb describes an amazing array of hacker experiments that could dramatically change the current political economy. These ambitious hacks aim to displace such tech monoliths as Facebook and Amazon; enable worker cooperatives to kill platforms like Uber; give people control over their data; automate trust; and provide citizens a real say in governance, along with the capacity to reach consensus. Coding Democracy is not just another optimistic declaration of technological utopianism; instead, it provides the tools for an urgently needed upgrade of democracy in the digital era….(More)”.

Data Literacy in Government: How Are Agencies Enhancing Data Skills?


Randy Barrett at FedTech: “The federal government is vast, and the challenge of understanding its oceans of data grows daily. Rather than hiring thousands of new experts, agencies are moving to train existing employees on how to handle the new frontier.

Data literacy is now a common buzzword, spurred by the publication of the Federal Data Strategy 2020 Action Plan last year and the growing empowerment of chief data officers in the government. The document outlines a multiyear, holistic approach to government information that includes building a culture that values data, encouraging strong data management and protection, and promoting its efficient and appropriate use.

“While the Federal government leads globally in many instances in developing and providing data about the United States and the world, it lacks a robust, integrated approach to using data to deliver on mission, serve the public and steward resources,” the plan notes.

A key pillar of the plan is to “identify opportunities to increase staff data skills,” and it directs all federal agencies to undertake a gap analysis of skills to see where the weaknesses and needs lie….

The Department of Health and Human Services launched its Data Science CoLab in 2017 to boost basic and intermediate data skills. The collaborative program is the agency’s first try at far-reaching, cohort-based data-skills training. In addition to data analytics skills, HHS is currently training hundreds of employees to write Python and R.

“Demand for a seat in the Data Science CoLab has grown approximately 800 percent in the past three years, a testament to its success,” says Bishen Singh, a senior adviser in the Office of the Assistant Secretary for Health. “Beyond skill growth, it has led to incredible time and cost savings, as well as internal career growth for past participants across the department.”

The National Science Foundation was less successful with its Data Science and Data Certification Pilot, which had a class of 10 participants from various federal agencies. The workers were trained in advanced analytics techniques, with a focus on applying data tools to uncover meaning and solve Big Data challenges. However, the vendor curriculum used general data sets rather than agency-specific ones.

“As a result, participants found it more difficult to apply their learnings directly to real-world scenarios,” notes the CDO Council’s “Data Skill Training Program: Case Studies” report. The learning modules were mostly virtual and self-paced. Communication was poor with the vendor, and employees began to lag in completing their coursework. The pilot was discontinued.

Most of the training pilot programs were launched as the pandemic closed down government offices. The shift to virtual learning made progress difficult for some students. Another key lesson: Allow workers to use their new skills quickly, while they’re fresh….(More)”.

What Should Happen to Our Data When We Die?


Adrienne Matei at the New York Times: “The new Anthony Bourdain documentary, “Roadrunner,” is one of many projects dedicated to the larger-than-life chef, writer and television personality. But the film has drawn outsize attention, in part because of its subtle reliance on artificial intelligence technology.

Using several hours of Mr. Bourdain’s voice recordings, a software company created 45 seconds of new audio for the documentary. The A.I. voice sounds just like Mr. Bourdain speaking from the great beyond; at one point in the movie, it reads an email he sent before his death by suicide in 2018.

“If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Morgan Neville, the director, said in an interview with The New Yorker. “We can have a documentary-ethics panel about it later.”

The time for that panel may be now. The dead are being digitally resurrected with growing frequency: as 2-D projections, 3-D holograms, C.G.I. renderings and A.I. chatbots….(More)”.

An Obsolete Paradigm


Blogpost by Paul Wormelli: “…Our national system of describing the extent of crime in the U.S. is broken beyond repair and deserves to be replaced by a totally new paradigm (system). 

Since 1930, we have relied on the metrics generated by the Uniform Crime Reporting (UCR) Program to describe crime in the U.S., but it simply does not do so, even with its evolution into the National Incident-Based Reporting System (NIBRS). Criminologists have long recognized the limited scope of the UCR summary crime data, leading to the creation of the National Crime Victimization Survey (NCVS) and other supplementary crime data measurement vehicles. However, despite these measures, the United States still has no comprehensive national data on the amount of crime that has occurred. Even after decades of collecting data, the 1967 Presidential Crime Commission report on the Challenge of Crime in a Free Society lamented the absence of sound and complete data on crime in the U.S., and called for the creation of a National Crime Survey (NCS) that eventually led to the creation of the NCVS. Since then, we have slowly attempted to make improvements that will lead to more robust data. Only in 2021 did the FBI end UCR summary-based crime data collection and move to NIBRS crime data collection on a national scale.

Admittedly, the shift to NIBRS will unleash a sea change in how we analyze crime data and use it for decision making. However, it still lacks the completeness of national crime reporting. In the landmark study of the National Academy of Sciences Committee on Statistics (funded by the FBI and the Bureau of Justice Statistics to make recommendations on modernizing crime statistics), the panel members grappled with this reality and called out the absence of national statistics on crime that would fully inform policymaking on this critical subject….(More)”

The coloniality of collaboration: sources of epistemic obedience in data-intensive astronomy in Chile


Paper by Sebastián Lehuedé: “Data collaborations have gained currency over the last decade as a means for data- and skills-poor actors to thrive as a fourth paradigm takes hold in the sciences. Against this backdrop, this article traces the emergence of a collaborative subject position that strives to establish reciprocal and technical-oriented collaborations so as to catch up with the ongoing changes in research.

Combining insights from the modernity/coloniality group, political theory and science and technology studies, the article argues that this positionality engenders epistemic obedience by bracketing off critical questions regarding with whom and for whom knowledge is generated. In particular, a dis-embedding of the data producers, the erosion of local ties, and a data conformism are identified as fresh sources of obedience impinging upon the capacity to conduct research attuned to the needs and visions of the local context. A discursive-material analysis of interviews and field notes stemming from the case of astronomy data in Chile is conducted, examining the vision of local actors aiming to gain proximity to the mega observatories producing vast volumes of data in the Atacama Desert.

Given that these observatories are predominantly under the control of organisations from the United States and Europe, the adoption of a collaborative stance is now seen as the best means to ensure skills and technology transfer to local research teams. Delving into the epistemological dimension of data colonialism, this article warns that an increased emphasis on collaboration runs the risk of reproducing planetary hierarchies in times of data-intensive research….(More)”.

Human behaviour: what scientists have learned about it from the pandemic


Stephen Reicher at The Conversation: “During the pandemic, a lot of assumptions were made about how people behave. Many of those assumptions were wrong, and they led to disastrous policies.

Several governments worried that their pandemic restrictions would quickly lead to “behavioural fatigue” so that people would stop adhering to restrictions. In the UK, the prime minister’s former chief adviser Dominic Cummings recently admitted that this was the reason for not locking down the country sooner.

Meanwhile, former health secretary Matt Hancock revealed that the government’s failure to provide financial and other forms of support for people to self-isolate was down to their fear that the system “might be gamed”. He warned that people who tested positive may then falsely claim that they had been in contact with all their friends, so they could all get a payment.

These examples show just how deeply some governments distrust their citizens. As if the virus was not enough, the public was portrayed as an additional part of the problem. But is this an accurate view of human behaviour?

The distrust is based on two forms of reductionism – describing something complex in terms of its fundamental constituents. The first is limiting psychology to the characteristics – and more specifically the limitations – of individual minds. In this view the human psyche is inherently flawed, beset by biases that distort information. It is seen as incapable of dealing with complexity, probability and uncertainty – and tending to panic in a crisis.

This view is attractive to those in power. By emphasising the inability of people to govern themselves, it justifies the need for a government to look after them. Many governments subscribe to this view, having established so-called nudge units – behavioural science teams tasked with subtly manipulating people to make the “right” decisions, without them realising why, from eating less sugar to filing their taxes on time. But it is becoming increasingly clear that this approach is limited. As the pandemic has shown, it is particularly flawed when it comes to behaviour in a crisis.

In recent years, research has shown that the notion of people panicking in a crisis is something of a myth. People generally respond to crises in a measured and orderly way – they look after each other.

The key factor behind this behaviour is the emergence of a sense of shared identity. This extension of the self to include others helps us care for those around us and expect support from them. Resilience cannot be reduced to the qualities of individual people. It tends to be something that emerges in groups.

Another type of reductionism that governments adopt is “psychologism” – when you reduce the explanation of people’s behaviour to just psychology…(More)”.

The miracle of the commons


Chapter by Michelle Nijhuis: “In December 1968, the ecologist and biologist Garrett Hardin had an essay published in the journal Science called ‘The Tragedy of the Commons’. His proposition was simple and unsparing: humans, when left to their own devices, compete with one another for resources until the resources run out. ‘Ruin is the destination toward which all men rush, each pursuing his own best interest,’ he wrote. ‘Freedom in a commons brings ruin to all.’ Hardin’s argument made intuitive sense, and provided a temptingly simple explanation for catastrophes of all kinds – traffic jams, dirty public toilets, species extinction. His essay, widely read and accepted, would become one of the most-cited scientific papers of all time.

Even before Hardin’s ‘The Tragedy of the Commons’ was published, however, the young political scientist Elinor Ostrom had proven him wrong. While Hardin speculated that the tragedy of the commons could be avoided only through total privatisation or total government control, Ostrom had witnessed groundwater users near her native Los Angeles hammer out a system for sharing their coveted resource. Over the next several decades, as a professor at Indiana University Bloomington, she studied collaborative management systems developed by cattle herders in Switzerland, forest dwellers in Japan, and irrigators in the Philippines. These communities had found ways of both preserving a shared resource – pasture, trees, water – and providing their members with a living. Some had been deftly avoiding the tragedy of the commons for centuries; Ostrom was simply one of the first scientists to pay close attention to their traditions, and analyse how and why they worked.

The features of successful systems, Ostrom and her colleagues found, include clear boundaries (the ‘community’ doing the managing must be well-defined); reliable monitoring of the shared resource; a reasonable balance of costs and benefits for participants; a predictable process for the fast and fair resolution of conflicts; an escalating series of punishments for cheaters; and good relationships between the community and other layers of authority, from household heads to international institutions….(More)”.

Household Financial Transaction Data


Paper by Scott R. Baker & Lorenz Kueng: “The growth of the availability and use of detailed household financial transaction microdata has dramatically expanded the ability of researchers to understand both household decision-making as well as aggregate fluctuations across a wide range of fields. This class of transaction data is derived from a myriad of sources including financial institutions, FinTech apps, and payment intermediaries. We review how these detailed data have been utilized in finance and economics research and the benefits they enable beyond more traditional measures of income, spending, and wealth. We discuss the future potential for this flexible class of data in firm-focused research, real-time policy analysis, and macro statistics….(More)”.

The Inevitable Weaponization of App Data Is Here


Joseph Cox at VICE: “…After years of warning from researchers, journalists, and even governments, someone used highly sensitive location data from a smartphone app to track and publicly harass a specific person. In this case, Catholic Substack publication The Pillar said it used location data ultimately tied to Grindr to trace the movements of a priest, and then outed him publicly as potentially gay without his consent. The Washington Post reported on Tuesday that the outing led to his resignation….

The data itself didn’t contain each mobile phone user’s real name, but The Pillar and its partner were able to pinpoint which device belonged to Burrill by observing one that appeared at the USCCB staff residence and headquarters, at locations of meetings that he was in, as well as at his family lake house and an apartment that has him listed as a resident. In other words, they managed to, as experts have long said is easy to do, unmask this specific person and his movements across time from a supposedly anonymous dataset.

A Grindr spokesperson told Motherboard in an emailed statement that “Grindr’s response is aligned with the editorial story published by the Washington Post which describes the original blog post from The Pillar as homophobic and full of unsubstantiated innuendo. The alleged activities listed in that unattributed blog post are infeasible from a technical standpoint and incredibly unlikely to occur. There is absolutely no evidence supporting the allegations of improper data collection or usage related to the Grindr app as purported.”…

“The research from The Pillar aligns to the reality that Grindr has historically treated user data with almost no care or concern, and dozens of potential ad tech vendors could have ingested the data that led to the doxxing,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “No one should be doxxed and outed for adult consenting relationships, but Grindr never treated their own users with the respect they deserve, and the Grindr app has shared user data to dozens of ad tech and analytics vendors for years.”…(More)”.
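The re-identification technique described in the excerpt above can be illustrated with a minimal sketch. All device IDs, place names, and thresholds here are invented for illustration; the point is simply that a device appearing at several places known to be tied to one person can be singled out of an “anonymous” dataset by a straightforward join.

```python
# Hypothetical sketch of re-identification from "anonymous" location data:
# join sightings of random device IDs against a handful of places publicly
# associated with one person. All data below is invented for illustration.

from collections import defaultdict

# Supposedly anonymous app data: (device_id, place) sightings.
sightings = [
    ("device-7f3a", "staff_residence"),
    ("device-7f3a", "headquarters"),
    ("device-7f3a", "lake_house"),
    ("device-2c9b", "headquarters"),
    ("device-2c9b", "coffee_shop"),
    ("device-91e0", "lake_house"),
]

# Places publicly known to be tied to the target person.
known_places = {"staff_residence", "headquarters", "lake_house"}

def candidate_devices(sightings, known_places, min_matches=3):
    """Return device IDs seen at at least `min_matches` distinct known places."""
    seen = defaultdict(set)
    for device, place in sightings:
        if place in known_places:
            seen[device].add(place)
    return [d for d, places in seen.items() if len(places) >= min_matches]

print(candidate_devices(sightings, known_places))  # ['device-7f3a']
```

With only three known locations, a single device stands out, which is why experts have long warned that removing names from location datasets does not make them anonymous.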

Foreign Policy by Canadians: a unique national experiment


Blogpost by James Fishkin: “…Foreign Policy by Canadians was a national field experiment (with a control group that was not invited to deliberate, but which answered the same questions before and after.) The participants and the control group matched up almost perfectly before deliberation, but after deliberation, the participants had reached their considered judgments (while the control group had hardly changed at all). YouGov recruited and surveyed an excellent sample of deliberators, nationally representative in demographics and attitudes (as judged by comparison to the control groups). The project was an attempt to use social science to give an informed and representative input to policy. It was particularly challenging in that foreign policy is an area where most of the public is less engaged and informed even than it is on domestic issues (outside of times of war or severe international crises). Hence, we would argue that Deliberative Polling is particularly appropriate as a form of public input on these topics.

This project was also distinctive in some other ways. First, all the small group discussions by the 444 nationally representative deliberators were conducted via our new video-based automated moderator platform. Developed here at Stanford with Professor Ashish Goel and the “Crowdsourced Democracy Team” in Management Science and Engineering, it facilitates many small groups of ten or so to self-moderate their discussions. It controls access to the queue for the microphone (limiting each contribution to 45 seconds), it orchestrates the discussion to move from one policy proposal to the next on the list, it periodically asks the participants if they have covered both the arguments in favor and against the proposal, it intervenes if people are being uncivil (a rare occurrence in these dialogues) and it guides the group into formulating its questions for the plenary session experts. This was only the second national application of the online platform (the first was in Chile this past year) and it was the first as a controlled experiment.
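The core mechanics described in the paragraph above (a microphone queue with a 45-second cap, and an agenda that advances from one proposal to the next) can be sketched in a few lines. This is not the Stanford platform’s actual code; the class and method names are invented for this sketch.

```python
# Minimal sketch (hypothetical, not the real platform) of the self-moderation
# logic described above: a first-come microphone queue with a fixed turn cap,
# and an agenda the group steps through one proposal at a time.

from collections import deque

TURN_LIMIT_SECONDS = 45  # each contribution is capped at 45 seconds

class SmallGroupSession:
    def __init__(self, proposals):
        self.proposals = deque(proposals)  # policy proposals still to discuss
        self.queue = deque()               # participants waiting for the mic

    def request_mic(self, participant):
        """Add a participant to the microphone queue (no double-queueing)."""
        if participant not in self.queue:
            self.queue.append(participant)

    def next_speaker(self):
        """Grant the mic to the next waiting participant, with the turn cap."""
        if not self.queue:
            return None
        return (self.queue.popleft(), TURN_LIMIT_SECONDS)

    def next_proposal(self):
        """Advance the group to the next policy proposal on the agenda."""
        return self.proposals.popleft() if self.proposals else None

session = SmallGroupSession(["Proposal A", "Proposal B"])
session.request_mic("alice")
session.request_mic("bob")
print(session.next_speaker())   # ('alice', 45)
print(session.next_proposal())  # 'Proposal A'
```

A real deployment would add the video layer, civility monitoring, and prompts to cover arguments on both sides, but the queue-and-agenda skeleton is the part that lets a group of ten self-moderate without a human facilitator.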

A second distinctive aspect of Foreign Policy by Canadians is that the agenda was formulated in both a top-down and a bottom-up manner. While a distinguished advisory group offered input on what topics were worth exploring and on the balance and accuracy of the materials, those materials were also vetted by chapters of the Canadian International Council in different parts of the country. Those meetings deliberated about how the draft materials could be improved. What was left out? Were the most important arguments on either side presented? The meetings of CIC chapters agreed on recommendations for revision and those recommendations were reflected in the final documents and proposals for discussion. I think this is “deliberative crowdsourcing” because the groups had to agree on their most important recommendations based on shared discussion. These meetings were also conducted with our automated deliberation platform….(More)”.