Torin Monahan and David Murakami Wood’s Surveillance Studies is a broad-ranging reader that provides a comprehensive overview of the dynamic field. In fifteen sections, the book features selections from key historical and theoretical texts, samples of the best empirical research done on surveillance, introductions to debates about privacy and power, and cutting-edge treatments of art, film, and literature. While the disciplinary perspectives and foci of scholars in surveillance studies may be diverse, there is coherence and agreement about core concepts, ideas, and texts. This reader outlines these core dimensions and highlights various differences and tensions. In addition to a thorough introduction that maps the development of the field, the volume offers helpful editorial remarks for each section and brief prologues that frame the included excerpts. …(More)”.
Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?
Paper by H.G. (Haiko) van der Voort et al.: “Big data promises to transform public decision-making for the better by making it more responsive to actual needs and policy effects. However, much recent work on big data in public decision-making assumes a rational view of decision-making, which has been much criticized in the public administration debate.
In this paper, we apply this view, and a more political one, to the context of big data and offer a qualitative study. We question the impact of big data on decision-making, realizing that big data – including its new methods and functions – must inevitably encounter existing political and managerial institutions. By studying two illustrative cases of big data use processes, we explore how these two worlds meet. Specifically, we look at the interaction between data analysts and decision makers.
In this we distinguish between a rational view and a political view, and between an information logic and a decision logic. We find that big data provides ample opportunities for both analysts and decision makers to do a better job, but this doesn’t necessarily imply better decision-making, because big data also provides opportunities for actors to pursue their own interests. Big data enables both data analysts and decision makers to act as autonomous agents rather than as links in a functional chain. Therefore, big data’s impact cannot be interpreted only in terms of its functional promise; it must also be acknowledged as a phenomenon set to impact our policymaking institutions, including their legitimacy….(More)”.
Houston’s $6 Billion Census Problem: Frightened Immigrants
Natasha Rausch at Bloomberg: “At Houston’s City Hall last week, Mayor Sylvester Turner gathered with company CEOs, university professors, police officers, politicians and local judges to discuss a $6 billion problem they all have in common: the 2020 census.
City officials and business leaders are worried about people like 21-year-old Ana Espinoza, a U.S. citizen by birth who lives with undocumented relatives. Espinoza has no intention of answering the census because she worries it could expose her family and get them deported….
Getting an accurate count has broad economic implications across the city, said Laura Murillo, chief executive officer of the Hispanic Chamber. “For everyone, the census is important. It doesn’t matter if you’re a Republican or Democrat, black or white or green.”…
For growing businesses, the census is crucial for understanding the population they’re serving in different regions. Enterprise Rent-A-Car used the 2010 census to help diversify the company’s employee base. The data prompted Enterprise to staff a new location in Houston with Spanish-speaking employees to better serve area customers, said the company’s human resources manager Phil Dyson.
“It’s been one of our top locations,” he said.
Doing the Math
Texas stands to lose at least $1,161 in federal funding for each person not counted, according to a March report by Andrew Reamer, a research professor at the George Washington Institute of Public Policy. Multiplied by the estimated 506,000 unauthorized immigrants who live in the nation’s fourth-largest city, that puts at stake about $6 billion for Houston over the 10 years to which the census applies.
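The headline figure can be checked directly from the article's own numbers; a quick back-of-the-envelope computation:

```python
# Back-of-the-envelope check of the article's figures.
per_person_per_year = 1_161   # federal funding lost per uncounted person, per year
uncounted = 506_000           # estimated unauthorized immigrants in Houston
years = 10                    # a census count applies for a decade

total = per_person_per_year * uncounted * years
print(f"${total:,}")  # ≈ $5.9 billion, the article's "about $6 billion"
```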
That’s just for programs such as Medicare and Medicaid. The potential loss is even larger when grants are taken into account for items like highways and community development, he said…(More)”.
The causal effect of trust
Paper by Björn Bartling, Ernst Fehr, David Huffman and Nick Netzer: “Trust affects almost all human relationships – in families, organizations, markets and politics. However, identifying the conditions under which trust, defined as people’s beliefs in the trustworthiness of others, has a causal effect on the efficiency of human interactions has proven to be difficult. We show experimentally and theoretically that trust indeed has a causal effect. The duration of the effect depends, however, on whether initial trust variations are supported by multiple equilibria.
We study a repeated principal-agent game with multiple equilibria and document empirically that an efficient equilibrium is selected if principals believe that agents are trustworthy, while players coordinate on an inefficient equilibrium if principals believe that agents are untrustworthy. Yet, if we change the institutional environment such that there is a unique equilibrium, initial variations in trust have short-run effects only. Moreover, if we weaken contract enforcement in the latter environment, exogenous variations in trust do not even have a short-run effect. The institutional environment thus appears to be key for whether trust has causal effects and whether the effects are transient or persistent…(More)”.
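The multiple-equilibria logic can be illustrated with a toy one-shot trust game (the payoff numbers and action labels below are hypothetical illustrations, not the paper's experimental design): both an efficient "trust/honor" outcome and an inefficient "distrust/abuse" outcome can be self-enforcing, so beliefs about trustworthiness determine which one players coordinate on.

```python
from itertools import product

# Hypothetical (principal, agent) payoffs; illustrative only.
payoffs = {
    ("trust", "honor"):    (2, 2),   # efficient outcome
    ("trust", "abuse"):    (-1, 1),
    ("distrust", "honor"): (0, 0),
    ("distrust", "abuse"): (0, 1),   # inefficient outcome
}
P_ACTIONS, A_ACTIONS = ("trust", "distrust"), ("honor", "abuse")

def pure_nash_equilibria():
    """Return profiles where neither player gains by deviating unilaterally."""
    eqs = []
    for p, a in product(P_ACTIONS, A_ACTIONS):
        up, ua = payoffs[(p, a)]
        if (all(payoffs[(q, a)][0] <= up for q in P_ACTIONS)
                and all(payoffs[(p, b)][1] <= ua for b in A_ACTIONS)):
            eqs.append((p, a))
    return eqs

print(pure_nash_equilibria())  # two equilibria: one efficient, one not
```

With these payoffs, which equilibrium is played depends on the principal's belief about the agent, mirroring the paper's point that trust variations matter when multiple equilibria exist.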
How Big Tech Is Working With Nonprofits and Governments to Turn Data Into Solutions During Disasters
Kelsey Sutton at Adweek: “As Hurricane Michael approached the Florida Panhandle, the Florida Division of Emergency Management tapped a tech company for help.
Over the past year, Florida’s DEM has worked closely with GasBuddy, a Boston-based app that uses crowdsourced data to identify fuel prices and inform first responders and the public about fuel availability or power outages at gas stations during storms. Since Hurricane Irma in 2017, GasBuddy and DEM have worked together to survey affected areas, helping Florida first responders identify how best to respond to petroleum shortages. With help from the location intelligence company Cuebiq, GasBuddy also provides estimated wait times at gas stations during emergencies.
DEM first noticed GasBuddy’s potential in 2016, when the app was collecting and providing data about fuel availability following a pipeline leak.
“DEM staff recognized how useful such information would be to Florida during any potential future disasters, and reached out to GasBuddy staff to begin a relationship,” a spokesperson for the Florida State Emergency Operations Center explained….

Stefaan Verhulst, co-founder and chief research and development officer at the Governance Laboratory at New York University, advocates for private corporations to partner with public institutions and NGOs. Private data collected by corporations is richer, more granular and more up-to-date than data collected through traditional social science methods, making that data useful for noncorporate purposes like research, Verhulst said. “Those characteristics are extremely valuable if you are trying to understand how society works,” Verhulst said….(More)”.
Declaration on Ethics and Data Protection in Artificial Intelligence
Declaration: “…The 40th International Conference of Data Protection and Privacy Commissioners considers that any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values, and shall provide solutions to allow individuals to maintain control and understanding of artificial intelligence systems.
The Conference therefore endorses the following guiding principles, as its core values to preserve human rights in the development of artificial intelligence:
- Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:
- considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes, and that the data are used in a way that is not incompatible with the original purpose of their collection,
- taking into consideration not only the impact that the use of artificial intelligence may have on the individual, but also the collective impact on groups and on society at large,
- ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses,…(More)
When AI Misjudgment Is Not an Accident
Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.
But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
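A toy sketch of what such a poisoning attack looks like mechanically (a deliberately simple one-dimensional classifier, not any real system): injecting a handful of mislabeled points shifts the learned decision boundary so that genuine cases are misclassified.

```python
# A deliberately simple 1-D classifier: the decision threshold is the
# midpoint between the two class means.
def fit_threshold(positives, negatives):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(positives) + mean(negatives)) / 2

clean_pos = [8.0, 9.0, 10.0]   # e.g. "approve" training examples
clean_neg = [0.0, 1.0, 2.0]    # e.g. "deny" training examples

clean_thr = fit_threshold(clean_pos, clean_neg)        # 5.0

# An attacker smuggles two extreme points, mislabeled as "deny".
poisoned_neg = clean_neg + [20.0, 20.0]
poisoned_thr = fit_threshold(clean_pos, poisoned_neg)  # 8.8

# A legitimate case scoring 7.0 was approved before and is denied after.
print(clean_thr, poisoned_thr)
```

Real attacks on high-dimensional models are subtler, but the mechanism is the same: the model faithfully learns whatever bias the data carries.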
According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.
Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.
Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.
Finally, national security threats from foreign actors could use deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.
The Lack of Decentralization of Data: Barriers, Exclusivity, and Monopoly in Open Data
Paper by Carla Hamida and Amanda Landi: “Recently, Facebook creator Mark Zuckerberg was on trial for the misuse of personal data. In 2013, the National Security Agency was exposed by Edward Snowden for invading the privacy of inhabitants of the United States by examining personal data. We see in the news examples, like the two just described, of government agencies and private companies being less than truthful about their use of our data. A related issue is that these same government agencies and private companies do not share their own data, and this creates the problem of data openness.
Government, academics, and citizens can all play a role in making data more open. Today, non-profit organizations such as the OpenData Charter, the Global Open Data Index, and the Open Data Barometer research data openness. These organizations measure openness in different ways, which leads us to ask what open data means, how one measures how open data is, who decides how open data should be, and to what extent society is affected by the availability, or lack of availability, of data. In this paper, we explore these questions with an examination of two of the non-profit organizations that study the open data problem extensively….(More)”.
This is how computers “predict the future”
Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.
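The core idea really is simple enough to sketch with nothing but the standard library: train many weak trees (here, one-split "stumps") on bootstrap resamples, each examining a randomly chosen feature, and let them vote. This is an illustrative toy, not Breiman's full algorithm (real random forests grow deep trees and randomize features at every split):

```python
import random
from collections import Counter

random.seed(0)

def make_data(n):
    """Synthetic binary data: label is 1 when the two features sum above 1."""
    data = []
    for _ in range(n):
        x = (random.random(), random.random())
        data.append((x, 1 if x[0] + x[1] > 1.0 else 0))
    return data

def train_stump(sample):
    """Fit a one-split tree on one randomly chosen feature."""
    feat = random.randrange(2)                 # random feature selection
    best = (len(sample) + 1, 0.5, False)       # (errors, threshold, flip)
    for (x, _) in sample:
        thr = x[feat]
        for flip in (False, True):
            errs = sum(((xi[feat] > thr) != flip) != bool(yi)
                       for (xi, yi) in sample)
            if errs < best[0]:
                best = (errs, thr, flip)
    return (feat, best[1], best[2])

def stump_predict(stump, x):
    feat, thr, flip = stump
    return int((x[feat] > thr) != flip)

def train_forest(data, n_trees=25):
    forest = []
    for _ in range(n_trees):
        sample = [random.choice(data) for _ in data]   # bootstrap resample
        forest.append(train_stump(sample))
    return forest

def forest_predict(forest, x):
    votes = Counter(stump_predict(s, x) for s in forest)
    return votes.most_common(1)[0][0]          # majority vote

train, test = make_data(200), make_data(200)
forest = train_forest(train)
accuracy = sum(forest_predict(forest, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")        # comfortably above chance
```

Each individual stump is a poor classifier, but the ensemble's vote is markedly better, which is the property that made the method beloved.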
This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.
Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.
The role of blockchain, cryptoeconomics, and collective intelligence in building the future of justice
Blog by Federico Ast at Thomson Reuters: “Human communities of every era have had to solve the problem of social order. For this, they developed governance and legal systems. They did it with the technologies and systems of belief of their time….
A better justice system may not come from further streamlining existing processes but from fundamentally rethinking them from a first principles perspective.
In the last decade, we have witnessed how collective intelligence could be leveraged to produce an encyclopaedia like Wikipedia, a transport system like Uber, a restaurant rating system like Yelp!, and a hotel system like Airbnb. These companies innovated by crowdsourcing value creation. Instead of relying on an in-house team of restaurant critics like the Michelin Guide, Yelp! crowdsourced ratings from its users.
Satoshi Nakamoto’s invention of Bitcoin (and the underlying blockchain technology) may be seen as the next step in the rise of the collaborative economy. The Bitcoin Network proved that, given the right incentives, anonymous users could cooperate in creating and updating a distributed ledger which could act as a monetary system. A nationless system, inherently global, and native to the Internet Age.
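The "distributed ledger" at the heart of this rests on a simple data structure: each block commits to its predecessor by cryptographic hash, so any tampering with history is detectable. A minimal single-machine sketch (omitting the networking, consensus, and incentive layers that make Bitcoin actually work):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents plus the previous block's hash.
    payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """Recompute every hash and link; any edit to past data breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block):
            return False
        prev = block["hash"]
    return True

chain = []
for entry in ["alice pays bob 5", "bob pays carol 2"]:
    append_block(chain, entry)

print(verify(chain))                       # True
chain[0]["data"] = "alice pays bob 500"    # tamper with history
print(verify(chain))                       # False
```

What Nakamoto added on top of this linked structure was the incentive scheme that lets strangers agree on which chain is the valid one.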
Cryptoeconomics is a new field of study that leverages cryptography, computer science and game theory to build secure distributed systems. It is the science that underlies the incentive system of open distributed ledgers. But its potential goes well beyond cryptocurrencies.
Kleros is a dispute resolution system which relies on cryptoeconomics. It uses a system of incentives based on “focal points”, a concept developed by game theorist Thomas Schelling, winner of the 2005 Nobel Prize in Economics. Through clever mechanism design, it seeks to produce a set of incentives for randomly selected users to adjudicate different types of disputes in a fast, affordable and secure way. Users who adjudicate disputes honestly will make money. Users who try to abuse the system will lose money.
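A hedged sketch of that incentive core (the stake amounts, juror names, and redistribution rule below are illustrative assumptions, not Kleros's actual token mechanics): jurors stake a deposit, and those who vote against the eventual majority forfeit it to those who voted with it, making the honest focal point the profitable choice.

```python
from collections import Counter

def redistribute(votes, stake=10.0):
    """Jurors who vote with the majority split the deposits of those who don't."""
    tally = Counter(votes.values())
    winner = tally.most_common(1)[0][0]       # the majority ruling
    winners = [j for j, v in votes.items() if v == winner]
    losers = [j for j, v in votes.items() if v != winner]
    payoff = {j: -stake for j in losers}      # incoherent jurors lose their stake
    payoff.update({j: stake * len(losers) / len(winners) for j in winners})
    return winner, payoff

winner, payoff = redistribute({"ana": "buyer", "bo": "buyer", "cy": "seller"})
print(winner, payoff)   # the lone dissenter pays the coherent majority
```

The redistribution is zero-sum among jurors, so expecting others to vote honestly makes honest voting each juror's best response.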
Kleros does not seek to compete with governments or traditional arbitration systems, but to provide a new method that will leverage the wisdom of the crowd to resolve many disputes of the global digital economy for which existing methods fall short: e-commerce, crowdfunding and many types of small claims are among the early adopters….(More)”.