Christopher M. Kelty at Current Anthropology: “Participation is a concept and practice that governs many aspects of new media and new publics. There are a wide range of attempts to create more of it and a surprising lack of theorization. In this paper I attempt to present a “grammar” of participation by looking at three cases where participation has been central in the contemporary moment of new, social media and the Internet as well as in the past, stretching back to the 1930s: citizen participation in public administration, workplace participation, and participatory international development. Across these three cases I demonstrate that the grammar of participation shifts from a language of normative enthusiasm to one of critiques of co-optation and bureaucratization and back again. I suggest that this perpetually aspirational logic results in the problem of “too much democracy in all the wrong places.”…(More)”
Scaling accountability through vertically integrated civil society policy monitoring and advocacy
Working paper by Jonathan Fox: “…argues that the growing field of transparency, participation and accountability (TPA) needs a conceptual reboot, to address the limited traction gained so far on the path to accountability. To inform more strategic approaches and to identify the drivers of more sustainable institutional change, fresh analytical work is needed.
The paper makes the case for one among several possible strategic approaches by distinguishing between ‘scaling up’ and ‘taking scale into account’, going on to examine several different ways that ‘scale’ is used in different fields.
It goes on to explain and discuss the strategy of vertical integration, which involves multi-level coordination by civil society organisations of policy monitoring and advocacy, grounded in broad pro-accountability constituencies. Vertical integration is discussed from several different angles, from its roots in political economy to its relationship with citizen voice, its capacity for multi-directional communication, and its relationship with feedback loops.
To spell out how this strategy can empower pro-accountability actors, the paper contrasts varied terms of engagement between state and society, proposing a focus on collaborative coalitions as an alternative to the conventional dichotomy between confrontation and constructive engagement.
The paper continues by reviewing existing multi-level approaches, summarising nine cases – three each in the Philippines, Mexico and India – to demonstrate what can be revealed when TPA initiatives are seen through the lens of scale.
It concludes with a set of broad analytical questions for discussion, followed by testable hypotheses proposed to inform future research agendas.(Download the paper here, and a short summary here)…(More)”
OpenStreetMap in Israel and Palestine – ‘Game changer’ or reproducer of contested cartographies?
Christian Bittner in Political Geography: “In Israel and Palestine, map-making practices were always entangled with contradictory spatial identities and imbalanced power resources. Although an Israeli narrative has largely dominated the ‘cartographic battlefield’, the latest chapter of this story has not been written yet: collaborative forms of web 2.0 cartographies have restructured power relations in mapping practices and challenged traditional monopolies on map and spatial data production. Thus, we can expect web 2.0 cartographies to be a ‘game changer’ for cartography in Palestine and Israel.
In this paper, I review this assumption with the popular example of OpenStreetMap (OSM). Following a mixed methods approach, I comparatively analyze the genesis of OSM in Israel and Palestine. Although nationalist motives do not play a significant role on either side, it turns out that the project is dominated by Israeli and international mappers, whereas Palestinians have hardly contributed to OSM. As a result, social fragmentations and imbalances between Israel and Palestine are largely reproduced through OSM data. Discussing the low involvement of Palestinians, I argue that OSM’s ground truth paradigm might be a watershed for participation. Presumably, the project’s data are less meaningful in some local contexts than in others. Moreover, the seemingly apolitical approach to map only ‘facts on the ground’ reaffirms the present spatio-social order and thus the power relations behind it. Within a Palestinian narrative, however, many aspects of the factual material space might appear not as neutral physical objects but as results of suppression, in which case, any ‘accurate’ spatial representation, such as OSM, becomes objectionable….(More)”
The “Open Government Reform” Movement
Paper by Suzanne J. Piotrowski on “The Case of the Open Government Partnership and U.S. Transparency Policies”: “Open government initiatives, which include not only transparency but also participation and collaboration policies, have become a major administrative reform. As such, these initiatives are gaining cohesiveness in the literature. President Obama supported open government through a range of policies including the Open Government Partnership (OGP), a multinational initiative. The OGP requires member organizations to develop open government national action plans, which are used as the basis for my analysis. To frame this paper, I use and expand upon David Heald’s directions and varieties of transparency framework. A content analysis of the 62 commitments in the US Second Open Government National Action Plan was conducted. The analysis provides two findings of note: First, the traditional view of transparency was indeed the most prevalent in the policies proposed. In that respect, not much has changed, even with the OGP’s emphasis on a range of approaches. Second, openness among and between agencies played a larger than expected role. While the OGP pushed an array of administrative reforms, the initiative had limited impact on the type of policies that were proposed and enacted. In sum, the OGP is an administrative reform that was launched with great fanfare but has had limited influence in the US context. More research needs to be conducted to determine whether the “open government reform” movement as a whole suffers from such problems in implementation….(More)”
Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment
“I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior….(More)”
Comparing resistance to open data performance measurement
Paper by Gregory Michener and Otavio Ritter in Public Administration: “Much is known about governmental resistance to disclosure laws, less so about multi-stakeholder resistance to open data. This study compares open data initiatives within the primary and secondary school systems of Brazil and the UK, focusing on stakeholder resistance and corresponding policy solutions. The analytical framework is based on the ‘Three-Ps’ of open data resistance to performance metrics, corresponding to professional, political, and privacy-related concerns. Evidence shows that resistance is highly nuanced, as stakeholders alternately serve as both principals and agents. School administrators, for example, are simultaneously principals to service providers and teachers, and at once agents to parents and politicians. Relying on a different-systems comparison, in-depth interviews, and newspaper content analyses, we find that similar stakeholders across countries demonstrate strikingly divergent levels of resistance. In overcoming stakeholder resistance – across socioeconomic divides – context-conscientious ‘data-informed’ evaluations may promote greater acceptance than narrowly ‘data-driven’ performance measurements…(More)”
Is Government Really Broken?
Cary Coglianese: “The widespread public angst that surfaced in the 2016 presidential election revealed how many Americans believe their government has become badly broken. Given the serious problems that continue to persist in society — crime, illiteracy, unemployment, poverty, discrimination, to name a few — widespread beliefs in a governmental breakdown are understandable. Yet such a breakdown is actually far from self-evident. In this paper, I explain how diagnoses of governmental performance depend on the perspective from which current conditions in the country are viewed. Certainly when judged against a standard of perfection, America has a long way to go. But perfection is no meaningful basis upon which to conclude that government has broken down. I offer and assess three alternative, more realistic benchmarks of government’s performance: (1) reliance on a standard of acceptable imperfection; (2) comparisons with other countries or time periods; and (3) the use of counterfactual inferences. Viewed from these perspectives, the notion of an irreparable governmental failure in the United States becomes quite questionable. Although serious economic and social shortcomings certainly exist, the nation’s strong economy and steadily improving living conditions in recent decades simply could not have occurred if government were not functioning well. Rather than embracing despair and giving in to cynicism and resignation, citizens and their leaders would do better to treat the nation’s problems as conditions of disrepair needing continued democratic engagement. It remains possible to achieve greater justice and better economic and social conditions for all — but only if we, the people, do not give up on the pursuit of these goals….(More)”
What’s wrong with big data?
James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”
This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.
“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.
The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….
“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature entitled “Diagnosing the decline in pharmaceutical R&D efficiency” the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low-hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)”
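Read literally, that trend is a simple exponential decay. As a back-of-the-envelope gloss (ours, not the Nature paper’s), the number of drugs approved per billion dollars of R&D spending at year $t$ is roughly

$$E(t) \approx E(t_0)\cdot 2^{-(t - t_0)/9},$$

so six decades of halving every nine years would leave efficiency at about $2^{-60/9} \approx 1/100$ of its starting level.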
Open Data Collection (PLOS)
To create this Open Data Collection, we exhaustively searched for relevant articles published across PLOS that discuss open data in some way. Then, in collaboration with our external advisor, Melissa Haendel, we have selected 26 of those articles with the aim to highlight a broad scope of research articles, guidelines, and commentaries about data sharing, data practices, and data policies from different research fields. Melissa has written an engaging blog post detailing the rubric and reasons behind her selections….(More)”
Understanding the four types of AI, from reactive robots to self-aware beings
The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.
Type I AI: Reactive machines
The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the most optimal moves from among the possibilities.
But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
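To make the “reactive” label concrete, here is a minimal sketch of such an agent in Python. It is purely illustrative – not Deep Blue’s actual algorithm, which ran deep game-tree search on specialised hardware – and the `legal_moves`, `apply_move` and `evaluate` helpers are hypothetical stand-ins for a real game engine. The point is only that the decision depends on nothing but the current position.

```python
# Illustrative sketch of a purely reactive agent: no memory, no model of the past.
# The helper functions are hypothetical stand-ins for a real game engine.

def choose_move(state, legal_moves, apply_move, evaluate):
    """Pick the legal move whose resulting position scores best right now.

    state       -- the current board position
    legal_moves -- returns the moves available from `state`
    apply_move  -- returns the position reached after playing a move
    evaluate    -- static score of a position (higher is better for the agent)
    """
    best_move, best_score = None, float("-inf")
    for move in legal_moves(state):
        score = evaluate(apply_move(state, move))  # judged from the present position only
        if score > best_score:
            best_move, best_score = move, score
    return best_move  # nothing about this choice is stored for next time
```

A lookahead search like Deep Blue’s fits the same mould: however deeply it searches, everything it considers is derived from the board as it stands at that moment.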
Type II AI: Limited memory
This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.
These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…
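A minimal sketch of this kind of transient memory, under the simplifying (and hypothetical) assumption that another car is observed as timestamped x/y positions, might look like the following. The class and its parameters are illustrative, not drawn from any real self-driving stack; the point is that only a short, rolling window of the past is kept, and nothing is learned from it.

```python
# Hypothetical sketch of "limited memory": a short, rolling window of observations
# used to estimate another car's speed and heading, then discarded.
from collections import deque
import math

class TransientTracker:
    def __init__(self, window_size=10):
        # only the last `window_size` observations are retained; none are learned from
        self.history = deque(maxlen=window_size)

    def observe(self, t, x, y):
        """Record one timestamped position (t in seconds, x/y in metres)."""
        self.history.append((t, x, y))

    def estimate(self):
        """Return (speed, heading) from the oldest and newest retained observations."""
        if len(self.history) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        if dt <= 0:
            return None
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        heading = math.atan2(y1 - y0, x1 - x0)
        return speed, heading

# Example: the estimate informs one lane-change decision, then the window rolls on.
tracker = TransientTracker()
tracker.observe(0.0, 0.0, 0.0)
tracker.observe(1.0, 15.0, 0.5)
print(tracker.estimate())  # roughly 15 m/s, heading ~0.03 rad
```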
Type III AI: Theory of mind
We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and to discuss the types of representations machines need to form, and what those representations need to be about….
Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This capacity was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.
Type IV AI: Self-awareness
The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….
While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”