Following Fenno: Learning from Senate Candidates in the Age of Social Media and Party Polarization


David C.W. Parker at The Forum: “Nearly 40 years ago, Richard Fenno published Home Style, a seminal volume explaining how members of Congress think about and engage in the process of representation. To accomplish his task, he observed members of Congress as they crafted and communicated their representational styles to the folks back home in their districts. The book, and Fenno’s ensuing research agenda, served as a clarion call to move beyond sophisticated quantitative analyses of roll call voting and elite interviews in Washington, D.C. to comprehend congressional representation. Instead, Fenno argued, political scientists are better served by going home with members of Congress where “their perceptions of their constituencies are shaped, sharpened, or altered” (Fenno 1978, p. xiii). These perceptions of constituencies fundamentally shape what members of Congress do at home and in Washington. If members of Congress are single-minded seekers of reelection, as we often assume, then political scientists must begin with the constituent relationship essential to winning reelection. Go home, Fenno says, to understand Congress.

There are many ways constituency relationships can be understood and uncovered; the preferred method for Fenno is participant observation, which he variously terms as “soaking and poking” or “just hanging around.” Although it sounds easy enough to sit and watch, good participant observation requires many considerations (as Fenno details in a thorough appendix to Home Style). In this appendix, and in another series of essays, Fenno grapples forthrightly with the tough choices researchers must consider when watching and learning from politicians.

In this essay, I respond to Fenno’s thought-provoking methodological treatise in Home Style and the ensuing collection of musings he published as Watching Politicians: Essays on Participant Observation. I do so for three reasons: First, I wish to reinforce Fenno’s call to action. As the study of political science has matured, it has moved away from engaging with politicians in the field across the various sub-fields, favoring statistical analyses. “Everyone cites Fenno, but no one does Fenno,” I recently opined, echoing another scholar commenting on Fenno’s work (Fenno 2013, p. 2; Parker 2015, p. 246). Unfortunately, that sentiment is supported by data (Grimmer 2013, pp. 13–19; Curry 2017). Although quantitative and formal analyses have led to important insights into the study of political behavior and institutions, politics is as important to our discipline as science. And in politics, the motives and concerns of people are important to witness, not just because they add complexity and richness to our stories, but because they aid in theory generation. Fenno’s study was exploratory, but is full of key theoretical insights relevant to explaining how members of Congress understand their constituencies and the ensuing political choices they make.

Second, to “do” participant observation requires understanding the choices the methodology imposes. This necessitates that those who practice this method of discovery document and share their experiences (Lin 2000). The more the prospective participant observer can understand the size of the choice set she faces and the potential consequences at each decision point in advance, the better her odds of avoiding unanticipated consequences with both immediate and long-term research ramifications. I hope that adding my cumulative experiences to this ongoing methodological conversation will assist in minimizing both unexpected and undesirable consequences for those who follow into the field. Fenno is open about his own choices, and the difficult decisions he faced as a participant observer. Encouraging scholars to engage in participant observation is only half the battle. The other half is to encourage interested scholars to think about those same choices and methodological considerations, while acknowledging that context precludes a one-size-fits-all approach. Fenno’s choices may not be your choices – and that might be just fine depending upon your circumstances. Fenno would wholeheartedly agree.

Finally, Congress and American politics have changed considerably from when Fenno embarked on his research in Home Style. At the end of his introduction, Fenno writes that “this book is about the early to mid-1970s only. These years were characterized by the steady decline of strong national party attachments and strong local party organizations. … Had these conditions been different, House members might have behaved differently in their constituencies” (xv). Developments since Fenno put down his pen include political parties polarizing to an almost unprecedented degree, partisan attachments strengthening among voters, and technology emerging to change fundamentally how politicians engage with constituents. In light of this evolution of political culture in Washington and at home, it is worth considering the consequences for the participant-observation research approach. Many have asked me if it is still possible to do such work in the current political environment, and if so, what are the challenges facing political scientists going into the field? This essay provides some answers.

I proceed as follows: First, I briefly discuss my own foray into the world of participant observation, which occurred during the 2012 Senate race in Montana. Second, I consider two important methodological considerations raised by Fenno: access and participation as an observer. Third, I relate these two issues to a final consideration: the development of social media and the consequences of this for the participant observation enterprise. Finally, I show the perils of social science divorced from context, as demonstrated by the recent Stanford-Dartmouth mailer scandal. I conclude with not just a plea for us to pick up where Fenno has left off, but also by suggesting that more thinking like a participant observer would benefit the discipline as a whole by reminding us of our ethical obligations as researchers to each other, and to the political community that we study…(More)”.

Data Publics: Urban Protest, Analytics and the Courts


Article by Anthony McCosker and Timothy Graham in MC Journal: “There are many examples globally of the use of social media to engage publics in battles over urban development or similar issues (e.g. Fredericks and Foth). Some have asked how social media might be better used by neighborhood organisations to mobilise protest and save historic buildings, cultural landmarks or urban sites (Johnson and Halegoua). And we can only note here the wealth of research literature on social movements, protest and social media. To emphasise Gerbaudo’s point, drawing on Mattoni, we “need to account for how exactly the use of these media reshapes the ‘repertoire of communication’ of contemporary movements and affects the experience of participants” (2). For us, this also means better understanding the role that social data plays in both aiding and reshaping urban protest or arming third sector groups with evidence useful in social institutions such as the courts.

New modes of digital engagement enable forms of distributed digital citizenship, which Meikle sees as the creative political relationships that form through exercising rights and responsibilities. Associated with these practices is the transition from sanctioned, simple discursive forms of social protest in petitions, to new indicators of social engagement in more nuanced social media data and the more interactive forms of online petition platforms like change.org or GetUp (Halpin et al.). These technical forms code publics in specific ways that have implications for contemporary protest action. That is, they provide the operational systems and instructions that shape social actions and relationships for protest purposes (McCosker and Milne).

All protest and social movements are underwritten by explicit or implicit concepts of participatory publics as these are shaped, enhanced, or threatened by communication technologies. But participatory protest publics are uneven, and as Kelty asks: “What about all the people who are neither protesters nor Twitter users? In the broadest possible sense this ‘General Public’ cannot be said to exist as an actual entity, but only as a kind of virtual entity” (27). Kelty is pointing to the porous boundary between a general public and an organised public, or formal enterprise, as a reminder that we cannot take for granted representations of a public, or the public as a given, in relation to Like or follower data for instance.

If carefully gauged, the concept of data publics can be useful. To start with, the notions of publics and publicness are notoriously slippery. Baym and boyd explore the differences between these two terms, and the way social media reconfigures what “public” is. Does a Comment or a Like on a Facebook Page connect an individual sufficiently to an issues-public? As far back as the 1930s, John Dewey was seeking a pragmatic approach to similar questions regarding human association and the pluralistic space of “the public”. For Dewey, “the machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences [of human association] that the resultant public cannot identify itself” (157). To what extent, then, can we use data to constitute a public in relation to social protest in the age of data analytics?

There are numerous well-formulated approaches to studying publics in relation to social media and social networks. Social network analysis (SNA) determines publics, or communities, through links, ties and clustering, by measuring and mapping those connections and to an extent assuming that they constitute some form of sociality. Networked publics (Ito, 6) are understood as an outcome of social media platforms and practices in the use of new digital media authoring and distribution tools or platforms and the particular actions, relationships or modes of communication they afford, to use James Gibson’s sense of that term. “Publics can be reactors, (re)makers and (re)distributors, engaging in shared culture and knowledge through discourse and social exchange as well as through acts of media reception” (Ito 6). Hashtags, for example, facilitate connectivity and visibility and aid in the formation and “coordination of ad hoc issue publics” (Bruns and Burgess 3). Gray et al., following Ruppert, argue that “data publics are constituted by dynamic, heterogeneous arrangements of actors mobilised around data infrastructures, sometimes figuring as part of them, sometimes emerging as their effect”. The individuals of data publics are neither subjugated by the logics and metrics of digital platforms and data structures, nor simply sovereign agents empowered by the expressive potential of aggregated data (Gray et al.).
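The clustering move that SNA makes, turning observed ties into candidate communities, can be sketched in a few lines of standard-library Python. Real SNA toolkits use richer methods (modularity-based community detection, for instance); here plain connected components stand in for that step, and the edge list of "retweet ties" is invented for illustration:

```python
from collections import defaultdict

def communities(edges):
    """Group accounts into connected components -- a crude stand-in for
    the clustering step SNA uses to infer publics from observed ties."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first traversal collects everyone reachable from this node.
        stack, group = [node], set()
        while stack:
            current = stack.pop()
            if current in group:
                continue
            group.add(current)
            stack.extend(graph[current] - group)
        seen |= group
        groups.append(group)
    return groups

# Invented retweet ties between accounts.
edges = [("a", "b"), ("b", "c"), ("d", "e")]
print(communities(edges))   # two clusters: {a, b, c} and {d, e}
```

The sketch also illustrates Kelty's caveat above: accounts with no ties at all never enter the graph, so whatever "public" the analysis recovers excludes them by construction.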

Data publics are more than just aggregates of individual data points or connections. They are inherently unstable, dynamic (despite static analysis and visualisations), or vibrant, and ephemeral. We emphasise three key elements of active data publics. First, to be more than an aggregate of individual items, a data public needs to be consequential (in Dewey’s sense of issues or problem-oriented). Second, sufficient connection is visible over time. Third, affective or emotional activity is apparent in relation to events that lend coherence to the public and its prevailing sentiment. To these, we add critical attention to the affordising processes – or the deliberate and incidental effects of datafication and analysis, in the capacities for data collection and processing in order to produce particular analytical outcomes, and the data literacies these require. We return to the latter after elaborating on the Save the Palace case….(More)”.

Countries Can Learn from France’s Plan for Public Interest Data and AI


Nick Wallace at the Center for Data Innovation: “French President Emmanuel Macron recently endorsed a national AI strategy that includes plans for the French state to make public and private sector datasets available for reuse by others in applications of artificial intelligence (AI) that serve the public interest, such as for healthcare or environmental protection. Although this strategy fails to set out how the French government should promote widespread use of AI throughout the economy, it will nevertheless give a boost to AI in some areas, particularly public services. Furthermore, the plan for promoting the wider reuse of datasets, particularly in areas where the government already calls most of the shots, is a practical idea that other countries should consider as they develop their own comprehensive AI strategies.

The French strategy, drafted by mathematician and Member of Parliament Cédric Villani, calls for legislation to mandate repurposing both public and private sector data, including personal data, to enable public-interest uses of AI by government or others, depending on the sensitivity of the data. For example, public health services could use data generated by Internet of Things (IoT) devices to help doctors better treat and diagnose patients. Researchers could use data captured by motorway CCTV to train driverless cars. Energy distributors could manage peaks and troughs in demand using data from smart meters.

Repurposed data held by private companies could be made publicly available, shared with other companies, or processed securely by the public sector, depending on the extent to which sharing the data presents privacy risks or undermines competition. The report suggests that the government would not require companies to share data publicly when doing so would impact legitimate business interests, nor would it require that any personal data be made public. Instead, Dr. Villani argues that, if wider data sharing would do unreasonable damage to a company’s commercial interests, it may be appropriate to only give public authorities access to the data. But where the stakes are lower, companies could be required to share the data more widely, to maximize reuse. Villani rightly argues that it is virtually impossible to come up with generalizable rules for how data should be shared that would work across all sectors. Instead, he argues for a sector-specific approach to determining how and when data should be shared.

After making the case for state-mandated repurposing of data, the report goes on to highlight four key sectors as priorities: health, transport, the environment, and defense. Since these all have clear implications for the public interest, France can create national laws authorizing extensive repurposing of personal data without violating the General Data Protection Regulation (GDPR), which allows national laws that permit the repurposing of personal data where it serves the public interest. The French strategy is the first clear effort by an EU member state to proactively use this clause in aid of national efforts to bolster AI….(More)”.

Buzzwords and tortuous impact studies won’t fix a broken aid system


The Guardian: “Fifteen leading economists, including three Nobel winners, argue that the many billions of dollars spent on aid can do little to alleviate poverty while we fail to tackle its root causes….Donors increasingly want to see more impact for their money, practitioners are searching for ways to make their projects more effective, and politicians want more financial accountability behind aid budgets. One popular option has been to audit projects for results. The argument is that assessing “aid effectiveness” – a buzzword now ubiquitous in the UK’s Department for International Development – will help decide what to focus on.

Some go so far as to insist that development interventions should be subjected to the same kind of randomised controlled trials used in medicine, with “treatment” groups assessed against control groups. Such trials are being rolled out to evaluate the impact of a wide variety of projects – everything from water purification tablets to microcredit schemes, financial literacy classes to teachers’ performance bonuses.

Economist Esther Duflo at MIT’s Poverty Action Lab recently argued in Le Monde that France should adopt clinical trials as a guiding principle for its aid budget, which has grown significantly under the Macron administration.

But truly random sampling with blinded subjects is almost impossible in human communities without creating scenarios so abstract as to tell us little about the real world. And trials are expensive to carry out, and fraught with ethical challenges – especially when it comes to health-related interventions. (Who gets the treatment and who doesn’t?)

But the real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change….(More)”.

Technology, Activism, and Social Justice in a Digital Age


Book edited by John G. McNutt: “…offers a close look at both the present nature and future prospects for social change. In particular, the text explores the cutting edge of technology and social change, while discussing developments in social media, civic technology, and leaderless organizations — as well as more traditional approaches to social change.

It effectively assembles a rich variety of perspectives on the issue of technology and social change; the featured authors are academics and practitioners (representing both new voices and experienced researchers) who share a common devotion to a future that is just, fair, and supportive of human potential.

They come from the fields of social work, public administration, journalism, law, philanthropy, urban affairs, planning, and education, and their work builds upon 30-plus years of research. The authors’ efforts to examine the changing nature of social change organizations and the issues they face will help readers reflect upon modern advocacy, social change, and the potential to utilize technology in making a difference….(More)”

How Charities Are Using Artificial Intelligence to Boost Impact


Nicole Wallace at the Chronicle of Philanthropy: “The chaos and confusion of conflict often separate family members fleeing for safety. The nonprofit Refunite uses advanced technology to help loved ones reconnect, sometimes across continents and after years of separation.

Refugees register with the service by providing basic information — their name, age, birthplace, clan and subclan, and so forth — along with similar facts about the people they’re trying to find. Powerful algorithms search for possible matches among the more than 1.1 million individuals in the Refunite system. The analytics are further refined using the more than 2,000 searches that the refugees themselves do daily.
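Refunite has not published its matching pipeline, but the registration fields above hint at why exact lookups fail: the same person may be registered under different transliterations of the same name. As a loose illustration of the fuzzy matching such a system needs, here is a minimal sketch using only Python's standard library; the names, registry fields, and 0.75 threshold are all invented for the example:

```python
from difflib import SequenceMatcher

def name_score(query, candidate):
    """Similarity in [0, 1]; transliterated names rarely match exactly."""
    return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

def top_matches(query, registry, threshold=0.75):
    """Rank registry entries by name similarity, best first."""
    scored = [(name_score(query, entry["name"]), entry) for entry in registry]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for score, entry in scored if score >= threshold]

# Invented records: three registrations, two of which are likely
# the same person spelled differently.
registry = [
    {"name": "Mohamed Abdi", "birthplace": "Kismayo"},
    {"name": "Mohammed Abdi", "birthplace": "Kismaayo"},
    {"name": "Amina Hassan", "birthplace": "Baidoa"},
]

for entry in top_matches("Muhammad Abdi", registry):
    print(entry["name"])   # prints the two Abdi variants, not "Amina Hassan"
```

A production system would weight the other registration fields too (birthplace, clan, subclan), which is exactly the kind of signal the 2,000 daily user searches could help calibrate.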

The goal: find loved ones or those connected to them who might help in the hunt. Since Refunite introduced the first version of the system in 2010, it has helped more than 40,000 people reconnect.

One factor complicating the work: Cultures define family lineage differently. Refunite co-founder Christopher Mikkelsen confronted this problem when he asked a boy in a refugee camp if he knew where his mother was. “He asked me, ‘Well, what mother do you mean?’ ” Mikkelsen remembers. “And I went, ‘Uh-huh, this is going to be challenging.’ ”

Fortunately, artificial intelligence is well suited to learn and recognize different family patterns. But the technology struggles with some simple things like distinguishing the image of a chicken from that of a car. Mikkelsen believes refugees in camps could offset this weakness by tagging photographs — “car” or “not car” — to help train algorithms. Such work could earn them badly needed cash: The group hopes to set up a system that pays refugees for doing such work.

“To an American, earning $4 a day just isn’t viable as a living,” Mikkelsen says. “But to the global poor, getting an access point to earning this is revolutionizing.”

Another group, Wild Me, a nonprofit created by scientists and technologists, has created an open-source software platform that combines artificial intelligence and image recognition to identify and track individual animals. Using the system, scientists can better estimate the number of endangered animals and follow them over large expanses without using invasive techniques….

To fight sex trafficking, police officers often go undercover and interact with people trying to buy sex online. Sadly, demand is high, and there are never enough officers.

Enter Seattle Against Slavery. The nonprofit’s tech-savvy volunteers created chatbots designed to disrupt sex trafficking significantly. Using input from trafficking survivors and law-enforcement agencies, the bots can conduct simultaneous conversations with hundreds of people, engaging them in multiple, drawn-out conversations, and arranging rendezvous that don’t materialize. The group hopes to frustrate buyers so much that they give up their hunt for sex online….

A Philadelphia charity is using machine learning to adapt its services to clients’ needs.

Benefits Data Trust helps people enroll for government-assistance programs like food stamps and Medicaid. Since 2005, the group has helped more than 650,000 people access $7 billion in aid.

The nonprofit has data-sharing agreements with jurisdictions to access more than 40 lists of people who likely qualify for government benefits but do not receive them. The charity contacts those who might be eligible and encourages them to call the Benefits Data Trust for help applying….(More)”.

Small Wars, Big Data: The Information Revolution in Modern Conflict


Book by Eli Berman, Joseph H. Felter & Jacob N. Shapiro: “The way wars are fought has changed starkly over the past sixty years. International military campaigns used to play out between large armies at central fronts. Today’s conflicts find major powers facing rebel insurgencies that deploy elusive methods, from improvised explosives to terrorist attacks. Small Wars, Big Data presents a transformative understanding of these contemporary confrontations and how they should be fought. The authors show that a revolution in the study of conflict—enabled by vast data, rich qualitative evidence, and modern methods—yields new insights into terrorism, civil wars, and foreign interventions. Modern warfare is not about struggles over territory but over people; civilians—and the information they might choose to provide—can turn the tide at critical junctures.

The authors draw practical lessons from the past two decades of conflict in locations ranging from Latin America and the Middle East to Central and Southeast Asia. Building an information-centric understanding of insurgencies, the authors examine the relationships between rebels, the government, and civilians. This approach serves as a springboard for exploring other aspects of modern conflict, including the suppression of rebel activity, the role of mobile communications networks, the links between aid and violence, and why conventional military methods might provide short-term success but undermine lasting peace. Ultimately the authors show how the stronger side can almost always win the villages, but why that does not guarantee winning the war.

Small Wars, Big Data provides groundbreaking perspectives for how small wars can be better strategized and favorably won to the benefit of the local population….(More)”.

Blockchain Ethical Design Framework


Report by Cara LaPointe and Lara Fishbane: “There are dramatic predictions about the potential of blockchain to “revolutionize” everything from worldwide financial markets and the distribution of humanitarian assistance to the very way that we recognize human identity for billions of people around the globe. Some dismiss these claims as excessive technology hype by citing flaws in the technology or the robustness of incumbent solutions and infrastructure.

The reality will likely fall somewhere between these two extremes across multiple sectors. Where initial applications of blockchain were focused on the financial industry, current applications have rapidly expanded to address a wide array of sectors with major implications for social impact.

This paper aims to demonstrate the capacity of blockchain to create scalable social impact and to identify the elements that need to be addressed to mitigate challenges in its application. We are at a moment when technology is enabling society to experiment with new solutions and business models. Ubiquity and global reach, increased capabilities, and affordability have made technology a critical tool for solving problems, making this an exciting time to think about achieving greater social impact. We can address issues for underserved or marginalized people in ways that were previously unimaginable.

Blockchain is a technology that holds real promise for dealing with key inefficiencies and transforming operations in the social sector and for improving lives. Because of its immutability and decentralization, blockchain has the potential to create transparency, provide distributed verification, and build trust across multiple systems. For instance, blockchain applications could provide the means for establishing identities for individuals without identification papers, improving access to finance and banking services for underserved populations, and distributing aid to refugees in a more transparent and efficient manner. Similarly, national and subnational governments are putting land registry information onto blockchains to create greater transparency and avoid corruption and manipulation by third parties.

From increasing access to capital, to tracking health and education data across multiple generations, to improving voter records and voting systems, blockchain has countless potential applications for social impact. As developers take on building these types of solutions, the social effects of blockchain can be powerful and lasting. With the potential for such a powerful impact, the design, application, and approach to the development and implementation of blockchain technologies have long-term implications for society and individuals.

This paper outlines why intentionality of design, which is important with any technology, is particularly crucial with blockchain, and offers a framework to guide policymakers and social impact organizations. As social media, cryptocurrencies, and algorithms have shown, technology is not neutral. Values are embedded in the code. How the problem is defined and by whom, who is building the solution, how it gets programmed and implemented, who has access, and what rules are created have consequences, in intentional and unintentional ways. In the applications and implementation of blockchain, it is critical to understand that seemingly innocuous design choices have resounding ethical implications on people’s lives.

This white paper addresses why intentionality of design matters, identifies the key questions that should be asked, and provides a framework to approach use of blockchain, especially as it relates to social impact. It examines the key attributes of blockchain, its broad applicability as well as its particular potential for social impact, and the challenges in fully realizing that potential. Social impact organizations and policymakers have an obligation to understand the ethical approaches used in designing blockchain technology, especially how they affect marginalized and vulnerable populations….(More)”

Can Smart Cities Be Equitable?


Homi Kharas and Jaana Remes at Project Syndicate: “Around the world, governments are making cities “smarter” by using data and digital technology to build more efficient and livable urban environments. This makes sense: with urban populations growing and infrastructure under strain, smart cities will be better positioned to manage rapid change.

But as digital systems become more pervasive, there is a danger that inequality will deepen unless local governments recognize that tech-driven solutions are as important to the poor as they are to the affluent.

While offline populations can benefit from applications running in the background of daily life – such as intelligent signals that help with traffic flows – they will not have access to the full range of smart-city programs. With smartphones serving as the primary interface in the modern city, closing the digital divide, and extending access to networks and devices, is a critical first step.

City planners can also deploy technology in ways that make cities more inclusive for the poor, the disabled, the elderly, and other vulnerable people. Examples are already abundant.

In New York City, the Mayor’s Public Engagement Unit uses interagency data platforms to coordinate door-to-door outreach to residents in need of assistance. In California’s Santa Clara County, predictive analytics help prioritize shelter space for the homeless. On the London Underground, an app called Wayfindr uses Bluetooth to help visually impaired travelers navigate the Tube’s twisting pathways and escalators.

And in Kolkata, India, a Dublin-based startup called Addressing the Unaddressed has used GPS to provide postal addresses for more than 120,000 slum dwellers in 14 informal communities. The goal is to give residents a legal means of obtaining biometric identification cards, essential documentation needed to access government services and register to vote.

But while these innovations are certainly significant, they are only a fraction of what is possible.

Public health is one area where small investments in technology can bring big benefits to marginalized groups. In the developing world, preventable illnesses comprise a disproportionate share of the disease burden. When data are used to identify demographic groups with elevated risk profiles, low-cost mobile-messaging campaigns can transmit vital prevention information. So-called “m-health” interventions on issues like vaccinations, safe sex, and pre- and post-natal care have been shown to improve health outcomes and lower health-care costs.

Another area ripe for innovation is the development of technologies that directly aid the elderly….(More)”.

Can crowdsourcing scale fact-checking up, up, up? Probably not, and here’s why


Mevan Babakar at NiemanLab: “We foolishly thought that harnessing the crowd was going to require fewer human resources, when in fact it required, at least at the micro level, more.”….There’s no end to the need for fact-checking, but fact-checking teams are usually small and struggle to keep up with the demand. In recent months, organizations like WikiTribune have suggested crowdsourcing as an attractive, low-cost way that fact-checking could scale.

As the head of automated fact-checking at the U.K.’s independent fact-checking organization Full Fact, I’ve had a lot of time to think about these suggestions, and I don’t believe that crowdsourcing can solve the fact-checking bottleneck. It might even make it worse. But — as two notable attempts, TruthSquad and FactCheckEU, have shown — even if crowdsourcing can’t help scale the core business of fact-checking, it could help streamline activities that take place around it.

Think of crowdsourced fact-checking as including three components: speed (how quickly the task can be done), complexity (how difficult the task is to perform; how much oversight it needs), and coverage (the number of topics or areas that can be covered). You can optimize for (at most) two of these at a time; the third has to be sacrificed.

High-profile examples of crowdsourcing like Wikipedia, Quora, and Stack Overflow harness and gather collective knowledge, and have proven that large crowds can be used in meaningful ways for complex tasks across many topics. But the tradeoff is speed.

Projects like Gender Balance (which asks users to identify the gender of politicians) and Democracy Club Candidates (which crowdsources information about election candidates) have shown that small crowds can have a big effect when it comes to simple tasks, done quickly. But the tradeoff is broad coverage.

At Full Fact, during the 2015 U.K. general election, we had 120 volunteers aid our media monitoring operation. They looked through the entire media output every day and extracted the claims being made. The tradeoff here was that the task wasn’t very complex (it didn’t need oversight, and we only had to do a few spot checks).

But we do have two examples of projects that have operated at both high levels of complexity, within short timeframes, and across broad areas: TruthSquad and FactCheckEU….(More)”.