Following Fenno: Learning from Senate Candidates in the Age of Social Media and Party Polarization


David C.W. Parker at The Forum: “Nearly 40 years ago, Richard Fenno published Home Style, a seminal volume explaining how members of Congress think about and engage in the process of representation. To accomplish his task, he observed members of Congress as they crafted and communicated their representational styles to the folks back home in their districts. The book, and Fenno’s ensuing research agenda, served as a clarion call to move beyond sophisticated quantitative analyses of roll call voting and elite interviews in Washington, D.C. to comprehend congressional representation. Instead, Fenno argued, political scientists are better served by going home with members of Congress where “their perceptions of their constituencies are shaped, sharpened, or altered” (Fenno 1978, p. xiii). These perceptions of constituencies fundamentally shape what members of Congress do at home and in Washington. If members of Congress are single-minded seekers of reelection, as we often assume, then political scientists must begin with the constituent relationship essential to winning reelection. Go home, Fenno says, to understand Congress.

There are many ways constituency relationships can be understood and uncovered; the preferred method for Fenno is participant observation, which he variously terms “soaking and poking” or “just hanging around.” Although it sounds easy enough to sit and watch, good participant observation requires navigating many considerations (as Fenno details in a thorough appendix to Home Style). In this appendix, and in another series of essays, Fenno grapples forthrightly with the tough choices researchers must confront when watching and learning from politicians.

In this essay, I respond to Fenno’s thought-provoking methodological treatise in Home Style and the ensuing collection of musings he published as Watching Politicians: Essays on Participant Observation. I do so for three reasons: First, I wish to reinforce Fenno’s call to action. As the study of political science has matured, it has moved away from engaging with politicians in the field across the various sub-fields, favoring statistical analyses. “Everyone cites Fenno, but no one does Fenno,” I recently opined, echoing another scholar commenting on Fenno’s work (Fenno 2013, p. 2; Parker 2015, p. 246). Unfortunately, that sentiment is supported by data (Grimmer 2013, pp. 13–19; Curry 2017). Although quantitative and formal analyses have led to important insights into the study of political behavior and institutions, politics is as important to our discipline as science. And in politics, the motives and concerns of people are important to witness, not just because they add complexity and richness to our stories, but because they aid in theory generation. Fenno’s study was exploratory, but it is full of key theoretical insights relevant to explaining how members of Congress understand their constituencies and the ensuing political choices they make.

Second, to “do” participant observation requires understanding the choices the methodology imposes. This necessitates that those who practice this method of discovery document and share their experiences (Lin 2000). The more the prospective participant observer can understand the size of the choice set she faces and the potential consequences at each decision point in advance, the better her odds of avoiding unanticipated consequences with both immediate and long-term research ramifications. I hope that adding my cumulative experiences to this ongoing methodological conversation will assist in minimizing both unexpected and undesirable consequences for those who follow me into the field. Fenno is open about his own choices, and the difficult decisions he faced as a participant observer. Encouraging scholars to engage in participant observation is only half the battle. The other half is to encourage interested scholars to think about those same choices and methodological considerations, while acknowledging that context precludes a one-size-fits-all approach. Fenno’s choices may not be your choices – and that might be just fine depending upon your circumstances. Fenno would wholeheartedly agree.

Finally, Congress and American politics have changed considerably from when Fenno embarked on his research in Home Style. At the end of his introduction, Fenno writes that “this book is about the early to mid-1970s only. These years were characterized by the steady decline of strong national party attachments and strong local party organizations. … Had these conditions been different, House members might have behaved differently in their constituencies” (xv). Developments since Fenno put down his pen include political parties polarizing to an almost unprecedented degree, partisan attachments strengthening among voters, and technology emerging to change fundamentally how politicians engage with constituents. In light of this evolution of political culture in Washington and at home, it is worth considering the consequences for the participant-observation research approach. Many have asked me if it is still possible to do such work in the current political environment, and if so, what are the challenges facing political scientists going into the field? This essay provides some answers.

I proceed as follows: First, I briefly discuss my own foray into the world of participant observation, which occurred during the 2012 Senate race in Montana. Second, I consider two important methodological considerations raised by Fenno: access and participation as an observer. Third, I relate these two issues to a final consideration: the development of social media and its consequences for the participant observation enterprise. Finally, I show the perils of social science divorced from context, as demonstrated by the recent Stanford-Dartmouth mailer scandal. I conclude with not just a plea for us to pick up where Fenno has left off, but by suggesting that more thinking like a participant observer would benefit the discipline as a whole by reminding us of our ethical obligations as researchers to each other, and to the political community that we study…(More)”.

Origin Privacy: Protecting Privacy in the Big-Data Era


Paper by Helen Nissenbaum, Sebastian Benthall, Anupam Datta, Michael Carl Tschantz, and Piotr Mardziel: “Machine learning over big data poses challenges for our conceptualization of privacy. Such techniques can discover surprising and counterintuitive associations that take innocent-looking data and turn it into important inferences about a person. For example, buying carbon monoxide monitors has been linked to paying credit card bills, while buying chrome-skull car accessories predicts not doing so. Also, Target may have used the buying of scent-free hand lotion and vitamins as a sign that the buyer is pregnant. If we take pregnancy status to be private and assume that we should prohibit the sharing of information that can reveal that fact, then we have created an unworkable notion of privacy, one in which sharing any scrap of data may violate privacy.

Prior technical specifications of privacy depend on the classification of certain types of information as private or sensitive; privacy policies in these frameworks limit access to data that allow inference of this sensitive information. As the above examples show, today’s data-rich world creates a new kind of problem: it is difficult if not impossible to guarantee that information does not allow inference of sensitive topics. This makes information flow rules based on information topic unstable.
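
To see why topic-based rules are unstable, consider an illustrative sketch (synthetic data; the purchase features echo the examples above and are assumptions, not anything from the paper) in which a classifier recovers a sensitive label from innocent-looking inputs:

```python
# Illustrative sketch: a classifier infers a "sensitive" label from
# innocent-looking purchase features, so restricting flows by topic
# alone cannot prevent the disclosure. Data and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Two innocent-looking binary features:
# bought scent-free lotion? bought vitamins?
X = rng.integers(0, 2, size=(n, 2))
# Simulate a hidden correlation: both purchases together usually
# co-occur with the sensitive status we are trying to protect.
y = ((X.sum(axis=1) == 2) & (rng.random(n) < 0.9)).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[1, 1]])[0, 1])  # high inferred probability
print(clf.predict_proba([[0, 0]])[0, 1])  # near-zero inferred probability
```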

We address the problem of providing a workable definition of private data that takes into account emerging threats to privacy from large-scale data collection systems. We build on Contextual Integrity and its claim that privacy is appropriate information flow, or flow according to socially or legally specified rules.

As in other adaptations of Contextual Integrity (CI) to computer science, the parameterization of social norms in CI is translated into a logical specification. In this work, we depart from CI by considering rules that restrict information flow based on its origin and provenance, instead of on its type, topic, or subject.

We call this concept of privacy as adherence to origin-based rules Origin Privacy. Origin Privacy rules can be found in some existing data protection laws. This motivates the computational implementation of origin-based rules for the simple purpose of compliance engineering. We also formally model origin privacy to determine what security properties it guarantees relative to the concerns that motivate it….(More)”.
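
As a minimal sketch of what an origin-based flow rule could look like in code (the provenance model, rule set, and names below are illustrative assumptions, not the authors' formal specification):

```python
# A toy model of Origin Privacy: flow rules keyed on where data came
# from (its provenance), not on what topic it is about.
from dataclasses import dataclass

@dataclass(frozen=True)
class Datum:
    value: str
    origins: frozenset  # provenance: the sources this datum was derived from

def derive(value: str, *inputs: Datum) -> Datum:
    # An inference inherits the origins of everything it was computed from,
    # so provenance survives transformation in a way topic labels do not.
    return Datum(value, frozenset().union(*(d.origins for d in inputs)))

# Rules keyed on origin rather than topic: anything derived from
# purchase records may not flow to an advertiser, whatever it "says".
BLOCKED = {("purchase_records", "advertiser")}

def may_flow(datum: Datum, recipient: str) -> bool:
    return all((origin, recipient) not in BLOCKED for origin in datum.origins)

purchases = Datum("scent-free lotion, vitamins", frozenset({"purchase_records"}))
inference = derive("likely pregnant", purchases)
print(may_flow(inference, "advertiser"))  # False: origin travels with the inference
print(may_flow(inference, "physician"))   # True under this particular rule set
```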

Online Bettors Can Sniff Out Weak Psychology Studies


Ed Yong at the Atlantic: “Psychologists are in the midst of an ongoing, difficult reckoning. Many believe that their field is experiencing a “reproducibility crisis,” because they’ve tried and failed to repeat experiments done by their peers. Even classic results—the stuff of textbooks and TED talks—have proven surprisingly hard to replicate, perhaps because they’re the results of poor methods and statistical tomfoolery. These problems have spawned a community of researchers dedicated to improving the practices of their field and forging a more reliable way of doing science.

These attempts at reform have met resistance. Critics have argued that the so-called crisis is nothing of the sort, and that researchers who have failed to repeat past experiments were variously incompetent, prejudiced, or acting in bad faith.

But if those critiques are correct, then why is it that scientists seem to be remarkably good at predicting which studies in psychology and other social sciences will replicate, and which will not?

Consider the new results from the Social Sciences Replication Project, in which 24 researchers attempted to replicate social-science studies published between 2010 and 2015 in Nature and Science—the world’s top two scientific journals. The replicators ran much bigger versions of the original studies, recruiting around five times as many volunteers as before. They did all their work in the open, and ran their plans past the teams behind the original experiments. And ultimately, they could only reproduce the results of 13 out of 21 studies—62 percent.

As it turned out, that finding was entirely predictable. While the SSRP team was doing their experimental re-runs, they also ran a “prediction market”—a stock exchange in which volunteers could buy or sell “shares” in the 21 studies, based on how reproducible they seemed. They recruited 206 volunteers—a mix of psychologists and economists, students and professors, none of whom were involved in the SSRP itself. Each started with $100 and could earn more by correctly betting on studies that eventually panned out.

At the start of the market, shares for every study cost $0.50 each. As trading continued, those prices soared and dipped depending on the traders’ activities. And after two weeks, the final price reflected the traders’ collective view on the odds that each study would successfully replicate. So, for example, a stock price of $0.87 would mean a study had an 87 percent chance of replicating. Overall, the traders thought that studies in the market would replicate 63 percent of the time—a figure that was uncannily close to the actual 62-percent success rate.

The traders’ instincts were also unfailingly sound when it came to individual studies. Look at the graph below. The market assigned higher odds of success for the 13 studies that were successfully replicated than the eight that weren’t—compare the blue diamonds to the yellow diamonds….(More)”.
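
The pricing logic is simple enough to sketch in a few lines; the per-study prices below are made up for illustration, assuming the $1-payout convention implied by the $0.87 example above:

```python
# Hypothetical final share prices; the SSRP's actual per-study prices
# are not reproduced in the excerpt above. With a $1 payout per share
# if a study replicates, a final price of $0.87 implies an 87% chance.
prices = {"study_A": 0.87, "study_B": 0.23, "study_C": 0.61}
replicated = {"study_A": True, "study_B": False, "study_C": True}

for study, price in prices.items():
    print(f"{study}: implied {price:.0%} chance of replicating; "
          f"actually replicated: {replicated[study]}")

# The market's aggregate forecast is the average final price, which the
# SSRP compared to the realized replication rate (63% vs. 62% overall).
market_forecast = sum(prices.values()) / len(prices)
actual_rate = sum(replicated.values()) / len(replicated)
print(f"market forecast: {market_forecast:.0%}; actual rate: {actual_rate:.0%}")
```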

Why Proven Solutions Struggle to Scale Up


Kriss Deiglmeier & Amanda Greco at Stanford Social Innovation Review: “…As we applied the innovation continuum to the cases we studied, we identified barriers to scale that often trap social innovations in a stagnation chasm before they achieve diffusion and scaling.


Three barriers in particular repeatedly block social innovations from reaching their broadest impact: inadequate funds for growth, the fragmented nature of the social innovation ecosystem, and talent gaps. If we are serious about propelling proven social innovations to widespread impact, we must find solutions that overcome each of these barriers. The rest of the article explores each of the three barriers in turn.

1. Inadequate Funding

Social innovators face a convoluted and often elusive path to mobilize the resources needed to amplify the impact of their work. Of the strategies for scale in Mulgan’s typology, some are very capital intensive, others less so. Yet even the advocacy and network approaches to scaling social impact require resources. It takes time, funding, and expertise to navigate the relationships and complex interdependencies that are critical to success. Some ventures may benefit from earned revenue streams that provide funds for growth, but earned revenue isn’t guaranteed in the social innovation space, especially for innovations that operate where markets fail to meet needs and serve people with no ability to pay. Thus, external funding is usually needed in order to scale impact, whether from donors or from investors depending on the legal structure and financial prospects of the venture….

2. A Fragmented Ecosystem

One sector toiling in isolation or digging into an adversarial approach cannot achieve breakthrough scale on its own. Instead, engaging and coordinating actions across various actors from the private, nonprofit, and public sectors is critical. In the case of microfinance, for example, the innovation garnered interest from government and business when nonprofits like Grameen Bank had demonstrated success in providing financial services to formerly unbanked people.

Following the pioneering role of nonprofits to establish proof of concept, commercial banks entered the market, with mixed social outcomes, given the pressure they faced for profitability. As the microfinance industry matured, governments created a legal and regulatory environment that encouraged transparency, market entry, and competition. The cumulative efforts and engagement across the nonprofit, private, and public sectors were critical to scaling microfinance as we know it today and will continue to refine the approach for better social outcomes in the future…

3. The Talent Gap

To drive social innovations in a world of rapid change, organizations need talented leaders supported by effective teams. The insufficient funding and fragmented ecosystem require highly adept people to shepherd social innovations through the long journey to widespread social impact. Unfortunately, attracting and retaining people to navigate these complexities is a challenge…(More)”.

Winners Take All


Book by Anand Giridharadas: “… takes us into the inner sanctums of a new gilded age, where the rich and powerful fight for equality and justice any way they can–except ways that threaten the social order and their position atop it. We see how they rebrand themselves as saviors of the poor; how they lavishly reward “thought leaders” who redefine “change” in winner-friendly ways; and how they constantly seek to do more good, but never less harm. We hear the limousine confessions of a celebrated foundation boss; witness an American president hem and haw about his plutocratic benefactors; and attend a cruise-ship conference where entrepreneurs celebrate their own self-interested magnanimity.

Giridharadas asks hard questions: Why, for example, should our gravest problems be solved by the unelected upper crust instead of the public institutions it erodes by lobbying and dodging taxes? He also points toward an answer: Rather than rely on scraps from the winners, we must take on the grueling democratic work of building more robust, egalitarian institutions and truly changing the world. A call to action for elites and everyday citizens alike….(More)”.

Better ways to measure the new economy


Valerie Hellinghausen and Evan Absher at Kauffman Foundation: “The old measure of “jobs numbers” as an economic indicator is giving way to new metrics that measure a new economy.

With more communities embracing inclusive entrepreneurial ecosystems as the new model of economic development, entrepreneurs, ecosystem builders, and government agencies – at all levels – need to work together on data-driven initiatives. While established measures still have a place, new metrics have the potential to deliver the timely and granular information that is more useful at the local level….

Three better ways to measure the new economy:

  1. National and local datasets: The numbers used to discuss the economy are national-level and usually not very timely. These numbers are useful for understanding large trends, but fail to capture local realities. One way to better measure local economies is to use local administrative datasets. There are many obstacles to this approach, but the idea is gaining interest. Data infrastructure, policies, and projects are building connections between local and national agencies. Joining different levels of government data will provide national scale and local specificity.
  2. Private and public data: The words private and public typically reflect privacy issues, but there is another public-private dimension. Public institutions possess vast amounts of data, but so do private companies. For instance, sites like PayPal, Square, Amazon, and Etsy possess data that could provide real-time assessment of an individual company’s financial health. The concept of credit and risk could be expanded to benefit those currently underserved, if combined with local administrative information like tax, wage, and banking data. Fair and open use of private data could open credit to currently underfunded entrepreneurs.
  3. New metrics: Developing connections between different datasets will result in new metrics of entrepreneurial activity: metrics that measure human connection, social capital, community creativity, and quality of life; metrics that capture economic activity at the community level and in real time. For example, the Kauffman Foundation has funded research that uses labor data from private job-listing sites to better understand the match between the workforce entrepreneurs need and the workforce available within the immediate community. But new metrics are not enough; they must connect to the final goal of economic independence. Using new metrics to help ecosystems understand how policies and programs impact entrepreneurship is the final step to measuring local economies….(More)”.

The effects of ICT use and ICT Laws on corruption: A general deterrence theory perspective


Anol Bhattacherjee and Utkarsh Shrivastava in Government Information Quarterly: “Investigations of white-collar crimes such as corruption are often hindered by the lack of information or physical evidence. Information and communication technologies (ICT), by virtue of their ability to monitor, track, record, analyze, and share vast amounts of information, may help countries identify and prosecute criminals, and deter future corruption. While prior studies have demonstrated that ICT is an important tool in reducing corruption at the country level, they provide little explanation as to how ICT influences corruption and when it works best.

We explore these gaps in the literature using a hypothetico-deductive approach: we use general deterrence theory to postulate a series of main and moderating effects relating ICT use and corruption, and then test those effects using secondary data analysis. Our analysis suggests that ICT use influences corruption by increasing the certainty and celerity of punishment related to corruption. Moreover, ICT laws moderate the effect of ICT use on corruption, suggesting that ICT investments may have limited effect on corruption unless complemented with appropriate ICT laws. Implications of our findings for research and practice are discussed….(More)”.
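
As an illustration of how a moderating (interaction) effect of this kind is commonly tested, here is a minimal sketch on simulated data; the variable names and data-generating assumptions are hypothetical, not the authors' dataset or model:

```python
# A minimal sketch of testing a moderating effect: does the effect of
# ICT use on corruption depend on the strength of ICT laws? The data
# and variable names are hypothetical, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 150  # hypothetical country-level observations
df = pd.DataFrame({
    "ict_use": rng.uniform(0, 1, n),   # e.g., an ICT adoption index
    "ict_laws": rng.uniform(0, 1, n),  # e.g., strength of ICT legislation
})
# Simulate corruption falling with ICT use, and falling more steeply
# where ICT laws are strong (the hypothesized moderation).
df["corruption"] = (0.8 - 0.2 * df["ict_use"]
                    - 0.5 * df["ict_use"] * df["ict_laws"]
                    + rng.normal(0, 0.1, n))

# 'ict_use * ict_laws' expands to both main effects plus their product;
# a significant negative coefficient on the product term is the
# moderation effect described in the abstract.
model = smf.ols("corruption ~ ict_use * ict_laws", data=df).fit()
print(model.summary().tables[1])
```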

Behavioural science and policy: where are we now and where are we going?


Michael Sanders et al. in Behavioral Public Policy: “The use of behavioural sciences in government has expanded and matured in the last decade. Since the Behavioural Insights Team (BIT) has been part of this movement, we sketch out the history of the team and the current state of behavioural public policy, recognising that other works have already told this story in detail. We then set out two clusters of issues that have emerged from our work at BIT. The first cluster concerns current challenges facing behavioural public policy: the long-term effects of interventions; repeated exposure effects; problems with proxy measures; spillovers and general equilibrium effects and unintended consequences; cultural variation; ‘reverse impact’; and the replication crisis. The second cluster concerns opportunities: influencing the behaviour of government itself; scaling interventions; social diffusion; nudging organisations; and dealing with thorny problems. We conclude that the field will need to address these challenges and take these opportunities in order to realise the full potential of behavioural public policy….(More)”.

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable


Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives, and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel, among others, pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.

The Risks of Dangerous Dashboards in Basic Education


Lant Pritchett at the Center for Global Development: “On June 1, 2009 Air France flight 447 from Rio de Janeiro to Paris crashed into the Atlantic Ocean killing all 228 people on board. While the Airbus 330 was flying on auto-pilot, the speed readings received by the on-board navigation computers started to conflict, almost certainly because the pitot tubes responsible for measuring air speed had iced over. Since the auto-pilot could not resolve the conflicting signals and hence did not know how fast the plane was actually going, it turned control of the plane over to the two first officers (the captain was out of the cockpit). Subsequent flight simulator trials replicating the conditions of the flight conclude that, had the pilots done nothing at all, everyone would have lived—nothing was actually wrong; only the indicators were faulty, not the actual speed. But, tragically, the pilots didn’t do nothing….

What is the connection to education?

Many countries’ systems of basic education are in “stall” condition.

A recent paper by Beatty et al. (2018) uses information from the Indonesia Family Life Survey, a representative household survey that has been carried out in several waves with the same individuals since 2000 and contains information on whether individuals can answer simple arithmetic questions. Figure 1, showing the relationship between the level of schooling and the probability of answering a typical question correctly, has two shocking results.

First, the likelihood that a person can answer a simple mathematics question correctly differs by only 20 percentage points between individuals who have completed less than primary school (<PS)—who can answer correctly (adjusted for guessing) about 20 percent of the time—and those who have completed senior secondary school or more (>=SSS), who answer correctly only about 40 percent of the time. These are simple multiple-choice questions, like whether 56/84 is the same fraction as (can be reduced to) 2/3, and whether 1/3-1/6 equals 1/6. This means that in an entire year of schooling, fewer than 2 additional children per 100 gain the ability to answer simple arithmetic questions.

Second, this incredibly poor performance in 2000 got worse by 2014. …

What has this got to do with education dashboards? The way large bureaucracies prefer to work is to specify process compliance and inputs and then measure those as a means of driving performance. This logistical mode of managing an organization works best when both process compliance and inputs are easily “observable” in the economist’s sense of easily verifiable, contractible, adjudicated. This leads to attention to processes and inputs that are “thin” in the Clifford Geertz sense (adopted by James Scott as his primary definition of how a “high modern” bureaucracy, and hence the state, “sees” the world). So in education one would specify easily observable inputs like textbook availability, class size, and school infrastructure. Even if one were talking about “quality” of schooling, a large bureaucracy would want this, too, reduced to “thin” indicators, like the fraction of teachers with a given type of formal degree, or process compliance measures, like whether teachers were hired based on some formal assessment.

Those involved in schooling can then become obsessed with their dashboards and the “thin” progress that is being tracked and easily ignore the loud warning signals saying: Stall!…(More)”.