Duncan Watts in The Bridge: “The past 15 years have witnessed a remarkable increase in both the scale and scope of social and behavioral data available to researchers. Over the same period, and driven by the same explosion in data, the study of social phenomena has increasingly become the province of computer scientists, physicists, and other “hard” scientists. Papers on social networks and related topics appear routinely in top science journals and computer science conferences; network science research centers and institutes are sprouting up at top universities; and funding agencies from DARPA to NSF have moved quickly to embrace what is being called computational social science.
Against these exciting developments stands a stubborn fact: in spite of many thousands of published papers, there’s been surprisingly little progress on the “big” questions that motivated the field of computational social science—questions concerning systemic risk in financial systems, problem solving in complex organizations, and the dynamics of epidemics or social movements, among others.
Of the many reasons for this state of affairs, I concentrate here on three. First, social science problems are almost always more difficult than they seem. Second, the data required to address many problems of interest to social scientists remain difficult to assemble. And third, thorough exploration of complex social problems often requires the complementary application of multiple research traditions—statistical modeling and simulation, social and economic theory, lab experiments, surveys, ethnographic fieldwork, historical or archival research, and practical experience—many of which will be unfamiliar to any one researcher. In addition to explaining the particulars of these challenges, I sketch out some ideas for addressing them….”
New Journal Helps Behavioral Scientists Find Their Way to Washington
The PsychReport: “When it comes to being heard in Washington, classical economists have long gotten their way. Behavioral scientists, on the other hand, haven’t proved so adept at getting their message across.
It isn’t for lack of good ideas. Psychology’s applicability to policy has been gaining momentum in recent years, most visibly in the U.K.’s Behavioral Insights Team, which has helped prove the discipline’s worth to policy makers. The recent (but not-yet-official) announcement that the White House is creating a similar team is another major endorsement of behavioral science’s value.
But when it comes to communicating those ideas to the wider public, psychologists and other behavioral scientists can claim far fewer successes. Part of the problem is PR know-how: writing for a general audience, publicizing good ideas, reaching out to decision makers. Another is incentive: academics need to publish, and publishing often means producing long, dense, jargon-laden articles for peer-reviewed journals read by a rarefied audience of other academics. And then there’s time, or the lack of it.
But a small group of prominent behavioral scientists is working to help other researchers find their way to Washington. The brainchild of UCLA’s Craig Fox and Duke’s Sim Sitkin, Behavioral Science & Policy is a peer-reviewed journal set to launch online this fall and in print early next year, whose mission is to influence policy and practice by promoting high-quality behavioral science research. Articles will be brief and well written, and each will offer straightforward, applicable policy recommendations that serve the public interest.
In bringing behavioral science to the capital, Fox echoed a motivation similar to that of David Halpern of the Behavioral Insights Team.
“What we’re trying to do is create policies that are mindful of how individuals, groups, and organizations behave. How can you create smart policies if you don’t do that?” Fox said. “Because after all, all policies affect individuals, groups, and/or organizations.”
Fox has already assembled an impressive team of scientists from around the country for the journal’s advisory board, including Richard Thaler and Cass Sunstein, authors of Nudge, which helped inspire the creation of the Behavioral Insights Team; The New York Times columnist David Brooks; and Nobel laureate Daniel Kahneman. They have also formed a strong partnership with the prestigious Brookings Institution, which will serve as the journal’s publishing partner and, they plan, will co-host briefings for policy makers in Washington…”
The Parable of Google Flu: Traps in Big Data Analysis
David Lazer: “…big data last winter had its “Dewey beats Truman” moment, when the poster child of big data (at least for behavioral data), Google Flu Trends (GFT), went way off the rails in “nowcasting” the flu, overshooting the peak last winter by 130% (and indeed, it has been systematically overshooting by wide margins for three years). Tomorrow we (Ryan Kennedy, Alessandro Vespignani, and Gary King) have a paper out in Science dissecting why GFT went off the rails, how that could have been prevented, and the broader lessons to be learned regarding big data.
[We have posted The Parable of Google Flu (WP-Final).pdf, the version we submitted before acceptance. We have also posted an SSRN paper evaluating GFT for 2013-14, since it was reworked in the fall.]
Key lessons that I’d highlight:
1) Big data are typically not scientifically calibrated. This goes back to my post last month regarding measurement. This does not make them useless from a scientific point of view, but you do need to build into the analysis the fact that the “measures” of behavior are being affected by unseen factors. In this case, the likely culprit was the Google search algorithm, which was modified in various ways that we believe likely increased flu-related searches.
2) Big data + analytic code used in scientific venues with scientific claims need to be more transparent. This is a tricky issue, because there are both legitimate proprietary interests involved and privacy concerns, but much more can be done in this regard than has been done in the 3 GFT papers. [One of my aspirations over the next year is to work together with big data companies, researchers, and privacy advocates to figure out how this can be done.]
3) It’s about the questions, not the size of the data. In this particular case, one could have estimated the likely flu prevalence today more accurately by ignoring GFT altogether and just projecting 3-week-old CDC data forward to today (better still would have been to combine the two). That is, a synthesis would have been more effective than a pure “big data” approach. I think this is likely the general pattern.
4) More generally, I’d note that there is much more that the academy needs to do. First, the academy needs to build the foundation for collaborations around big data (e.g., secure infrastructures, legal understandings around data sharing, etc.). Second, there needs to be MUCH more work done to build bridges between the computer scientists who work on big data and the social scientists who think about deriving insights about human behavior from data more generally. We have moved perhaps 5% of the way that we need to in this regard.”
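The synthesis Lazer describes in lesson 3 — projecting 3-week-old CDC surveillance data forward and combining it with a GFT-style signal — can be sketched as a toy comparison. All series, the 1.9x overshoot factor, and the 50/50 weighting below are synthetic illustrations, not the paper’s actual data or model.

```python
# Toy nowcasting comparison: an overshooting "big data" signal versus
# a lagged-surveillance projection versus a simple synthesis of both.

def mean_abs_error(est, truth):
    """Average absolute nowcast error across weeks."""
    return sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)

# Synthetic weekly flu prevalence with a winter peak (illustrative units).
truth = [10, 12, 16, 24, 40, 70, 95, 100, 90, 65, 42, 26, 16, 12, 10]

# "GFT-like" signal: tracks the epidemic's shape but systematically
# overshoots, echoing the reported large peak-season errors.
gft = [1.9 * t for t in truth]

# Lagged-CDC projection: carry the 3-week-old surveillance value
# forward to "now" (the first 3 weeks have no lagged value).
lag = 3
cdc_projection = truth[:-lag]   # CDC data available with a 3-week lag
weeks = truth[lag:]             # the weeks being nowcast
gft_now = gft[lag:]

# Simple synthesis: average the two imperfect signals (an illustrative
# weighting, not the paper's estimator).
combined = [0.5 * g + 0.5 * c for g, c in zip(gft_now, cdc_projection)]

print("big-data-only error:  ", mean_abs_error(gft_now, weeks))
print("lagged-projection err:", mean_abs_error(cdc_projection, weeks))
print("synthesis error:      ", mean_abs_error(combined, weeks))
```

With these synthetic numbers, the lagged projection alone edges out the overshooting signal, and the synthesis beats both — the qualitative pattern Lazer argues is likely general.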
Participatory Budgeting Platform
Hollie Gilman: “Stanford’s Social Algorithms Lab (SOAL) has built an interactive Participatory Budgeting Platform that allows users to simulate budgetary decision making on $1 million of public money. The lab brings together economics, computer science, and networking to work on problems and understand the impact of social networks. This project is part of Stanford’s Widescope Project, which aims to enable people to make political decisions about budgets through data-driven social networks.
The Participatory Budgeting simulation features the fourth annual participatory budgeting process in Chicago’s 49th Ward, the first place in the U.S. to implement PB. This year, $1 million of the Alderman’s $1.3 million in capital funds will be allocated through participatory budgeting.
One goal of the platform is to build consensus. The interactive geo-spatial mapping software enables citizens to more intuitively identify projects in a given area. Importantly, the platform forces users to make tough choices and balance competing priorities in real time.
The platform is an interesting example of a collaborative governance prototype that could be transformative in its ability to engage citizens with easily accessible mapping software.”
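The “tough choices” the platform forces — balancing competing priorities within a fixed allocation — boil down to a budget-constrained selection problem. A minimal sketch follows; the project names, costs, vote counts, and the votes-per-dollar rule are all hypothetical illustrations, not the Widescope platform’s actual data or algorithm.

```python
# Minimal sketch of the budget constraint behind a PB platform: pick
# well-supported projects that fit within a $1M allocation.

BUDGET = 1_000_000

# (project, cost in dollars, community votes) -- illustrative only.
projects = [
    ("Street resurfacing", 450_000, 320),
    ("Park playground",    300_000, 410),
    ("Bike lanes",         250_000, 275),
    ("Library upgrades",   200_000, 180),
    ("Community garden",    80_000, 150),
]

# Greedy by votes per dollar -- one simple prioritization rule; on the
# real platform, residents make these trade-offs interactively.
chosen, remaining = [], BUDGET
for name, cost, votes in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
    if cost <= remaining:
        chosen.append(name)
        remaining -= cost

print("funded:", chosen, "| unspent:", remaining)
```

Even this toy rule exhibits the platform’s core tension: the most expensive project may go unfunded not because it lacks support, but because funding it would crowd out several smaller, more cost-effective ones.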
New Research Network to Study and Design Innovative Ways of Solving Public Problems
MacArthur Foundation Research Network on Opening Governance formed to gather evidence and develop new designs for governing
NEW YORK, NY, March 4, 2014 – The Governance Lab (The GovLab) at New York University today announced the formation of a Research Network on Opening Governance, which will seek to develop blueprints for more effective and legitimate democratic institutions to help improve people’s lives.
Convened and organized by the GovLab, the MacArthur Foundation Research Network on Opening Governance is made possible by a three-year grant of $5 million from the John D. and Catherine T. MacArthur Foundation as well as a gift from Google.org, which will allow the Network to tap the latest technological advances to further its work.
Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people. Network members include twelve experts (see below) in computer science, political science, policy informatics, social psychology and philosophy, law, and communications. This core group is supported by an advisory network of academics, technologists, and current and former government officials. Together, they will assess existing innovations in governing and experiment with new practices and how institutions make decisions at the local, national, and international levels.
Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.
The Network’s research will be complemented by theoretical writing and compelling storytelling designed to articulate and demonstrate clearly and concretely how governing agencies might work better than they do today. “We want to arm policymakers and practitioners with evidence of what works and what does not,” says Professor Beth Simone Noveck, Network Chair and author of Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, “which is vital to drive innovation, re-establish legitimacy and more effectively target scarce resources to solve today’s problems.”
“From prize-backed challenges to spur creative thinking to the use of expert networks to get the smartest people focused on a problem no matter where they work, this shift from top-down, closed, and professional government to decentralized, open, and smarter governance may be the major social innovation of the 21st century,” says Noveck. “The MacArthur Research Network on Opening Governance is the ideal crucible for helping transition from closed and centralized to open and collaborative institutions of governance in a way that is scientifically sound and yields new insights to inform future efforts, always with an eye toward real-world impacts.”
MacArthur Foundation President Robert Gallucci added, “Recognizing that we cannot solve today’s challenges with yesterday’s tools, this interdisciplinary group will bring fresh thinking to questions about how our governing institutions operate, and how they can develop better ways to help address seemingly intractable social problems for the common good.”
Members
The MacArthur Research Network on Opening Governance comprises:
Chair: Beth Simone Noveck
Network Coordinator: Andrew Young
Chief of Research: Stefaan Verhulst
Faculty Members:
- Sir Tim Berners-Lee (Massachusetts Institute of Technology (MIT)/University of Southampton, UK)
- Deborah Estrin (Cornell Tech/Weill Cornell Medical College)
- Erik Johnston (Arizona State University)
- Henry Farrell (George Washington University)
- Sheena S. Iyengar (Columbia Business School/Jerome A. Chazen Institute of International Business)
- Karim Lakhani (Harvard Business School)
- Anita McGahan (University of Toronto)
- Cosma Shalizi (Carnegie Mellon/Santa Fe Institute)
Institutional Members:
- Christian Bason and Jesper Christiansen (MindLab, Denmark)
- Geoff Mulgan (National Endowment for Science, Technology and the Arts – NESTA, United Kingdom)
- Lee Rainie (Pew Research Center)
The Network is eager to hear from and engage with the public as it undertakes its work. Please contact Stefaan Verhulst to share your ideas or identify opportunities to collaborate.”
Coordinating the Commons: Diversity & Dynamics in Open Collaborations
Dissertation by Jonathan T. Morgan: “The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia’s history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia editors—the volunteers who contribute most of the encyclopedia’s content and ensure that articles are correct and consistent—has been gradually shrinking since 2007, in part because Wikipedia’s social climate has become increasingly inhospitable for newcomers, female editors, and editors from other underrepresented demographics. Previous studies of change over time in other work contexts, such as corporations, suggest that incremental processes such as bureaucratic formalization can make organizations more rule-bound and less adaptable—in effect, less open—as they grow and age. There has been little research on how open collaborations like Wikipedia change over time, and on the impact of those changes on the social dynamics of the collaborating community and the way community members prioritize and perform work. Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains—such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism.”
True Collective Intelligence? A Sketch of a Possible New Field
Paper by Geoff Mulgan in Philosophy & Technology: “Collective intelligence is much talked about but remains very underdeveloped as a field. There are small pockets in computer science and psychology and fragments in other fields, ranging from economics to biology. New networks and social media also provide a rich source of emerging evidence. However, there are surprisingly few usable theories, and many of the fashionable claims have not stood up to scrutiny. The field of analysis should be how intelligence is organised at large scale—in organisations, cities, nations and networks. The paper sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to the possible intellectual barriers to progress.”
Predicting Individual Behavior with Social Networks
Article by Sharad Goel and Daniel Goldstein (Microsoft Research): “With the availability of social network data, it has become possible to relate the behavior of individuals to that of their acquaintances on a large scale. Although the similarity of connected individuals is well established, it is unclear whether behavioral predictions based on social data are more accurate than those arising from current marketing practices. We employ a communications network of over 100 million people to forecast highly diverse behaviors, from patronizing an off-line department store to responding to advertising to joining a recreational league. Across all domains, we find that social data are informative in identifying individuals who are most likely to undertake various actions, and moreover, such data improve on both demographic and behavioral models. There are, however, limits to the utility of social data. In particular, when rich transactional data were available, social data did little to improve prediction.”
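The paper’s central comparison — do social signals improve prediction over demographics alone? — can be illustrated with a toy simulation. The population, features, and adoption probabilities below are synthetic assumptions chosen so that contacts’ behavior matters most; they are not the study’s 100-million-person network data or its models.

```python
# Toy version of the demographic-vs-social prediction comparison:
# simulate a population where acting is driven partly by a demographic
# attribute and strongly by whether one's contacts acted.
import random

random.seed(0)

people = []
for _ in range(2000):
    young = random.random() < 0.5           # demographic attribute
    contact_acted = random.random() < 0.3   # social-network signal
    # Ground truth: contacts' behavior is the strongest driver here.
    p_act = 0.15 + 0.15 * young + 0.45 * contact_acted
    acted = random.random() < p_act
    people.append((young, contact_acted, acted))

def accuracy(predict):
    """Share of people whose action the rule predicts correctly."""
    return sum(predict(y, c) == a for y, c, a in people) / len(people)

# Demographic-only rule: with these rates, no demographic group is
# more likely than not to act, so the best such rule predicts "no".
demo_acc = accuracy(lambda young, contact: False)

# Social rule: predict action whenever a contact has acted.
social_acc = accuracy(lambda young, contact: contact)

print(f"demographic-only accuracy: {demo_acc:.2f}")
print(f"with social signal:        {social_acc:.2f}")
```

As in the paper’s finding, the social signal identifies likely actors that the demographic attribute alone cannot separate out — though the paper also cautions that this advantage shrinks once rich transactional data are available.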
Trust, Computing, and Society
New book edited by Richard H. R. Harper: “The Internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the Internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers, and designers confronting the problem of trust.
- Brings together authors from a variety of disciplines
- Can be adopted in multiple course areas: computer science, philosophy, sociology, anthropology
- Integrated, multidisciplinary approach to understanding trust as it relates to modern computing”
Table of Contents
1. Introduction and overview Richard Harper
Part I. The Topography of Trust and Computing:
2. The role of trust in cyberspace David Clark
3. The new face of the internet Thomas Karagiannis
4. Trust as a methodological tool in security engineering George Danezis
Part II. Conceptual Points of View:
5. Computing and the search for trust Tom Simpson
6. The worry about trust Olli Lagerspetz
7. The inescapability of trust Bob Anderson and Wes Sharrock
8. Trust in interpersonal interaction and cloud computing Rod Watson
9. Trust, social identity, and computation Charles Ess
Part III. Trust in Design:
10. Design for trusted and trustworthy services M. Angela Sasse and Iacovos Kirlappos
11. Dialogues: trust in design Richard Banks
12. Trusting oneself Richard Harper and William Odom
13. Reflections on trust, computing and society Richard Harper
Bibliography.
The Web at 25 in the U.S.
Paper by Lee Rainie and Susannah Fox from Pew: “The overall verdict: The internet has been a plus for society and an especially good thing for individual users… This report is the first part of a sustained effort through 2014 by the Pew Research Center to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. Berners-Lee wrote a paper on March 12, 1989, proposing an “information management” system that became the conceptual and architectural structure for the Web. He eventually released the code for his system—for free—to the world on Christmas Day in 1990. It became a milestone in easing the way for ordinary people to access documents and interact over a network of computers called the internet—a system that linked computers and that had been around for years. The Web became especially appealing after Web browsers were perfected in the early 1990s to facilitate graphical displays of pages on those linked computers.”