Is Social Media Killing Democracy?


Phil Howard at Culture Digitally: “This is the big year for computational propaganda—using immense data sets to manipulate public opinion over social media.  Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits. 

Platforms like Twitter and Facebook now provide a structure for our political lives.  We’ve always relied on many kinds of sources for our political news and information.  Family, friends, news organizations, charismatic politicians certainly predate the internet.  But whereas those are sources of information, social media now provides the structure for political conversation.  And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends.  …

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”…

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to….

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends.  Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook.  The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook.  When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda.  The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies.  Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work.  And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems.  Opaque software algorithms shape what people find in their news feeds.  We’ve all noticed fake news stories, often called clickbait, and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion.  These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US Presidential campaigns to aggressively advance the case for leaving Europe and the case for electing Trump.  Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention.

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms….(More)”.

The Cost of Cooperating


David Rand: “…If you think about the puzzle of cooperation being “why should I incur a personal cost of time or money or effort in order to do something that’s going to benefit other people and not me?” the general answer is that if you can create future consequences for present behavior, that can create an incentive to cooperate. Cooperation requires me to incur some costs now, but if I’m cooperating with someone who I’ll interact with again, it’s worth it for me to pay the cost of cooperating now in order to get the benefit of them cooperating with me in the future, as long as there’s a large enough likelihood that we’ll interact again.

Even if it’s with someone that I’m not going to interact with again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation, and to attract new interaction partners.

There’s a lot of evidence to show that this works. There are game theory models and computer simulations showing that if you build in these kinds of future consequences, you can get evolution to lead to cooperative agents dominating populations, and learning and strategic reasoning to lead people to cooperate. There are also lots of behavioral experiments supporting this. These are lab experiments where you bring people into the lab, give them money, and have them engage in economic cooperation games where they choose whether to keep the money for themselves or to contribute it to a group project that benefits other people. If you make it so that future consequences exist in any of these various ways, it makes people more inclined to cooperate. Typically, it leads to cooperation paying off, and being the best-performing strategy.
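As a toy illustration of how "future consequences" make cooperation pay, the sketch below pits a conditional cooperator (tit-for-tat) against an unconditional defector in a repeated prisoner's dilemma. The payoff values and continuation probabilities are illustrative choices, not drawn from any specific study mentioned here:

```python
import random

# Standard prisoner's-dilemma payoffs (illustrative values):
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play_match(strat_a, strat_b, continue_prob, rng):
    # After each round the game continues with probability `continue_prob`,
    # so a high value models "a large enough likelihood that we'll
    # interact again". Returns player A's total score.
    hist_a, hist_b, score_a = [], [], 0
    while True:
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
        if rng.random() > continue_prob:
            return score_a

def average_score(strat_a, strat_b, continue_prob, n_matches=2000):
    rng = random.Random(0)
    return sum(play_match(strat_a, strat_b, continue_prob, rng)
               for _ in range(n_matches)) / n_matches

# With frequent repeat interactions, mutual cooperation outperforms exploitation:
coop = average_score(tit_for_tat, tit_for_tat, continue_prob=0.9)
exploit = average_score(always_defect, tit_for_tat, continue_prob=0.9)
print(f"cooperating: {coop:.1f}  defecting: {exploit:.1f}")
```

Lowering `continue_prob` reverses the ranking: when repeat interaction is unlikely, defection pays, which is exactly the dependence on future consequences the passage describes.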

In these situations, it’s not altruistic to be cooperative because the interactions are designed in a way that makes cooperating pay off. For example, we have a paper that shows that in the context of repeated interactions, there’s not any relationship between how altruistic people are and how much they cooperate. Basically, everybody cooperates, even the selfish people. Under certain situations, selfish people can even wind up cooperating more because they’re better at identifying that that’s what is going to pay off.

This general class of solutions to the cooperation problem boils down to creating future consequences, and therefore creating a self-interested motivation in the long run to be cooperative. Strategic cooperation is extremely important; it explains a lot of real-world cooperation. From an institution design perspective, it’s important for people to be thinking about how you set up the rules of interaction—interaction structures and incentive structures—in a way that makes working for the greater good a good strategy.

At the same time that this strategic cooperation is important, it’s also clearly the case that people often cooperate even when there’s not a self-interested motive to do so. That willingness to help strangers (or to not exploit them) is a core piece of well-functioning societies. It makes society much more efficient when you don’t constantly have to be watching your back, afraid that people are going to take advantage of you. If you can generally trust that other people are going to do the right thing and you’re going to do the right thing, it makes life much more socially efficient.

Strategic incentives can motivate people to cooperate, but people also keep cooperating even when there are not incentives to do so, at least to some extent. What motivates people to do that? The way behavioral economists and psychologists talk about that is at a proximate psychological level—saying things like, “Well, it feels good to cooperate with other people. You care about others and that’s why you’re willing to pay costs to help them. You have social preferences.” …

Most people, both in the scientific world and among laypeople, hold the view that we are by default selfish—that we have to use rational deliberation to make ourselves do the right thing. I try to think about this question from a theoretical, first-principles position and ask: what should it be? From the perspective of either evolution or strategic reasoning, which of these two stories makes more sense, and which should we expect to observe?

If you think about it that way, the key question is “where do our intuitive defaults come from?” There’s all this work in behavioral economics and psychology on heuristics and biases which suggests that these intuitions are usually rules of thumb for the behavior that typically works well. It makes sense: If you’re going to have something as your default, what should you choose as your default? You should choose the thing that works well in general. In any particular situation, you might stop and ask, “Does my rule of thumb fit this specific situation?” If not, then you can override it….(More)”

Beyond nudging: it’s time for a second generation of behaviourally-informed social policy


Katherine Curchin at LSE Blog: “…behavioural scientists are calling for a second generation of behaviourally-informed policy. In some policy areas, nudges simply aren’t enough. Behavioural research shows stronger action is required to attack the underlying cause of problems. For example, many scholars have argued that behavioural insights provide a rationale for regulation to protect consumers from manipulation by private sector companies. But what might a second generation of behaviourally-informed social policy look like?

Behavioural insights could provide a justification to change the trajectory of income support policy. Since the 1990s policy attention has focused on the moral character of benefits recipients. Inspired by Lawrence Mead’s paternalist philosophy, governments have tried to increase the resolve of the unemployed to work their way out of poverty. More and more behavioural requirements have been attached to benefits to motivate people to fulfil their obligations to society.

But behavioural research now suggests that these harsh policies are misguided. Behavioural science supports the idea that people often make poor decisions and do things which are not in their long-term interests.  But the weakness of individuals’ moral constitution isn’t so much the problem as the unequal distribution of opportunities in society. There are circumstances in which humans are unlikely to flourish no matter how motivated they are.

Normal human psychological limitations – our limited cognitive capacity, limited attention and limited self-control – interact with their environment to produce the behaviour that advocates of harsh welfare regimes attribute to permissive welfare. In their book Scarcity, Sendhil Mullainathan and Eldar Shafir argue that the experience of deprivation creates a mindset that makes it harder to process information, pay attention, make good decisions, plan for the future, and resist temptations.

Importantly, behavioural scientists have demonstrated that this mindset can be temporarily created in the laboratory by placing subjects in artificial situations which induce the feeling of not having enough. As a consequence, experimental subjects from middle-class backgrounds suddenly display the short-term thinking and irrational decision making often attributed to a culture of poverty.

Tying inadequate income support to a list of behavioural conditions will most punish those who are suffering most. Empirical studies of welfare conditionality have found that benefit claimants often do not comprehend the complicated rules that apply to them. Some are being punished for lack of understanding rather than deliberate non-compliance.

Behavioural insights can be used to mount a case for a more generous, less punitive approach to income support. The starting point is to acknowledge that some of Mead’s psychological assumptions have turned out to be wrong. The nature of the cognitive machinery humans share imposes limits on how self-disciplined and conscientious we can reasonably expect people living in adverse circumstances to be. We have placed too much emphasis on personal responsibility in recent decades. Why should people internalize the consequences of their behaviour when this behaviour is to a large extent the product of their environment?…(More)”

Service Design Impact Report: Public Sector


SDN: “In our study we have identified five different areas that are relevant for service design: policy making, cultural and organizational change, training and capacity building, citizen engagement and digitization.

Service design is taking a role in “policy creation”. Not only does it bring in-depth insights into the needs and constraints of citizens that help to design policies that really work for them – it also enables and facilitates processes of co-creation with different stakeholders. Policies are perceived as pieces of design work in constant development, made by people for people.

Service design is also taking a role in the process of cultural and organizational change. It collaborates with other experts in this field in order to enable change by reframing the challenges, by engaging stakeholders in the development of scenarios of futures that do not yet exist, and by prototyping envisioned scenarios. These processes change the role of public servants from experts to partners. It is no longer the public service doing something for the citizens, but doing it with them.

This new way of thinking and working demands not only a change in mindset, but also in the way of doing things. Service design helps to build these new capacities. Very often this is a combination of teaching and learning by doing: in the process of capacity building, small service design projects can be undertaken that create a sense of what service design can do and how to do it.

In this sense, service design works alongside existing practices of citizen engagement and enriches them with the design approach. People are no longer victims of circumstances but creators of environments.

Very often we find that the digitalization of public services is the entry point for designers. The task, then, is to enable designers to expand their capacities and to showcase how service design not only polishes the bits and bytes but really changes the way we live and work….(Full report)”

Making a success of digital government


New report by the Institute for Government (UK): “Making a Success of Digital Government says that after five years of getting more services online, government is hitting a wall. But despite some public services still running on last century’s computers, the real barrier to progress is not technology but the lack of political drive from the top.

Filing your tax return should be as easy as online banking. The technology exists. But outdated practices and policies mean that, in the case of many government services, we are still filling in and posting off forms to be manually processed. Taking digital government to the next level – which the report says would satisfy citizens and potentially save billions in the process – requires leadership. Civil servants need to improve their skills, old systems need to be overhauled, and policies need to be updated.

Daniel Thornton, report author, said:

“Tinkering around the edges of digital government has taken us only so far – now we need a fundamental change in the government’s approach. The starting point is recognising that digital is not just for geeks anymore – everyone in government must work to make it a success. There are huge potential savings to be made if the Government gets this right – which makes it all the more disappointing that the PM and Chancellor have not been as explicit about their commitment to digital government as their predecessors.”…(More)”

The People’s Code – Now on Code.gov


Tony Scott at the White House: “Today we’re launching Code.gov so that our Nation can continue to unlock the tremendous potential of the Federal Government’s software.

Over the past few years, we’ve taken unprecedented action to help Americans engage with their Government in new and meaningful ways.

Using Vote.gov, citizens can now quickly navigate their state’s voter registration process through an easy-to-use site. Veterans can go to Vets.gov to discover, apply for, track, and manage their benefits in one user-friendly place. And for the first time ever, citizens can send a note to President Obama simply by messaging the White House on Facebook.

By harnessing 21st Century technology and innovation, we’re improving the Federal Government’s ability to provide better citizen-centered services and are making the Federal Government smarter, savvier, and more effective for the American people. At the same time, we’re building many of these new digital tools, such as We the People, the White House Facebook bot, and Data.gov, in the open so that as the Government uses technology to re-imagine and improve the way people interact with it, others can too.

The code for these platforms is, after all, the People’s Code – and today we’re excited to announce that it’ll be accessible from one place, Code.gov, for the American people to explore, improve, and innovate.

The launch of Code.gov comes on the heels of the release of the Federal Source Code Policy, which seeks to further improve access to the Federal Government’s custom-developed software. It’s a step we took to help Federal agencies avoid duplicative custom software purchases and promote innovation and cross-agency collaboration. And it’s a step we took to enable the brightest minds inside and outside of government to work together to ensure that Federal code is reliable and effective.

Built in the open, the newly-launched Code.gov already boasts access to nearly 50 open source projects from over 10 agencies – and we expect this number to grow over the coming months as agencies work to implement the Federal Source Code Policy. Further, Code.gov will provide useful tools and best practices to help agencies implement the new policy. For example, starting today agencies can begin populating their enterprise code inventories using the metadata schema on Code.gov, discover various methods on how to build successful open source projects, and much more….(More)”
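To make the inventory idea concrete, here is a hypothetical sketch of a single project entry published as machine-readable JSON. The field names and values below are illustrative examples only, not the authoritative Code.gov metadata schema, which agencies should consult directly:

```python
import json

# Hypothetical sketch of one project entry in an agency code inventory.
# Field names are examples, not the authoritative Code.gov schema.
project_entry = {
    "name": "example-data-tool",
    "description": "Hypothetical utility for publishing agency datasets.",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/legalcode",
    "openSourceProject": 1,  # 1 = released as open source
    "repository": "https://github.com/example-agency/example-data-tool",
    "tags": ["open-data", "python"],
    "contact": {"email": "opensource@example.gov"},
}

# An agency inventory is a list of such entries, serialized as JSON:
inventory = json.dumps({"projects": [project_entry]}, indent=2)
print(inventory)
```

Publishing inventories in a common machine-readable format is what lets a single site like Code.gov aggregate projects across agencies.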

A Practical Guide for Harnessing the Power of Data


How does it do that? In a word: data.

Using a series of surveys and evaluations, Repair learned that once people participate in two volunteer opportunities, they’re more likely to continue volunteering regularly. Repair has used that and other findings to inform its operations and strategy, and to accelerate its work to encourage individuals to make an enduring commitment to public service.

Many purpose-driven organizations like Repair the World are committing more brainpower, time, and money to gathering data, and nonprofit and foundation professionals alike are recognizing the importance of that effort.

And yet there is a difference between just having data and using it well. Recent surveys have found that 94 percent of nonprofit professionals felt they were not using data effectively, and that 75 percent of foundation professionals felt that evaluations conducted by and submitted to grant makers did not provide any meaningful insights.

To remedy this, the Charles and Lynn Schusterman Family Foundation (one of Repair the World’s donors) developed the Data Playbook, a new tool to help more organizations harness the power of data to make smarter decisions, gain insights, and accelerate progress….

In the purpose-driven sector, our work is critically important for shaping lives and strengthening communities. Now is the time for all of us to commit to using the data at our fingertips to advance the broad range of causes we work on — education, health care, leadership development, social-justice work, and much more…

We are in this together. Let’s get started. (More)”

Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms


at Freedom to Tinker: “The advent of social apps, smart phones and ubiquitous computing has brought a great transformation to our day-to-day life. The incredible pace with which new and disruptive services continue to emerge challenges our perception of privacy. To keep pace with this rapidly evolving cyber reality, we need to devise agile methods and frameworks for developing privacy-preserving systems that align with users’ evolving privacy expectations.

Previous efforts have tackled this with the assumption that privacy norms are provided through existing sources such as law, privacy regulations, and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic to enable automatic inconsistency checks and efficient enforcement of the logic.

However, because many of the existing regulations and privacy handbooks were enacted well before the Internet revolution, they often lag behind and do not adequately address modern information systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information and consequently shape new, unconsidered information flows that can change our collective perception of privacy.

Crowdsourcing Contextual Privacy Norms

Armed with the theory of Contextual Integrity (CI), in our work we are exploring ways to uncover societal norms by leveraging advances in crowdsourcing technology.

In our recent paper, we present the methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine tune the existing privacy guidelines as well as get a better perspective on the users’ expectations of privacy.

CI defines privacy as a collection of norms (privacy rules) that reflect appropriate information flows between different actors. Norms capture who shares what, with whom, in what role, and under which conditions. For example, while you are comfortable sharing your medical information with your doctor, you might be less inclined to do so with your colleagues.

We use CI as a proxy to reason about privacy in the digital world and a gateway to understanding how people perceive privacy in a systematic way. Crowdsourcing is a great tool for this method. We are able to ask hundreds of people how they feel about a particular information flow, and then we can capture their input and map it directly onto the CI parameters. We used a simple template to write Yes-or-No questions to ask our crowdsourcing participants:

“Is it acceptable for the [sender] to share the [subject’s] [attribute] with [recipient] [transmission principle]?”

For example:

“Is it acceptable for the student’s professor to share the student’s record of attendance with the department chair if the student is performing poorly? ”

In our experiments, we leveraged Amazon’s Mechanical Turk (AMT) to ask 450 turkers over 1400 such questions. Each question represents a specific contextual information flow that users can approve, disapprove or mark under the Doesn’t Make Sense category; the last category could be used when 1) the sender is unlikely to have the information, 2) the receiver would already have the information, or 3) the question is ambiguous….(More)”
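The question template lends itself to programmatic generation: each survey item is just a CI tuple filled into the template. A minimal sketch follows; the second flow is an invented example, not an item from the study:

```python
# Generate Yes-or-No survey questions from contextual-integrity (CI)
# parameters, following the template quoted above.
TEMPLATE = ("Is it acceptable for the {sender} to share the {subject}'s "
            "{attribute} with the {recipient} {transmission_principle}?")

flows = [
    # The attendance example from the text:
    {"sender": "student's professor", "subject": "student",
     "attribute": "record of attendance", "recipient": "department chair",
     "transmission_principle": "if the student is performing poorly"},
    # A hypothetical additional flow, for illustration:
    {"sender": "patient's doctor", "subject": "patient",
     "attribute": "medical history", "recipient": "patient's employer",
     "transmission_principle": "if the patient consents"},
]

questions = [TEMPLATE.format(**flow) for flow in flows]
for q in questions:
    print(q)
```

Enumerating combinations of senders, attributes, recipients, and transmission principles this way is how a modest number of parameter values can yield the hundreds of distinct questions posed to crowd workers.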

Civic Crowd Analytics: Making sense of crowdsourced civic input with big data tools


Paper by  that: “… examines the impact of crowdsourcing on a policymaking process by using a novel data analytics tool called Civic CrowdAnalytics, applying Natural Language Processing (NLP) methods such as concept extraction, word association and sentiment analysis. By drawing on data from a crowdsourced urban planning process in the City of Palo Alto in California, we examine the influence of civic input on the city’s Comprehensive City Plan update. The findings show that the impact of citizens’ voices depends on the volume and the tone of their demands: a higher demand with a stronger tone results in more policy changes. We also found an interesting and unexpected result: the city government in Palo Alto more or less mirrors the online crowd’s voice, while citizen representatives filter rather than mirror the crowd’s will. While NLP methods show promise in making the analysis of crowdsourced input more efficient, there are several issues: accuracy rates should be improved, and there is still a considerable amount of human work involved in training the algorithm….(More)”
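To give a flavour of the sentiment-analysis step, here is a deliberately simplified, lexicon-based scorer. Production tools such as Civic CrowdAnalytics use far more sophisticated trained models; the lexicon and sample comments below are invented for illustration:

```python
# Toy lexicon-based sentiment scoring of civic comments.
# The word lists and comments are hypothetical examples.
POSITIVE = {"support", "great", "improve", "love", "safe"}
NEGATIVE = {"oppose", "dangerous", "worse", "traffic", "noise"}

def sentiment_score(comment: str) -> int:
    """Positive score = supportive tone; negative = critical tone."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "I support more bike lanes because they improve safety",
    "This plan means worse traffic and more noise",
]
scores = [sentiment_score(c) for c in comments]
for c, s in zip(comments, scores):
    print(s, c)
```

The gap between this sketch and a usable civic-analytics pipeline (handling negation, sarcasm, domain vocabulary) is precisely why the paper reports that accuracy rates still need improvement and that training the models remains labour-intensive.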

Essays on collective intelligence


Thesis by Yiftach Nagar: “This dissertation consists of three essays that advance our understanding of collective intelligence: how it works, how it can be used, and how it can be augmented. I combine theoretical and empirical work, spanning qualitative inquiry, lab experiments, and design, exploring how novel ways of organizing, enabled by advancements in information technology, can help us work better, innovate, and solve complex problems.

The first essay offers a collective sensemaking model to explain structurational processes in online communities. I draw upon Weick’s model of sensemaking as committed-interpretation, which I ground in a qualitative inquiry into Wikipedia’s policy discussion pages, in an attempt to explain how structuration emerges as interpretations are negotiated and then committed through conversation. I argue that the wiki environment provides conditions that help commitments form, strengthen and diffuse, and that this, in turn, helps explain trends of stabilization observed in previous research.

In the second essay, we characterize a class of semi-structured prediction problems, where patterns are difficult to discern, data are difficult to quantify, and changes occur unexpectedly. Making correct predictions under these conditions can be extremely difficult, and is often associated with high stakes. We argue that in these settings, combining predictions from humans and models can outperform predictions made by groups of people or computers alone. In laboratory experiments, we combined human and machine predictions, and found the combined predictions more accurate and more robust than predictions made by groups of only people or only machines.
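The intuition behind combining human and machine forecasts can be shown with a simple averaging sketch on synthetic data. The error model here is an assumption for illustration (noisy humans, systematically biased model), not the thesis's actual method or data:

```python
import random

rng = random.Random(42)
truth = [rng.random() for _ in range(1000)]

# Assumed error structure: humans are unbiased but noisy,
# while the model is precise but carries a systematic bias.
human = [t + rng.gauss(0, 0.20) for t in truth]
model = [t + 0.10 + rng.gauss(0, 0.05) for t in truth]

# Combine by simple averaging: the model damps human noise,
# and the humans dilute the model's bias.
combined = [(h + m) / 2 for h, m in zip(human, model)]

def mae(preds):
    """Mean absolute error against the true values."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

print(f"human {mae(human):.3f}  model {mae(model):.3f}  "
      f"combined {mae(combined):.3f}")
```

When the two sources err in different ways, as assumed here, the average beats either source alone, which is the qualitative pattern the essay reports for human-machine combinations.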

The third essay addresses a critical bottleneck in open-innovation systems: reviewing and selecting the best submissions, in settings where submissions are complex intellectual artifacts whose evaluation require expertise. To aid expert reviewers, we offer a computational approach we developed and tested using data from the Climate CoLab – a large citizen science platform. Our models approximate expert decisions about the submissions with high accuracy, and their use can save review labor, and accelerate the review process….(More)”