Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity


Paper by Gordon Pennycook and David G. Rand: “Inaccurate beliefs pose a threat to democracy and fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. Here we investigate the cognitive psychological profile of individuals who fall prey to fake news. We find a consistent positive correlation between the propensity to think analytically – as measured by the Cognitive Reflection Test (CRT) – and the ability to differentiate fake news from real news (“media truth discernment”). This was true regardless of whether the article’s source was indicated (which, surprisingly, also had no main effect on accuracy judgments). Contrary to the motivated reasoning account, CRT was just as positively correlated with media truth discernment, if not more so, for headlines that aligned with individuals’ political ideology relative to those that were politically discordant. The link between analytic thinking and media truth discernment was driven both by a negative correlation between CRT and perceptions of fake news accuracy (particularly among Hillary Clinton supporters), and a positive correlation between CRT and perceptions of real news accuracy (particularly among Donald Trump supporters). This suggests that factors that undermine the legitimacy of traditional news media may exacerbate the problem of inaccurate political beliefs among Trump supporters, who engaged in less analytic thinking and were overall less able to discern fake from real news (regardless of the news’ political valence). We also found consistent evidence that pseudo-profound bullshit receptivity negatively correlates with perceptions of fake news accuracy; a correlation that is mediated by analytic thinking. Finally, analytic thinking was associated with an unwillingness to share both fake and real news on social media. 
Our results indicate that the propensity to think analytically plays an important role in the recognition of misinformation, regardless of political valence – a finding that opens up potential avenues for fighting fake news….(More)”.
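The paper's central measure, "media truth discernment," is simply the gap between perceived accuracy of real headlines and perceived accuracy of fake ones, which can then be correlated with CRT scores. A minimal Python sketch of that computation (the participant data below are invented for illustration, not the study's):

```python
from statistics import mean

def discernment(real_ratings, fake_ratings):
    """Media truth discernment: mean perceived accuracy of real
    headlines minus mean perceived accuracy of fake headlines."""
    return mean(real_ratings) - mean(fake_ratings)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: CRT score (0-3), accuracy ratings (1-4)
# for real headlines, then for fake headlines.
participants = [
    (0, [2.5, 3.0], [2.8, 2.6]),
    (1, [3.0, 3.2], [2.2, 2.0]),
    (2, [3.1, 3.4], [1.8, 1.6]),
    (3, [3.5, 3.6], [1.3, 1.2]),
]
crt = [p[0] for p in participants]
disc = [discernment(p[1], p[2]) for p in participants]
print(round(pearson(crt, disc), 3))  # positive correlation, near 1
```

With this toy data, higher CRT scores go with larger discernment, mirroring (in cartoon form) the positive correlation the paper reports.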

From Katrina To Harvey: How Disaster Relief Is Evolving With Technology


Cale Guthrie Weissman at Fast Company: “Open data may sound like a nerdy thing, but this weekend has proven it’s also a lifesaver in more ways than one.

As Hurricane Harvey pelted the southern coast of Texas, a local open-data resource helped provide accurate and up-to-date information to the state’s residents. Inside Harris County’s intricate bayou system–intended to both collect water and effectively drain it–gauges were installed to sense when water is overflowing. The sensors transmit the data to a website, which has become a vital go-to for Houston residents….

This open access to flood gauges is just one of the many ways new tech-driven projects have helped improve responses to disasters over the years. “There’s no question that technology has played a much more significant role,” says Lemaitre, “since even Hurricane Sandy.”

While Sandy was noted in 2012 for its ability to connect people with Twitter hashtags and other relatively nascent social apps like Instagram, the last few years have brought a paradigm shift in terms of how emergency relief organizations integrate technology into their responses….

Social media isn’t just for the residents. Local and national agencies–including FEMA–rely on this information and are using it to help create faster and more effective disaster responses. Following the disaster of Hurricane Katrina, FEMA has worked over the last decade to revamp its culture and methods for reacting to these sorts of situations. “You’re seeing the federal government adapt pretty quickly,” says Lemaitre.

There are a few examples of this. For instance, FEMA now has an app to push necessary information about disaster preparedness. The agency also employs people to cull the open web for information that would help make its efforts better and more effective. These “social listeners” look at all the available Facebook, Snapchat, and other social media posts in aggregate. Crews are brought on during disasters to gather intelligence, and then report about areas that need relief efforts–getting “the right information to the right people,” says Lemaitre.

There’s also been a change in how this information is used. Often, when disasters are predicted, people send supplies to the affected areas as a way to try and help out. Yet they don’t know exactly where they should send it, and local organizations sometimes become inundated. This creates a huge logistical nightmare for relief organizations that are sitting on thousands of blankets and tarps in one place when they should be actively dispersing them across hundreds of miles.

“Before, you would just have a deluge of things dropped on top of a disaster that weren’t particularly helpful at times,” says Lemaitre. Now people are using sites like Facebook to ask where they should direct the supplies. For example, after a bad flood in Louisiana last year, a woman announced she had food and other necessities on Facebook and was able to direct the supplies to an area in need. This, says Lemaitre, is “the most effective way.”

Put together, Lemaitre has seen agencies evolve with technology to help create better systems for quicker disaster relief. This has also created a culture of learning, updating, and reacting in real time. Meanwhile, more data is becoming open, which is helping people and agencies alike. (The National Weather Service, which has long trumpeted its open data for all, has become a revered stalwart for such information, and has already proven indispensable in Houston.)

Most important, the pace of technology has caused organizations to change their own procedures. Twelve years ago, during Katrina, the protocol was to wait until an assessment before deploying any assistance. Now organizations like FEMA know that just doesn’t work. “You can’t afford to lose time,” says Lemaitre. “Deploy as much as you can and be fast about it–you can always scale back.”

It’s important to note that, even with rapid technological improvements, there’s no way to compare one disaster response to another–it’s simply not apples to apples. All the same, organizations are still learning about where they should be looking and how to react, connecting people to their local communities when they need them most….(More)”.

From ‘Opening Up’ to Democratic Renewal: Deepening Public Engagement in Legislative Committees


Carolyn M. Hendriks and Adrian Kay in Government and Opposition: “Many legislatures around the world are undergoing a ‘participatory makeover’. Parliaments are hosting open days and communicating the latest parliamentary updates via websites and social media. Public activities such as these may make parliaments more informative and accessible, but much more could be done to foster meaningful democratic renewal. In particular, participatory efforts ought to be engaging citizens in a central task of legislatures – to deliberate and make decisions on collective issues. In this article, the potential of parliamentary committees to bring the public closer to legislative deliberations is considered. Drawing on insights from the practice and theory of deliberative democracy, the article discusses why and how deeper and more inclusive forms of public engagement can strengthen the epistemic, representative and deliberative capacities of parliamentary committees. Practical examples are considered to illustrate the possibilities and challenges of broadening public involvement in committee work….(More)”

Crowdsourcing the Charlottesville Investigation


Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.

The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.

After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.

Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.

Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.

“If someone says: ‘Hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so-and-so.’ Well, ‘pretty sure’ is not going to cut it.”

Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on either a search warrant, an arrest warrant, or both….(More)”.

Inside the Lab That’s Quantifying Happiness


Rowan Jacobsen at Outside: “In Mississippi, people tweet about cake and cookies an awful lot; in Colorado, it’s noodles. In Mississippi, the most-tweeted activity is eating; in Colorado, it’s running, skiing, hiking, snowboarding, and biking, in that order. In other words, the two states fall on opposite ends of the behavior spectrum. If you were to assign a caloric value to every food mentioned in every tweet by the citizens of the United States and a calories-burned value to every activity, and then totaled them up, you would find that Colorado tweets the best caloric ratio in the country and Mississippi the worst.
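The computation described above, assigning a caloric value to each food word and a calories-burned value to each activity word, then taking the ratio, can be sketched in a few lines. This is only a toy illustration with invented lexicons; the real Lexicocalorimeter uses far larger, carefully curated word lists and scoring:

```python
# Toy lexicons: approximate calories "in" per food mention and
# calories "out" per activity mention (values invented for illustration).
FOOD_CALORIES = {"cake": 350, "cookies": 160, "noodles": 220}
ACTIVITY_BURN = {"running": 600, "skiing": 500, "hiking": 400}

def caloric_ratio(tweets):
    """Calories-out / calories-in implied by the words in a set of tweets.
    Higher values suggest more activity language relative to food language."""
    calories_in = calories_out = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            calories_in += FOOD_CALORIES.get(word, 0)
            calories_out += ACTIVITY_BURN.get(word, 0)
    return calories_out / calories_in if calories_in else float("inf")

colorado = ["great morning running then skiing", "noodles for dinner"]
mississippi = ["cake and cookies today", "more cake tonight"]
print(caloric_ratio(colorado) > caloric_ratio(mississippi))  # True
```

Aggregated over millions of geotagged tweets per state, a ratio like this is what lets Colorado and Mississippi land at opposite ends of the spectrum.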

Sure, you’d be forgiven for doubting people’s honesty on Twitter. On those rare occasions when I destroy an entire pint of Ben and Jerry’s, I most assuredly do not tweet about it. Likewise, I don’t reach for my phone every time I strap on a pair of skis.

And yet there’s this: Mississippi has the worst rate of diabetes and heart disease in the country and Colorado has the best. Mississippi has the second-highest percentage of obesity; Colorado has the lowest. Mississippi has the worst life expectancy in the country; Colorado is near the top. Perhaps we are being more honest on social media than we think. And perhaps social media has more to tell us about the state of the country than we realize.

That’s the proposition of Peter Dodds and Chris Danforth, who co-direct the University of Vermont’s Computational Story Lab, a warren of whiteboards and grad students in a handsome brick building near the shores of Lake Champlain. Dodds and Danforth are applied mathematicians, but they would make a pretty good comedy duo. When I stopped by the lab recently, both were in running clothes and cracking jokes. They have an abundance of curls between them and the wiry energy of chronic thinkers. They came to UVM in 2006 to start the Vermont Complex Systems Center, which crunches big numbers from big systems and looks for patterns. Out of that, they hatched the Computational Story Lab, which sifts through some of that public data to discern the stories we’re telling ourselves. “It took us a while to come up with the name,” Dodds told me as we shotgunned espresso and gazed into his MacBook. “We were going to be the Department of Recreational Truth.”

This year, they teamed up with their PhD student Andy Reagan to launch the Lexicocalorimeter, an online tool that uses tweets to compute the calories in and calories out for every state. It’s no mere party trick; the Story Labbers believe the Lexicocalorimeter has important advantages over slower, more traditional methods of gathering health data….(More)”.

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and, augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

Rise of the Government Chatbot


Zack Quaintance at Government Technology: “A robot uprising has begun, except instead of overthrowing mankind so as to usher in a bleak yet efficient age of cold judgment and colder steel, this uprising is one of friendly robots (so far).

Which is all an alarming way to say that many state, county and municipal governments across the country have begun to deploy relatively simple chatbots, aimed at helping users get more out of online public services such as a city’s website, pothole reporting and open data. These chatbots have been installed in recent months in a diverse range of places including Kansas City, Mo.; North Charleston, S.C.; and Los Angeles — and by many indications, there is an accompanying wave of civic tech companies that are offering this tech to the public sector.

They range from simple to complex in scope, and most of the jurisdictions currently using them say they are doing so on somewhat of a trial or experimental basis. That’s certainly the case in Kansas City, where the city now has a Facebook chatbot to help users get more out of its open data portal.

“The idea was never to create a final chatbot that was super intelligent and amazing,” said Eric Roche, Kansas City’s chief data officer. “The idea was let’s put together a good effort, and put it out there and see if people find it interesting. If they use it, get some lessons learned and then figure out — either in our city, or with developers, or with people like me in other cities, other chief data officers and such — and talk about the future of this platform.”

Roche developed Kansas City’s chatbot earlier this year by working after hours with Code for Kansas City, the local Code for America brigade — and he did so because, in the four-plus years the city’s open data program has been active, there have been regular concerns that the info available through it was hard to navigate, search and use for average citizens who aren’t data scientists and don’t work for the city (a common issue currently being addressed by many jurisdictions). The idea behind the Facebook chatbot is that Roche can program it with a host of answers to the most prevalent questions, enabling it to both help interested users and save him time for other work….
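The kind of bot Roche describes, canned answers keyed to the most prevalent questions, can be sketched with simple keyword matching. The question/answer pairs below are invented for illustration, not Kansas City's actual content:

```python
# Minimal FAQ-style chatbot sketch: each entry pairs trigger keywords
# with a canned reply; the best keyword overlap wins.
FAQ = [
    ({"pothole", "report"}, "Potholes can be reported through the 311 service."),
    ({"open", "data"}, "The open data portal lists city datasets by topic."),
    ({"budget"}, "Budget datasets are published in the finance section."),
]
FALLBACK = "Sorry, I don't know that one yet."

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    best_score, best_answer = 0, FALLBACK
    for keywords, reply in FAQ:
        score = len(keywords & words)
        if score > best_score:
            best_score, best_answer = score, reply
    return best_answer

print(answer("How do I report a pothole?"))
```

Production chatbots typically replace the keyword overlap with intent classification, but the shape is the same: prevalent questions in, canned answers out, with a fallback when nothing matches.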

In North Charleston, S.C., the city has adopted a text-based chatbot, which goes beyond common 311-style interfaces by allowing users to report potholes or any other lapses in city services they may notice. It also allows them to ask questions, which it subsequently answers by crawling city websites and replying with relevant links, said Ryan Johnson, the city’s public relations coordinator.

North Charleston has done this by partnering with a local tech startup that has deep roots in the area’s local government. The company is called Citibot …

With Citibot, residents can report a pothole at 2 a.m., or they can get info about street signs or trash pickup sent right to their phones.

There are also more complex chatbot technologies taking hold at both the civic and state levels, in Los Angeles and Mississippi, to be exact.

Mississippi’s chatbot is called Missi, and its capabilities are vast and nuanced. Residents can even use it for help submitting online payments. It’s accessible by clicking a small chat icon on the side of the website.

Back in May, Los Angeles rolled out Chip, or City Hall Internet Personality, on the Los Angeles Business Assistance Virtual Network. The chatbot aims to assist visitors by operating as a 24/7 digital assistant for visitors to the site, helping them navigate it and better understand its services by answering their inquiries. It is capable of presenting info from anywhere on the site, and it can even go so far as helping users fill out forms or set up email alerts….(More)”

The Nudging Divide in the Digital Big Data Era


Julia M. Puaschunder in the International Robotics & Automation Journal: “Since the end of the 1970s, a wide range of psychological, economic and sociological laboratory and field experiments has shown that human beings deviate from rational choice, and that standard neo-classical profit-maximization axioms fail to explain how humans actually behave. Behavioral economists proposed to nudge and wink citizens toward better choices in many different applications. While the motivation behind nudging appears to be a noble endeavor to improve people’s lives around the world in very many different applications, the nudging approach raises questions of social hierarchy and class division. The motivating force of the nudgital society may open a gate to exploitation of the populace and – based on privacy infringements – strip people involuntarily of their own decision power in the shadow of legally-permitted libertarian paternalism and under the cloak of the noble goal of welfare-improving global governance. Nudging enables nudgers to plunder the simple uneducated citizen, who is neither aware of the nudging strategies nor able to oversee the tactics used by the nudgers.

The nudgers are thereby legally protected by the democratically assigned positions they hold or by the outsourcing strategies they use, in which social media plays a crucial role. Social media forces are captured as unfolding a class-dividing nudgital society, in which the providers of social communication tools can reap surplus value from the information shared by social media users. The social media provider thereby becomes a capitalist-industrialist, who benefits from the information shared by social media users, or so-called consumer-workers, who share private information in their wish to interact with friends and communicate with the public. The social media capitalist-industrialist reaps surplus value from the social media consumer-workers’ information sharing, which stems from nudging social media users. For one, social media space can be sold to marketers who can constantly penetrate the consumer-worker in a subliminal way with advertisements. But nudging also occurs as the big data compiled about the social media consumer-worker can be resold to marketers and technocrats to draw inferences about consumer choices, contemporary market trends or individual personality cues used for governance control, such as, for instance, border protection and tax compliance purposes.

The law of motion of the nudging societies holds an unequal concentration of power among those who have access to compiled data and who abuse their position under the cloak of hidden persuasion and in the shadow of paternalism. In the nudgital society, information, education and differing social classes determine who the nudgers and who the nudged are. Humans end up in different silos or bubbles that differ in who has power and control and who is deceived and being ruled. The owners of the means of governance are able to reap a surplus value through hidden persuasion, protected by the legal vacuum around curbing libertarian paternalism, in the moral shadow of unnoticeable guidance and under the cloak of the presumption that some know what is more rational than others. All these features lead to an unprecedented contemporary class struggle between the nudgers (those who nudge) and the nudged (those who are nudged), who are divided by the implicit means of governance in the digital scenery. In this light, governing our common welfare through deceptive means and outsourced governance on social media appears critical. In combination with the underlying assumption that the nudgers know better what is right, just and fair within society, the digital age and social media tools hold potentially unprecedented ethical challenges….(More)”

Using Social Media To Predict the Future: A Systematic Literature Review


Review by Lawrence Phillips, Chase Dowling, Kyle Shaffer, Nathan Hodas and Svitlana Volkova: “Social media (SM) data provides a vast record of humanity’s everyday thoughts, feelings, and actions at a resolution previously unimaginable. Because user behavior on SM is a reflection of events in the real world, researchers have realized they can use SM in order to forecast, making predictions about the future. The advantage of SM data is its relative ease of acquisition, large quantity, and ability to capture socially relevant information, which may be difficult to gather from other data sources. Promising results exist across a wide variety of domains, but one will find little consensus regarding best practices in either methodology or evaluation. In this systematic review, we examine relevant literature over the past decade, tabulate mixed results across a number of scientific disciplines, and identify common pitfalls and best practices. We find that SM forecasting is limited by data biases, noisy data, lack of generalizable results, a lack of domain-specific theory, and underlying complexity in many prediction tasks. But despite these shortcomings, recurring findings and promising results continue to galvanize researchers and demand continued investigation. Based on the existing literature, we identify research practices which lead to success, citing specific examples in each case and making recommendations for best practices. These recommendations will help researchers take advantage of the exciting possibilities offered by SM platforms….(More)”

#WhereIsMyName ?


Mujib Mashal in The New York Times: “These are some of the terms Afghan men use to refer to their wives in public instead of their names, the sharing of which they see as a grave dishonor worthy of violence: Mother of Children, My Household, My Weak One or sometimes, in far corners, My Goat or My Chicken.

Women also may be called Milk-sharer or Black-headed. The go-to word for Afghans to call a woman in public, no matter her status, is Aunt.

But a social media campaign to change this custom has been percolating in recent weeks, initiated by young women. The campaign comes with a hashtag in local languages that addresses the core of the issue and translates as #WhereIsMyName.

The activists’ aim is both to challenge women to reclaim their most basic identity, and to break the deep-rooted taboo that prevents men from mentioning their female relatives’ names in public….

Like many social media efforts, this one began small, with several posts out of Herat Province in the west. Since then, more activists have tried to turn it into a topic of conversation by challenging celebrities and government officials to share the names of their wives and mothers.

The discussion has now made it to the regular media, with articles in newspapers and conversations on television and radio talk shows.

Members of the Parliament, senior government officials and artists have come forward in support, publicly declaring the identities of the female members of their families….(More)”