We Feel: Taking the emotional pulse of the world.


We Feel is a project that explores whether social media – specifically Twitter – can provide an accurate, real-time signal of the world’s emotional state. Hundreds of millions of tweets are posted every day. A huge topic of conversation is, of course, the authors themselves: what they are up to, what they have encountered, and how they feel about it.

We Feel is about tapping that signal to better understand the prevalence and drivers of emotions. We hope it can uncover, for example, where people are most at risk of depression and how the mood and emotions of an area/region fluctuate over time. It could also help understand questions such as how strongly our emotions depend on social, economic and environmental factors such as the weather, time of day, day of the week, news of a major disaster or a downturn in the economy.

Whilst there is already a wealth of academic research on mental health and wellbeing, such as the Black Dog Index, this information is traditionally gathered by surveys and isn’t a real-time indication of what’s happening day to day. The traditional approach is time-consuming and expensive. Twitter offers a large and fast sample of information that could hold the key to a real-time view of our emotions….

See also: Milne, D., Paris, C., Christensen, H., Batterham, P. and O’Dea, B. (2015) We Feel: Taking the emotional pulse of the world. In Proceedings of the 19th Triennial Congress of the International Ergonomics Association (IEA 2015), Melbourne, Victoria, Australia, August 2015.

The Routledge Companion to Social Media and Politics


Book edited by Axel Bruns, Gunn Enli, Eli Skogerbo, Anders Olof Larsson, Christian Christensen: “Social media are now widely used for political protests, campaigns, and communication in developed and developing nations, but available research has not yet paid sufficient attention to experiences beyond the US and UK. This collection tackles this imbalance head-on, compiling cutting-edge research across six continents to provide a comprehensive, global, up-to-date review of recent political uses of social media.

Drawing together empirical analyses of the use of social media by political movements and in national and regional elections and referenda, The Routledge Companion to Social Media and Politics presents studies ranging from Anonymous and the Arab Spring to the Greek Aganaktismenoi, and from South Korean presidential elections to the Scottish independence referendum. The book is framed by a selection of keystone theoretical contributions, evaluating and updating existing frameworks for the social media age….(More)”

Privacy in Public Spaces: What Expectations of Privacy Do We Have in Social Media Intelligence?


Paper by Edwards, Lilian and Urquhart, Lachlan: “In this paper we give a basic introduction to the transition in contemporary surveillance from top-down traditional police surveillance to profiling and “pre-crime” methods. We then review in more detail the rise of open source (OSINT) and social media (SOCMINT) intelligence and its use by law enforcement and security authorities. Following this we consider what, if any, privacy protection is currently given in UK law to SOCMINT. Given the largely negative answer to that question, we analyse what reasonable expectations of privacy there may be for users of public social media, with reference to existing case law on Article 8 of the ECHR. Two factors in particular are argued to be supportive of a reasonable expectation of privacy in open public social media communications: first, the failure of many social network users to perceive the environment where they communicate as “public”; and secondly, the impact of search engines (and other automated analytics) on traditional conceptions of structured dossiers as most problematic for state surveillance. Lastly, we conclude that existing law does not provide adequate protection for open SOCMINT and that this will be increasingly significant as more and more personal data is disclosed and collected in public without well-defined expectations of privacy….(More)”

OpenAI won’t benefit humanity without data-sharing


 at the Guardian: “There is a common misconception about what drives the digital-intelligence revolution. People seem to have the idea that artificial intelligence researchers are directly programming an intelligence; telling it what to do and how to react. There is also the belief that when we interact with this intelligence we are processed by an “algorithm” – one that is subject to the whims of the designer and encodes his or her prejudices.

OpenAI, a new non-profit artificial intelligence company that was founded on Friday, wants to develop digital intelligence that will benefit humanity. By sharing its sentient algorithms with all, the venture, backed by a host of Silicon Valley billionaires, including Elon Musk and Peter Thiel, wants to avoid the existential risks associated with the technology.

OpenAI’s launch announcement was timed to coincide with this year’s Neural Information Processing Systems conference: the main academic outlet for scientific advances in machine learning, which I chaired. Machine learning is the technology that underpins the new generation of AI breakthroughs.

One of OpenAI’s main ideas is to collaborate openly, publishing code and papers. This is admirable and the wider community is already excited by what the company could achieve.

OpenAI is not the first company to target digital intelligence, and certainly not the first to publish code and papers. Both Facebook and Google have already shared code. They were also present at the same conference. All three companies hosted parties with open bars, aiming to entice the latest and brightest minds.

However, the way machine learning works means that making algorithms available isn’t necessarily as useful as one might think. A machine-learning algorithm is subtly different from popular perception.

Just as in baking we don’t have control over how the cake will emerge from the oven, in machine learning we don’t control every decision that the computer will make. In machine learning the quality of the ingredients, the quality of the data provided, has a massive impact on the intelligence that is produced.

For intelligent decision-making the recipe needs to be carefully applied to the data: this is the process we refer to as learning. The result is the combination of our data and the recipe. We need both to make predictions.

By sharing their algorithms, Facebook and Google are merely sharing the recipe. Someone has to provide the eggs and flour and provide the baking facilities (which in Google and Facebook’s case are vast data-computation facilities, often located near hydroelectric power stations for cheaper electricity).
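The recipe-versus-ingredients point can be made concrete with a minimal sketch (all names and numbers here are hypothetical, not from OpenAI, Google or Facebook): the same shared learning algorithm produces different models – different "intelligence" – depending on the data it is fed.

```python
def fit_slope(xs, ys):
    """The shared 'recipe': least-squares fit of y = w * x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Two organisations hold the identical recipe but different data.
data_a = ([1, 2, 3, 4], [2, 4, 6, 8])    # underlying relationship y = 2x
data_b = ([1, 2, 3, 4], [3, 6, 9, 12])   # underlying relationship y = 3x

model_a = fit_slope(*data_a)  # -> 2.0
model_b = fit_slope(*data_b)  # -> 3.0

# Same algorithm, different data, diverging predictions for x = 10:
print(model_a * 10)  # 20.0
print(model_b * 10)  # 30.0
```

Publishing `fit_slope` alone – the recipe – tells you nothing about which model you will get; the data decides that.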

So even before it starts, an open question for OpenAI is how it will ensure access to data on the necessary scale to make progress?…(More)”

Data Science ethics


Gov.uk blog: “If Tesco knows day-to-day how poorly the nation is, how can Government access similar insights so it can better plan health services? If Airbnb can give you a tailored service depending on your tastes, how can Government provide people with the right support to help them back into work in a way that is right for them? If companies are routinely using social media data to get feedback from their customers to improve their services, how can Government also use publicly available data to do the same?

Data science allows us to use new types of data and powerful tools to analyse this more quickly and more objectively than any human could. It can put us in the vanguard of policymaking – revealing new insights that lead to better and more tailored interventions. And it can help reduce costs, freeing up resources to spend on more serious cases.

But some of these data uses and machine-learning techniques are new and still relatively untested in Government. Of course, we operate within legal frameworks such as the Data Protection Act and Intellectual Property law. These are flexible but don’t always talk explicitly about the new challenges data science throws up. For example, how are you to explain the decision-making process of a deep-learning black-box algorithm? And if you were able to, how would you do so in plain English and not a row of 0s and 1s?

We want data scientists to feel confident to innovate with data, alongside the policy makers and operational staff who make daily decisions on the data that the analysts provide. That’s why we are creating an ethical framework which brings together the relevant parts of the law and ethical considerations into a simple document that helps Government officials decide what they can do and what they should do. We have a moral responsibility to maximise the use of data – which is never more apparent than after incidents of abuse or crime are left undetected – as well as to pay heed to the potential risks of these new tools. The guidelines are draft and not formal government policy, but we want to share them more widely in order to help iterate and improve them further….

So what’s in the framework? There is more detail in the fuller document, but it is based around six key principles:

  1. Start with a clear user need and public benefit: this will help you justify the level of data sensitivity and method you use
  2. Use the minimum level of data necessary to fulfill the public benefit: there are many techniques for doing so, such as de-identification, aggregation or querying against data
  3. Build robust data science models: the model is only as good as the data it contains and while machines are less biased than humans they can get it wrong. It’s critical to be clear about the confidence of the model and think through unintended consequences and biases contained within the data
  4. Be alert to public perceptions: put simply, what would a normal person on the street think about the project?
  5. Be as open and accountable as possible: Transparency is the antiseptic for unethical behavior. Aim to be as open as possible (with explanations in plain English), although in certain public protection cases the ability to be transparent will be constrained.
  6. Keep data safe and secure: this is not restricted to data science projects but we know that the public are most concerned about losing control of their data….(More)”
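Principle 2's techniques – aggregation and de-identification – can be illustrated with a small sketch (the record format, region names, and the threshold of 5 are all hypothetical, not taken from the framework): report regional counts rather than raw records, and suppress any cell small enough to point at an individual.

```python
from collections import Counter

def aggregate_by_region(records, min_cell_size=5):
    """Aggregation with small-cell suppression: publish counts per region,
    hiding any count below the threshold so individuals can't be singled out."""
    counts = Counter(r["region"] for r in records)
    return {region: (n if n >= min_cell_size else "<suppressed>")
            for region, n in counts.items()}

records = (
    [{"region": "North", "name": f"person{i}"} for i in range(7)]
    + [{"region": "South", "name": "person7"}]   # a single record: identifiable
)
print(aggregate_by_region(records))
# {'North': 7, 'South': '<suppressed>'}
```

The analyst sees only what the public benefit requires; the one-person "South" cell never leaves the system.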

Citizenship, Social Media, and Big Data: Current and Future Research in the Social Sciences


Homero Gil de Zúñiga at Social Science Computer Review: “This special issue of the Social Science Computer Review provides a sample of the latest strategies employing large data sets in social media and political communication research. The proliferation of information communication technologies, social media, and the Internet, alongside the ubiquity of high-performance computing and storage technologies, has ushered in the era of computational social science. However, in no way does the use of “big data” represent a standardized area of inquiry in any field. This article briefly summarizes pressing issues when employing big data for political communication research. Major challenges remain to ensure the validity and generalizability of findings. Strong theoretical arguments are still a central part of conducting meaningful research. In addition, ethical practices concerning how data are collected remain an area of open discussion. The article surveys studies that offer unique and creative ways to combine methods and introduce new tools while at the same time address some solutions to ethical questions….(More)”

The Upside of Slacktivism


 in Pacific Standard: “When you think of meaningful political action, you probably think of the March on Washington for Jobs and Freedom, or perhaps ACT-UP‘s 1990 protests in San Francisco. You probably don’t think of clicking “like” or “share” on Facetwitstagram—though a new study suggests that those likes and shares may be just as important as marching in the streets, singing songs, and carrying signs.

“The efficacy of online networks in disseminating timely information has been praised by many commentators; at the same time, users are often derided as ‘slacktivists’ because of the shallow commitment involved in clicking a forwarding button,” writes a team led by Pablo Barberá, a political scientist at New York University, in the journal PLoS One.

In other words, it’s easy to argue that sharing a post about climate change and whatnot has no value, since it involves no sacrifice—no standoffs with angry police, no going to jail over taxes you didn’t pay because you opposed the Mexican-American War, not even lost shoes.

On the other hand, maybe sacrifice isn’t the point. Maybe it’s getting attention, and, Barberá and colleagues suggest, slacktivism is actually pretty good at that part—a consequence of just how easy it is to spread the word with the click of a mouse.

The team reached that conclusion after analyzing tens of millions of tweets sent by nearly three million users during the May 2013 anti-government protests in Gezi Park, Istanbul. Among other things, the team identified which tweets were originals rather than retweets, who retweeted whom, and how many followers each user had. That meant Barberá and his team could identify not only how information flowed within the network of protesters, but also how many people that information reached.

Most original tweets came from a relatively small group of protesters using hashtags such as #gezipark, suggesting that information flowed from a core group of protesters toward a less-active periphery. Geographic data backed that up: around 18 percent of core tweeters were physically present for the Gezi Park demonstrations, compared to a quarter of a percent of peripheral tweeters….(More)”
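The core/periphery split described above can be sketched in a few lines (the record format is hypothetical and this is a simplification of the study's actual network analysis): treat users who authored at least one original tweet as the core, and users who only retweeted as the periphery.

```python
def core_periphery_split(tweets):
    """Partition users: 'core' authored at least one original tweet,
    'periphery' only ever retweeted others."""
    core, periphery = set(), set()
    for t in tweets:
        (periphery if t["is_retweet"] else core).add(t["user"])
    periphery -= core  # one original tweet is enough to count as core
    return core, periphery

tweets = [
    {"user": "alice", "is_retweet": False},  # original tweet -> core
    {"user": "bob",   "is_retweet": True},
    {"user": "alice", "is_retweet": True},   # still core: she also tweeted
    {"user": "carol", "is_retweet": True},
]
core, periphery = core_periphery_split(tweets)
print(sorted(core), sorted(periphery))  # ['alice'] ['bob', 'carol']
```

Cross-referencing each group against follower counts or location data, as the study did, then shows how far the periphery amplifies the core's message.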

The Internet’s Loop of Action and Reaction Is Worsening


Farhad Manjoo in the New York Times: “Donald J. Trump and Hillary Clinton said this week that we should think about shutting down parts of the Internet to stop terrorist groups from inspiring and recruiting followers in distant lands. Mr. Trump even suggested an expert who’d be perfect for the job: “We have to go see Bill Gates and a lot of different people that really understand what’s happening, and we have to talk to them — maybe, in certain areas, closing that Internet up in some way,” he said on Monday in South Carolina.

Many online responded to Mr. Trump and Mrs. Clinton with jeers, pointing out both constitutional and technical limits to their plans. Mr. Gates, the Microsoft co-founder who now spends much of his time on philanthropy, has as much power to close down the Internet as he does to fix Mr. Trump’s hair.

Yet I had a different reaction to Mr. Trump and Mrs. Clinton’s fantasy of a world in which you could just shut down parts of the Internet that you didn’t like: Sure, it’s impossible, but just imagine if we could do it, just for a bit. Wouldn’t it have been kind of a pleasant dream world, in these overheated last few weeks, to have lived free of social media?

Hear me out. If you’ve logged on to Twitter and Facebook in the waning weeks of 2015, you’ve surely noticed that the Internet now seems to be on constant boil. Your social feed has always been loud, shrill, reflexive and ugly, but this year everything has been turned up to 11. The Islamic State’s use of the Internet is perhaps only the most dangerous manifestation of what, this year, became an inescapable fact of online life: the extremists of all stripes are ascendant, and just about everywhere you look, much of the Internet is terrible.

“The academic in me says that discourse norms have shifted,” said Susan Benesch, a faculty associate at Harvard’s Berkman Center for Internet & Society and the director of the Dangerous Speech Project, an effort to study speech that leads to violence. “It’s become so common to figuratively walk through garbage and violent imagery online that people have accepted it in a way. And it’s become so noisy that you have to shout more loudly, and more shockingly, to be heard.”

You might argue that the angst online is merely a reflection of the news. Terrorism, intractable warfare, mass shootings, a hyperpartisan presidential race, police brutality, institutional racism and the protests over it have dominated the headlines. It’s only natural that the Internet would get a little out of control over that barrage.

But there’s also a way in which social networks seem to be feeding a cycle of action and reaction. In just about every news event, the Internet’s reaction to the situation becomes a follow-on part of the story, so that much of the media establishment becomes trapped in escalating, infinite loops of 140-character, knee-jerk insta-reaction.

“Presidential elections have always been pretty nasty, but these days the mudslinging is omnipresent in a way that’s never been the case before,” said Whitney Phillips, an assistant professor of literary studies and writing at Mercer University, who is the author of “This Is Why We Can’t Have Nice Things,” a study of online “trolling.” “When Donald Trump says something that I would consider insane, it’s not just that it gets reported on by one or two or three outlets, but it becomes this wave of iterative content on top of content on top of content in your feed, taking over everything you see.”

The spiraling feedback loop is exhausting and rarely illuminating. The news brims with instantly produced “hot takes” and a raft of fact-free assertions. Everyone — yours truly included — is always on guard for the next opportunity to meme-ify outrage: What crazy thing did Trump/Obama/The New York Times/The New York Post/Rush Limbaugh/etc. say now, and what clever quip can you fit into a tweet to quickly begin collecting likes?

There is little room for indulging nuance, complexity, or flirting with the middle ground. In every issue, you are either with one aggrieved group or the other, and the more stridently you can express your disdain — short of hurling profanities at the president on TV, which will earn you a brief suspension — the better reaction you’ll get….(More)”

Join Campaigns, Shop Ethically, Hit the Man Where It Hurts—All Within an App


PSFK: “Buycott is an app that wants to make shopping ethically a little easier. Join campaigns to support causes you care about, scan barcodes on products you’re considering to view company practices, and voice your concerns directly through the free app, available for iOS and Android.


Ethical campaigns are crowdsourced and range from environmental to political and social concerns. Current campaigns include demanding GMO labeling, supporting fair trade, ending animal testing, and more. You can read a description of the issue in question and see a list of companies to avoid and support under each campaign.

Scan barcodes of potential purchases to see if the parent companies behind them hold up to your ethical standards. If a company falls short, the app will suggest more ethically aligned alternatives.
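The scan-and-check flow amounts to two lookups and a set test. A minimal sketch, assuming nothing about Buycott's actual data or API (every barcode, brand, and company name below is invented):

```python
# Hypothetical data tables: barcode -> brand, brand -> parent company,
# plus one campaign's avoid/support lists.
BARCODE_TO_BRAND = {"0123456789": "GreenLeaf Tea"}
BRAND_TO_PARENT = {"GreenLeaf Tea": "MegaFoods Corp"}
CAMPAIGN = {"avoid": {"MegaFoods Corp"}, "support": {"FairBrew Co"}}

def check_product(barcode):
    """Trace a scanned barcode to its parent company and test it
    against the campaign, suggesting alternatives on a miss."""
    brand = BARCODE_TO_BRAND.get(barcode)
    parent = BRAND_TO_PARENT.get(brand)
    if parent in CAMPAIGN["avoid"]:
        return {"verdict": "avoid", "parent": parent,
                "alternatives": sorted(CAMPAIGN["support"])}
    return {"verdict": "ok", "parent": parent, "alternatives": []}

print(check_product("0123456789"))
# {'verdict': 'avoid', 'parent': 'MegaFoods Corp', 'alternatives': ['FairBrew Co']}
```

Because the check resolves to the parent company rather than the brand on the label, many superficially distinct products collapse onto the same handful of corporate parents.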


According to Ivan Pardo, founder and CEO of Buycott, the app is designed to help consumers make informed shopping decisions “they can feel good about.”

“As consumers become increasingly conscientious about the impact of their purchases and shop to reflect these principles, Buycott provides users with transparency into the business practices of the companies marketing to them.”

Users can contact problematic companies through the app, using email, Facebook or Twitter. The app traces each product all the way back to its umbrella or parent company (which means the same few corporate giants are likely to show up on a few do-not-buy lists)….(More)”

Forging Trust Communities: How Technology Changes Politics


Book by Irene S. Wu: “Bloggers in India used social media and wikis to broadcast news and bring humanitarian aid to tsunami victims in South Asia. Terrorist groups like ISIS pour out messages and recruit new members on websites. The Internet is the new public square, bringing to politics a platform on which to create community at both the grassroots and bureaucratic level. Drawing on historical and contemporary case studies from more than ten countries, Irene S. Wu’s Forging Trust Communities argues that the Internet, and the technologies that predate it, catalyze political change by creating new opportunities for cooperation. The Internet does not simply enable faster and easier communication, but makes it possible for people around the world to interact closely, reciprocate favors, and build trust. The information and ideas exchanged by members of these cooperative communities become key sources of political power akin to military might and economic strength.

Wu illustrates the rich world history of citizens and leaders exercising political power through communications technology. People in nineteenth-century China, for example, used the telegraph and newspapers to mobilize against the emperor. In 1970, Taiwanese cable television gave voice to a political opposition demanding democracy. Both Qatar (in the 1990s) and Great Britain (in the 1930s) relied on public broadcasters to enhance their influence abroad. Additional case studies from Brazil, Egypt, the United States, Russia, India, the Philippines, and Tunisia reveal how various technologies function to create new political energy, enabling activists to challenge institutions while allowing governments to increase their power at home and abroad.

Forging Trust Communities demonstrates that the way people receive and share information through network communities reveals as much about their political identity as their socioeconomic class, ethnicity, or religion. Scholars and students in political science, public administration, international studies, sociology, and the history of science and technology will find this to be an insightful and indispensable work….(More)”