Twiplomacy Study 2016


Executive Summary: “Social media has become diplomacy’s significant other. It has gone from being an afterthought to being the very first thought of world leaders and governments across the globe, as audiences flock to their newsfeeds for the latest news. This recent worldwide embrace of online channels has brought with it a wave of openness and transparency that has never been experienced before. Social media provides a platform for unconditional communication, and has become a communicator’s most powerful tool. Twitter, in particular, has even become a diplomatic ‘barometer’, a tool used to analyze and forecast international relations.

There is a vast array of social networks for government communicators to choose from. While some governments and foreign ministries still ponder the pros and cons of any social media engagement, others have gone beyond Twitter, Facebook and Instagram to reach their target audiences, even embracing emerging platforms such as Snapchat, WhatsApp and Telegram where communications are under the radar and almost impossible to track.

Burson-Marsteller’s 2016 Twiplomacy study has been expanded to include other social media platforms such as Facebook, Instagram and YouTube, as well as more niche digital diplomacy platforms such as Snapchat, LinkedIn, Google+, Periscope and Vine.

There is a growing digital divide between governments that are active on social media with dedicated teams and those that see digital engagement as an afterthought and so devote few resources to it. There is still a small number of government leaders who refuse to embrace the new digital world and, for these few, their community managers struggle to bring their organizations into the digital century.

Over the past year, the most popular world leaders on social media have continued to increase their audiences, while new leaders have emerged in the Twittersphere. Argentina’s Mauricio Macri, Canada’s Justin Trudeau and U.S. President Barack Obama have all made a significant impact on Twitter and Facebook over the past year.

Obama’s social media communication has become even more personal through his @POTUS Twitter account and Facebook page, and the first “president of the social media age” will leave the White House in January 2017 with an incredible 137 million fans, followers and subscribers. Beyond merely Twitter and Facebook, world leaders such as the Argentinian President have also become active on new channels like Snapchat to reach a younger audience and potential future voters. Similarly, a number of governments, mainly in Latin America, have started to use Periscope, a cost-effective medium to live-stream their press conferences.

We have witnessed occasional public interactions between leaders, namely the friendly fighting talk between the Obamas, the Queen of England and Canada’s Justin Trudeau. Foreign ministries continue to expand their diplomatic and digital networks by following each other and creating coalitions on specific topics, in particular the fight against ISIS….

A number of world leaders, including the President of Colombia and Australia’s Julie Bishop, also use emojis to brighten up their tweets, creating what can be described as a new diplomatic sign language. The Foreign Ministry in Finland has even produced its own set of 49 emoticons depicting summer and winter in the Nordic country.

We asked a number of digital leaders of some of the best connected foreign ministries and governments to share their thoughts on their preferred social media channel and examples of their best campaigns on our blog. You will learn:

Here is our list of the #Twiplomacy Top Twenty Twitterati in 2016….(More)”

Civic Media: Technology, Design, Practice


Book edited by Eric Gordon and Paul Mihailidis: “Countless people around the world harness the affordances of digital media to enable democratic participation, coordinate disaster relief, campaign for policy change, and strengthen local advocacy groups. The world watched as activists used social media to organize protests during the Arab Spring, Occupy Wall Street, and Hong Kong’s Umbrella Revolution. Many governmental and community organizations changed their mission and function as they adopted new digital tools and practices. This book examines the use of “civic media”—the technologies, designs, and practices that support connection through common purpose in civic, political, and social life. Scholars from a range of disciplines and practitioners from a variety of organizations offer analyses and case studies that explore the theory and practice of civic media.
The contributors set out the conceptual context for the intersection of civic and media; examine the pressure to innovate and the sustainability of innovation; explore play as a template for resistance; look at civic education; discuss media-enabled activism in communities; and consider methods and funding for civic media research. The case studies that round out each section range from a “debt resistance” movement to government service delivery ratings to the “It Gets Better” campaign aimed at combating suicide among lesbian, gay, bisexual, transgender, and queer youth. The book offers a valuable interdisciplinary dialogue on the challenges and opportunities of the increasingly influential space of civic media….(More)”

Smart crowds in smart cities: real life, city scale deployments of a smartphone based participatory crowd management platform


Tobias Franke, Paul Lukowicz and Ulf Blanke at the Journal of Internet Services and Applications: “Pedestrian crowds are an integral part of cities. Planning for crowds, monitoring crowds and managing crowds are fundamental tasks in city management. As a consequence, crowd management is a sprawling R&D area (see related work) that includes theoretical models, simulation tools, as well as various support systems. There has also been significant interest in using computer vision techniques to monitor crowds. Overall, however, the topic of crowd management has been given only little attention within the smart city domain. In this paper we report on a platform for smart, city-wide crowd management based on a participatory mobile phone sensing platform. Originally, the apps based on this platform were conceived as a technology validation tool for crowd-based sensing within a basic research project. However, the initial deployments at the Notte Bianca Festival in Malta and at the Lord Mayor’s Show in London generated so much interest within the civil protection community that it has gradually evolved into a full-blown participatory crowd management system and is now in the process of being commercialized through a startup company. To date, it has been deployed at 14 events in three European countries (UK, Netherlands, Switzerland) and used by well over 100,000 people….
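To make the participatory-sensing idea concrete, a minimal sketch (hypothetical, and not the authors' actual pipeline) could bin anonymised location reports from a companion app into coarse grid cells to approximate crowd density:

```python
from collections import Counter

# Hypothetical sketch: approximate crowd density from anonymised GPS reports
# sent by a participatory sensing app. Grid size and data are illustrative;
# this is not the deployed platform's actual pipeline.

GRID_SIZE = 0.0005  # degrees; roughly 50 m at mid-latitudes

def to_cell(lat, lon, size=GRID_SIZE):
    """Map a coordinate to a coarse grid cell (simple spatial binning)."""
    return (round(lat / size), round(lon / size))

def density_map(reports):
    """Count distinct users per grid cell; each report is (user_id, lat, lon)."""
    seen = set()
    cells = Counter()
    for user_id, lat, lon in reports:
        cell = to_cell(lat, lon)
        if (user_id, cell) not in seen:   # count each user once per cell
            seen.add((user_id, cell))
            cells[cell] += 1
    return cells

reports = [("u1", 51.5079, -0.0877), ("u2", 51.5080, -0.0876), ("u3", 51.5200, -0.1000)]
for cell, count in density_map(reports).most_common():
    print(cell, count)
```

Counting distinct devices per cell, rather than raw reports, keeps a single frequently reporting phone from inflating the estimate.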

Obtaining knowledge about the current size and density of a crowd is one of the central aspects of crowd monitoring. For the last few decades, automatic crowd monitoring in urban areas has mainly been performed by means of image processing. One use case for such video-based applications is a CCTV camera-based system that automatically alerts the staff of subway stations when the waiting platform is congested. However, one of the downsides of video-based crowd monitoring is the fact that video cameras tend to be considered privacy invading. Privacy-preserving approaches to video-based crowd monitoring have therefore been proposed, in which crowd sizes are estimated without people models or object tracking.

With respect to mitigating catastrophes induced by panicking crowds (e.g. during an evacuation), city planners and architects increasingly rely on tools that simulate crowd behavior in order to optimize infrastructures. Murakami et al. present an agent-based simulation for evacuation scenarios. Shendarkar et al. present a work also based on BDI (belief, desire, intention) agents; those agents, however, are trained in a virtual-reality environment, thereby giving greater flexibility to the modeling. Kluepfel et al., on the other hand, use a cellular automaton model for the simulation of crowd movement and egress behavior.

With smartphones becoming everyday items, the concept of crowdsourcing information from users of mobile applications has significantly gained traction. Roitman et al. present a smart city system where the crowd can send eyewitness reports, thereby creating deeper insights for city officials. Szabo et al. take this approach one step further and employ the sensors built into smartphones to gather data for city services such as live transit information. Ghose et al. utilize the same principle for gathering information on road conditions. Pan et al. use a combination of crowdsourcing and social media analysis for identifying traffic anomalies….(More)”.

Reining in the Big Promise of Big Data: Transparency, Inequality, and New Regulatory Frontiers


Paper by Philipp Hacker and Bilyana Petkova: “The growing differentiation of services based on Big Data harbors the potential for both greater societal inequality and for greater equality. Anti-discrimination law and transparency alone, however, cannot do the job of curbing Big Data’s negative externalities while fostering its positive effects.

To rein in Big Data’s potential, we adapt regulatory strategies from behavioral economics, contracts and criminal law theory. Four instruments stand out: First, active choice may be mandated between data collecting services (paid by data) and data free services (paid by money). Our suggestion provides concrete estimates for the price range of a data free option, sheds new light on the monetization of data collecting services, and proposes an “inverse predatory pricing” instrument to limit excessive pricing of the data free option. Second, we propose using the doctrine of unconscionability to prevent contracts that unreasonably favor data collecting companies. Third, we suggest democratizing data collection by regular user surveys and data compliance officers partially elected by users. Finally, we trace back new Big Data personalization techniques to the old Hartian precept of treating like cases alike and different cases – differently. If it is true that a speeding ticket over $50 is less of a disutility for a millionaire than for a welfare recipient, the income and wealth-responsive fines powered by Big Data that we suggest offer a glimpse into the future of the mitigation of economic and legal inequality by personalized law. Throughout these different strategies, we show how salience of data collection can be coupled with attempts to prevent discrimination against and exploitation of users. Finally, we discuss all four proposals in the context of different test cases: social media, student education software and credit and cell phone markets.
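As a back-of-the-envelope illustration of the income-responsive fines just mentioned, a day-fine-style rule simply scales the base penalty by the offender's income; the sketch below uses entirely hypothetical numbers and a hypothetical scaling rule, and is not the paper's concrete proposal.

```python
# Illustrative day-fine-style sketch of an income-responsive penalty.
# Scaling rule, reference income, floor and cap are all hypothetical,
# not the paper's concrete proposal.

def income_scaled_fine(base_fine, offender_income,
                       reference_income=40_000, floor=25.0, cap=10_000.0):
    """Scale a base fine by the ratio of offender income to a reference income."""
    scaled = base_fine * (offender_income / reference_income)
    return max(floor, min(cap, scaled))

print(income_scaled_fine(50, 20_000))     # 25.0   -- lower-income driver pays less
print(income_scaled_fine(50, 1_000_000))  # 1250.0 -- a millionaire pays far more
```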

Many more examples could and should be discussed. In the face of increasing unease about the asymmetry of power between Big Data collectors and dispersed users, about differential legal treatment, and about the unprecedented dimensions of economic inequality, this paper proposes a new regulatory framework and research agenda to put the powerful engine of Big Data to the benefit of both the individual and societies adhering to basic notions of equality and non-discrimination….(More)”

Scientists Are Just as Confused About the Ethics of Big-Data Research as You


Sarah Zhang at Wired: “When a rogue researcher last week released 70,000 OkCupid profiles, complete with usernames and sexual preferences, people were pissed. When Facebook researchers manipulated stories appearing in Newsfeeds for a mood contagion study in 2014, people were really pissed. OkCupid filed a copyright claim to take down the dataset; the journal that published Facebook’s study issued an “expression of concern.” Outrage has a way of shaping ethical boundaries. We learn from mistakes.

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

Even fewer have computer science or security expertise, which may be necessary to protect participants in this new kind of research. “The IRB may make very different decisions based on who is on the board, what university it is, and what they’re feeling that day,” says Kelsey Finch, policy counsel at the Future of Privacy Forum. There are hundreds of these IRBs in the US—and they’re grappling with research ethics in the digital age largely on their own….

Or maybe other institutions, like the open science repositories asking researchers to share data, should be picking up the slack on ethical issues. “Someone needs to provide oversight, but the optimal body is unlikely to be an IRB, which usually lacks subject matter expertise in de-identification and re-identification techniques,” Michelle Meyer, a bioethicist at Mount Sinai, writes in an email.

Even among Internet researchers familiar with the power of big data, attitudes vary. When Katie Shilton, an information technology researcher at the University of Maryland, interviewed 20 online data researchers, she found “significant disagreement” over issues like the ethics of ignoring Terms of Service and obtaining informed consent. Surprisingly, the researchers also said that ethical review boards had never challenged the ethics of their work—but peer reviewers and colleagues had. Various groups like the Association of Internet Researchers and the Center for Applied Internet Data Analysis have issued guidelines, but the people who actually have power—those on institutional review boards—are only just catching up.

Outside of academia, companies like Microsoft have started to institute their own ethical review processes. In December, Finch at the Future of Privacy Forum organized a workshop called Beyond IRBs to consider processes for ethical review outside of federally funded research. After all, modern tech companies like Facebook, OkCupid, Snapchat and Netflix sit atop a trove of data 20th-century social scientists could only have dreamed of.

Of course, companies experiment on us all the time, whether it’s websites A/B testing headlines or grocery stores changing the configuration of their checkout line. But as these companies hire more data scientists out of PhD programs, academics are seeing an opportunity to bridge the divide and use that data to contribute to public knowledge. Maybe updated ethical guidelines can be forged out of those collaborations. Or it just might be a mess for a while….(More)”

Virtual memory: the race to save the information age


Review by Richard Ovenden in the Financial Times of:
You Could Look It Up: The Reference Shelf from Ancient Babylon to Wikipedia, by Jack Lynch, Bloomsbury, RRP£25/$30, 464 pages

When We Are No More: How Digital Memory Is Shaping Our Future, by Abbey Smith Rumsey, Bloomsbury, RRP£18.99/$28, 240 pages

Ctrl + Z: The Right to Be Forgotten, by Meg Leta Jones, NYU Press, RRP£20.99/$29.95, 284 pages

“…For millions of people, technological devices have become essential tools in keeping memories alive — to the point where it can feel as though events without an impression in silicon have somehow not been fully experienced. In under three decades, the web has expanded to contain more than a billion sites. Every day about 300m digital photographs, more than 100 terabytes’ worth, are uploaded to Facebook. An estimated 204m emails are sent every minute and, with 5bn mobile devices in existence, the generation of new content looks set to continue its rapid growth.

We celebrate this growth, and rightly. Today knowledge is created and consumed at a rate that would have been inconceivable a generation ago; instant access to the fruits of millennia of civilisation now seems like a natural state of affairs. Yet we overlook — at our peril — just how unstable and transient much of this information is. Amid the proliferation there is also constant decay: phenomena such as “bit rot” (the degradation of software programs over time), “data rot” (the deterioration of digital storage media) and “link rot” (web links pointing to online resources that have become permanently unavailable) can render information inaccessible. This affects everything from holiday photos and email correspondence to official records: to give just one example, a Harvard study published in 2013 found that 50 per cent of links in the US Supreme Court opinions website were broken.
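Link rot, at least, is straightforward to detect: a checker only has to ask whether each URL still resolves. A minimal sketch of such a check (standard-library Python only; the URLs below are placeholders):

```python
# Minimal link-rot checker sketch: flags URLs that no longer resolve or
# that return an error status. The URLs below are placeholders.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def is_rotten(url, timeout=10):
    """Return True if the link appears broken (HTTP error or unreachable)."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "link-check"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except (HTTPError, URLError, ValueError):
        return True

for url in ["https://example.com/", "https://example.com/this-page-is-gone"]:
    print(url, "broken" if is_rotten(url) else "ok")
```

A HEAD request keeps the check lightweight, though some servers reject HEAD and would need a follow-up GET.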

Are we creating a problem that future generations will not be able to solve? Could the early decades of the 21st century even come to seem, in the words of the internet pioneer Vint Cerf, like a “digital Dark Age”? Whether or not such fears are realised, it is becoming increasingly clear that the migration of knowledge to formats permitting rapid and low-cost copying and dissemination, but in which the base information cannot survive without complex and expensive intervention, requires that we choose, more actively than ever before, what to remember and what to forget….(More)”

Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary


In this book, Helen Kennedy argues that as social media data mining becomes more and more ordinary, as we post, mine and repeat, new data relations emerge. These new data relations are characterised by a widespread desire for numbers and the troubling consequences of this desire, and also by the possibility of doing good with data and resisting data power, by new and old concerns, and by instability and contradiction. Drawing on action research with public sector organisations, interviews with commercial social insights companies and their clients, focus groups with social media users and other research, Kennedy provides a fascinating and detailed account of living with social media data mining inside the organisations that make up the fabric of everyday life….(More)”

We know where you live


MIT News Office: “From location data alone, even low-tech snoopers can identify Twitter users’ homes, workplaces….Researchers at MIT and Oxford University have shown that the location stamps on just a handful of Twitter posts — as few as eight over the course of a single day — can be enough to disclose the addresses of the poster’s home and workplace to a relatively low-tech snooper.

The tweets themselves might be otherwise innocuous — links to funny videos, say, or comments on the news. The location information comes from geographic coordinates automatically associated with the tweets.

Twitter’s location-reporting service is off by default, but many Twitter users choose to activate it. The new study is part of a more general project at MIT’s Internet Policy Research Initiative to help raise awareness about just how much privacy people may be giving up when they use social media.

The researchers describe their research in a paper presented last week at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, where it received an honorable mention in the best-paper competition, a distinction reserved for only 4 percent of papers accepted to the conference.

“Many people have this idea that only machine-learning techniques can discover interesting patterns in location data,” says Ilaria Liccardi, a research scientist at MIT’s Internet Policy Research Initiative and first author on the paper. “And they feel secure that not everyone has the technical knowledge to do that. With this study, what we wanted to show is that when you send location data as a secondary piece of information, it is extremely simple for people with very little technical knowledge to find out where you work or live.”

Conclusions from clustering

In their study, Liccardi and her colleagues — Alfie Abdul-Rahman and Min Chen of Oxford’s e-Research Centre in the U.K. — used real tweets from Twitter users in the Boston area. The users consented to the use of their data, and they also confirmed their home and work addresses, their commuting routes, and the locations of various leisure destinations from which they had tweeted.

The time and location data associated with the tweets were then presented to a group of 45 study participants, who were asked to try to deduce whether the tweets had originated at the Twitter users’ homes, their workplaces, leisure destinations, or locations along their commutes. The participants were not recruited on the basis of any particular expertise in urban studies or the social sciences; they just drew what conclusions they could from location clustering….
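To see why so little expertise is needed, consider a rough sketch of the kind of heuristic a snooper could apply by hand or in a few lines of code: group geotagged posts into coarse location clusters, then label the cluster dominated by late-night posts “home” and the one dominated by weekday daytime posts “work.” This is an illustration of the general idea only, not the study's own procedure (participants there simply inspected maps and tables by eye).

```python
# Sketch of the kind of low-tech heuristic the study suggests is enough:
# group geotagged posts into coarse location clusters, then label the
# cluster dominated by late-night posts "home" and the one dominated by
# weekday daytime posts "work". An illustration of the general idea only,
# not the paper's own procedure.
from collections import defaultdict

def label_locations(posts, cell_size=0.001):
    """posts: list of (lat, lon, hour, weekday) with hour 0-23, weekday 0=Monday."""
    clusters = defaultdict(lambda: {"night": 0, "workday": 0})
    for lat, lon, hour, weekday in posts:
        cell = (round(lat / cell_size), round(lon / cell_size))
        if hour >= 22 or hour < 6:
            clusters[cell]["night"] += 1
        elif weekday < 5 and 9 <= hour < 18:
            clusters[cell]["workday"] += 1
    home = max(clusters, key=lambda c: clusters[c]["night"], default=None)
    work = max(clusters, key=lambda c: clusters[c]["workday"], default=None)
    return home, work

posts = [(42.3601, -71.0589, 23, 2), (42.3601, -71.0589, 1, 5),
         (42.3736, -71.1097, 10, 1), (42.3736, -71.1097, 14, 3)]
print(label_locations(posts))   # -> (home cell, work cell)
```

With as few as eight geotagged posts in a day — the threshold the study mentions — such clusters already stand out.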

Predictably, participants fared better with map-based representations, correctly identifying Twitter users’ homes roughly 65 percent of the time and their workplaces at closer to 70 percent. Even the tabular representation was informative, however, with accuracy rates of just under 50 percent for homes and a surprisingly high 70 percent for workplaces….(More; Full paper)”

Robot Regulators Could Eliminate Human Error


From an op-ed in the San Francisco Chronicle and Regblog: “Long a fixture of science fiction, artificial intelligence is now part of our daily lives, even if we do not realize it. Through the use of sophisticated machine learning algorithms, for example, computers now work to filter out spam messages automatically from our email. Algorithms also identify us by our photos on Facebook, match us with new friends on online dating sites, and suggest movies to watch on Netflix.

These uses of artificial intelligence hardly seem very troublesome. But should we worry if government agencies start to use machine learning?

Complaints abound even today about the uncaring “bureaucratic machinery” of government. Yet seeing how machine learning is starting to replace jobs in the private sector, we can easily fathom a literal machinery of government in which decisions made by human public servants increasingly become made by machines.

Technologists warn of an impending “singularity,” when artificial intelligence surpasses human intelligence. Entrepreneur Elon Musk cautions that artificial intelligence poses one of our “biggest existential threats.” Renowned physicist Stephen Hawking eerily forecasts that artificial intelligence might even “spell the end of the human race.”

Are we ready for a world of regulation by robot? Such a world is closer than we think—and it could actually be worth welcoming.

Already government agencies rely on machine learning for a variety of routine functions. The Postal Service uses learning algorithms to sort mail, and cities such as Los Angeles use them to time their traffic lights. But while uses like these seem relatively benign, consider that machine learning could also be used to make more consequential decisions. Disability claims might one day be processed automatically with the aid of artificial intelligence. Licenses could be awarded to airplane pilots based on what kinds of safety risks complex algorithms predict each applicant poses.

Learning algorithms are already being explored by the Environmental Protection Agency to help make regulatory decisions about which toxic chemicals to control. Faced with tens of thousands of new chemicals that could potentially be harmful to human health, federal regulators have supported the development of a program to prioritize which of the many chemicals in production should undergo more in-depth testing. By some estimates, machine learning could save the EPA up to $980,000 per toxic chemical positively identified.
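In outline, such a prioritization program is a ranking exercise: train a classifier on chemicals whose toxicity is already established, then sort untested chemicals by predicted risk. The sketch below illustrates that pattern with synthetic data and placeholder feature names; it is not the EPA's actual screening model.

```python
# Hedged sketch of prioritizing chemicals for in-depth testing by predicted
# toxicity risk. Features, labels and chemical names are made up; this is
# not the EPA's actual screening model.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-chemical features, e.g. assay read-outs or structural descriptors.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.5).astype(int)  # toy "toxic" label

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

candidates = {"chem_A": rng.normal(size=5), "chem_B": rng.normal(size=5),
              "chem_C": rng.normal(size=5)}
scores = {name: model.predict_proba(feats.reshape(1, -1))[0, 1]
          for name, feats in candidates.items()}
for name, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: predicted toxicity risk {p:.2f}")
```

The ranking, not the raw classification, is what matters here: regulators only need to know which chemicals to test first.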

It’s not hard then to imagine a day in which even more regulatory decisions are automated. Researchers have shown that machine learning can lead to better outcomes when determining whether parolees ought to be released or domestic violence orders should be imposed. Could the imposition of regulatory fines one day be determined by a computer instead of a human inspector or judge? Quite possibly so, and this would be a good thing if machine learning could improve accuracy, eliminate bias and prejudice, and reduce human error, all while saving money.

But can we trust a government that bungled the initial rollout of Healthcare.gov to deploy artificial intelligence responsibly? In some circumstances we should….(More)”

Big data’s ‘streetlight effect’: where and how we look affects what we see


From The Conversation: “Big data offers us a window on the world. But large and easily available datasets may not show us the world we live in. For instance, epidemiological models of the recent Ebola epidemic in West Africa using big data consistently overestimated the risk of the disease’s spread and underestimated the local initiatives that played a critical role in controlling the outbreak.

Researchers are rightly excited about the possibilities offered by the availability of enormous amounts of computerized data. But there’s reason to stand back for a minute to consider what exactly this treasure trove of information really offers. Ethnographers like me use a cross-cultural approach when we collect our data because family, marriage and household mean different things in different contexts. This approach informs how I think about big data.

We’ve all heard the joke about the drunk who is asked why he is searching for his lost wallet under the streetlight, rather than where he thinks he dropped it. “Because the light is better here,” he said.

This “streetlight effect” is the tendency of researchers to study what is easy to study. I use this story in my course on Research Design and Ethnographic Methods to explain why so much research on disparities in educational outcomes is done in classrooms and not in students’ homes. Children are much easier to study at school than in their homes, even though many studies show that knowing what happens outside the classroom is important. Nevertheless, schools will continue to be the focus of most research because they generate big data and homes don’t.

The streetlight effect is one factor that prevents big data studies from being useful in the real world – especially studies analyzing easily available user-generated data from the Internet. Researchers assume that this data offers a window into reality. It doesn’t necessarily.

Looking at WEIRDOs

Based on the number of tweets following Hurricane Sandy, for example, it might seem as if the storm hit Manhattan the hardest, not the New Jersey shore. Another example: the since-retired Google Flu Trends, which in 2013 tracked online searches relating to flu symptoms to predict doctor visits, but gave estimates twice as high as reports from the Centers for Disease Control and Prevention. Without checking facts on the ground, researchers may fool themselves into thinking that their big data models accurately represent the world they aim to study.
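At its core, the Flu Trends approach fit a statistical model from search-query volumes to an official illness indicator and then extrapolated from new queries. The toy sketch below (entirely synthetic numbers, far simpler than Google's actual model) shows that basic pattern and why it inherits whatever biases the query data carry:

```python
# Toy sketch of the Flu-Trends-style idea: regress an official illness
# indicator on search-query volume, then predict from new query counts.
# All numbers are synthetic; Google's actual model was far more elaborate.
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(20)
query_volume = 100 + 10 * np.sin(weeks / 3) + rng.normal(0, 2, size=20)
cdc_visit_rate = 0.02 * query_volume + 0.5 + rng.normal(0, 0.05, size=20)

# Ordinary least squares: visit_rate ≈ a * query_volume + b
a, b = np.polyfit(query_volume, cdc_visit_rate, deg=1)

next_week_queries = 115.0   # e.g. a spike driven by news coverage, not illness
print(f"predicted visit rate: {a * next_week_queries + b:.3f}")
```

If query volume spikes for reasons unrelated to illness — media coverage, say — the fitted model dutifully predicts a surge in doctor visits, which is precisely the failure mode described above.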

The problem is similar to the “WEIRD” issue in many research studies. Harvard professor Joseph Henrich and colleagues have shown that findings based on research conducted with undergraduates at American universities – whom they describe as “some of the most psychologically unusual people on Earth” – apply only to that population and cannot be used to make any claims about other human populations, including other Americans. Unlike the typical research subject in psychology studies, they argue, most people in the world are not from Western, Educated, Industrialized, Rich and Democratic societies, i.e., WEIRD.

Twitter users are also atypical compared with the rest of humanity, giving rise to what our postdoctoral researcher Sarah Laborde has dubbed the “WEIRDO” problem of data analytics: most people are not Western, Educated, Industrialized, Rich, Democratic and Online.

Context is critical

Understanding the differences between the vast majority of humanity and that small subset of people whose activities are captured in big data sets is critical to correct analysis of the data. Considering the context and meaning of data – not just the data itself – is a key feature of ethnographic research, argues Michael Agar, who has written extensively about how ethnographers come to understand the world….(More: https://theconversation.com/big-datas-streetlight-effect-where-and-how-we-look-affects-what-we-see-58122)”