Not everyone in advanced economies is using social media


 at Pew: “Despite the seeming ubiquity of social media platforms like Facebook and Twitter, many in Europe, the U.S., Canada, Australia and Japan do not report regularly visiting social media sites. But majorities in all of the 14 countries surveyed say they at least use the internet.

Social media use is relatively common among people in Sweden, the Netherlands, Australia and the U.S. Around seven-in-ten report using social networking sites like Facebook and Twitter, but that still leaves a significant minority of the population in those countries (around 30%) who are non-users.

At the other end of the spectrum, in France, only 48% say they use social networking sites. That figure is even lower in Greece (46%), Japan (43%) and Germany (37%). In Germany, this means that more than half of internet users say they do not use social media. 

The differences in reported social media use across the 14 countries are due in part to whether people use the internet, since low rates of internet access limit the potential social media audience. While fewer than one-in-ten Dutch (5%), Swedes (7%) and Australians (7%) don’t access the internet or own a smartphone, that figure is 40% in Greece, 33% in Hungary and 29% in Italy.

However, internet access doesn’t guarantee social media use. In Germany, for example, 85% of adults are online, but less than half of this group report using Facebook, Twitter or Xing. A similar pattern is seen in some of the other developed economies polled, including Japan and France, where social media use is low relative to overall internet penetration….(More)
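
As a quick consistency check on the German figures quoted above (assuming the 37% social media share and the 85% internet-use share refer to the same adult population), the implied share of internet users who use social media comes out to roughly 44 percent, which matches the "less than half" in the excerpt:

```python
# Rough cross-check of the German figures quoted above, assuming both shares
# are measured over the same adult population.
social_media_share_of_adults = 0.37  # adults reporting social media use
online_share_of_adults = 0.85        # adults who are online

share_of_internet_users = social_media_share_of_adults / online_share_of_adults
print(f"{share_of_internet_users:.0%} of German internet users use social media")  # ~44%
```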

The U.S. Federal AI Personal Assistant Pilot


/AI-Assistant-Pilot: “Welcome to GSA’s Emerging Citizen Technology program’s pilot for the effective, efficient and accountable introduction and benchmarking of public service information integration into consumer-available AI Personal Assistants (IPAs), including Amazon Alexa, Google Assistant, Microsoft Cortana, and Facebook Messenger’s chatbot service — and, in the process, for laying a strong foundation for opening our programs to self-service in the home, on mobile devices, in automobiles and beyond.

This pilot will require rapid development and will result in public service concepts reviewed by the platforms of your choosing, as well as the creation of a new field of shared resources and recommendations that any organization can use to deliver our program data into these emerging services.
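
The excerpt above does not prescribe an implementation, but the basic pattern for delivering program data into these assistant platforms is a small intent handler: the platform recognizes what the user asked for, and the handler maps that intent to an answer drawn from the agency's data. The sketch below is a hypothetical, minimal Alexa-style handler in Python; the intent names, answer text and data source are illustrative placeholders and are not part of the GSA pilot.

```python
# Hypothetical sketch of how a federal program might expose public service
# information to an Alexa-style assistant: a single Lambda-style handler that
# maps a recognized intent to a plain-text spoken answer. Intent names and
# answer text are illustrative placeholders, not part of the GSA pilot.

PROGRAM_ANSWERS = {
    "GetOfficeHoursIntent": "Most service centers are open weekdays, 8 a.m. to 5 p.m. local time.",
    "GetApplicationStatusIntent": "You can check the status of your application on the program's website.",
}

def handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent_name = request.get("intent", {}).get("name")
        text = PROGRAM_ANSWERS.get(intent_name, "Sorry, I don't have information on that yet.")
    else:
        text = "Welcome. Ask me about a federal public service."
    # Minimal Alexa Skills Kit-style response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

The same handler shape (intent in, short structured answer out) carries over to Google Assistant, Cortana or a Messenger chatbot, which is what makes a shared set of cross-platform resources and recommendations plausible.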

Principles

The demand for more automated, self-service access to United States public services, when and where citizens need them, grows each day—and so do advances in consumer technologies like Intelligent Personal Assistants designed to meet those challenges.

The U.S. General Services Administration’s (GSA) Emerging Citizen Technology program, part of the Technology Transformation Service’s Innovation Portfolio, launched an open-source pilot to help dozens of federal programs make public service information available to consumer Intelligent Personal Assistants (IPAs) for the home and office, such as Amazon Alexa, Microsoft Cortana, Google Assistant, and Facebook Messenger.

These same services that help power our homes today will empower the self-driving cars of tomorrow, fuel the Internet of Things, and more. As such, the Emerging Citizen Technology program is working with federal agencies to prepare a solid understanding of the business cases and impact of these advances.

From privacy, security, accessibility, and performance to how citizens can benefit from more efficient and open access to federal services, the program is working with federal agencies to consider all aspects of its implementation. Additionally, because the program shares its work openly with private-sector innovators, small businesses, and new entrants to the field, the tech industry will gain increased transparency into working with the federal government….(More)”.

Need an improved solution to a development challenge? Consider collaborative design


Michelle Marshall at the Inter-American Development Bank: “The challenges faced in the development and public policy arenas are often complex in nature. Devising relevant, practical, and innovative solutions requires intensive research, analysis and expertise from multiple sectors. Could there be a way to streamline this process and also make it more inclusive?

Collaborative Design, like other open innovation methodologies, leverages the power of a group for collective problem-solving. In particular, it is a process that virtually convenes a diverse group of specialists to support the iterative development of an intervention.

Last year, the Inter-American Development Bank and New York University’s Governance Lab hosted an initiative called “Smarter Crowdsourcing for Zika”, which brought together health specialists with experts in social media, predictive analytics, and water and sanitation during a series of online sessions to generate innovative responses to the Zika epidemic. Based on this experience, we have considered how to continue applying a similar collaboration-based approach to additional projects in different areas. The result is what we call a “Collaborative Design” approach.

Implementing a Collaborative Design approach along the course of a project can help to achieve the following:

1. Convert knowledge gaps into opportunities…
2. Expand your community of practice across sectors…
3. Identify innovative and practical solutions…

As promising ideas are identified, Collaborative Design requires documenting possible solutions within the framework of an implementation plan, protocol, or other actionable guideline to support their subsequent real-life application. This will help substantiate the most viable interventions that were previously unmapped and also prepare additional practical resources for other project teams in the future.

For instance, the results of the Zika Smarter Crowdsourcing initiative were structured with information related to the costs and timelines to facilitate their implementation in different local contexts….(More)”

These Refugees Created Their Own Aid Agency Within Their Resettlement Camp


Michael Thomas at FastCompany: “…“In the refugee camps, we have two things: people and time,” Jackl explained. He and his friends decided that they would organize people to improve the camp. The idea was to solve two problems at once: Give refugees purpose, and make life in the camp better for everyone….

It began with repurposing shipping material. The men noticed that every day, dozens of shipments of food, medicine, and other aid came to their camp. But once the supplies were unloaded, aid workers would throw the pallets away. Meanwhile, people were sleeping in tents that would flood when it rained. So Jackl led an effort to break the pallets down and use the wood to create platforms on which the tents could sit.

Shortly afterwards, they used scrap wood and torn pieces of fabric to build a school, and eventually found a refugee who was a teacher to lead classes. The philosophy was simple and powerful: Use resources that would otherwise go to waste to improve life in their camp. As word spread of their work on social media, Jackl began to receive offers from people who wanted to donate money to his then unofficial cause. “All these people began asking me ‘What can I do? Can I give you money?’ And I’d tell them, ‘Give me materials,’” he said.

“People think that refugees are weak. But they survived war, smugglers, and the camps,” Jackl explains. His mission is to change the refugee image from one of weakness to one of resilience and strength. Core to that is the idea that refugees can help one another instead of relying on aid workers and NGOs, a philosophy that he adopted from an NGO called Jafra that he worked for in Syria…(More)”

Ten simple rules for responsible big data research


Matthew Zook et al in PLOS Computational Biology: “The use of big data research methods has grown tremendously over the past five years in both academia and industry. As the size and complexity of available datasets has grown, so too have the ethical questions raised by big data research. These questions become increasingly urgent as data and research agendas move well beyond those typical of the computational and natural sciences, to more directly address sensitive aspects of human behavior, interaction, and health. The tools of big data research are increasingly woven into our daily lives, including mining digital medical records for scientific and economic insights, mapping relationships via social media, capturing individuals’ speech and action via sensors, tracking movement across space, shaping police and security policy via “predictive policing,” and much more.

The beneficial possibilities for big data in science and industry are tempered by new challenges facing researchers that often lie outside their training and comfort zone. Social scientists now grapple with data structures and cloud computing, while computer scientists must contend with human subject protocols and institutional review boards (IRBs). While the connection between an individual datum and actual human beings can appear quite abstract, the scope, scale, and complexity of many forms of big data create a rich ecosystem in which human participants and their communities are deeply embedded and susceptible to harm. This complexity challenges any normative set of rules and makes devising universal guidelines difficult.

Nevertheless, the need for direction in responsible big data research is evident, and this article provides a set of “ten simple rules” for addressing the complex ethical issues that will inevitably arise. Modeled on PLOS Computational Biology’s ongoing collection of rules, the recommendations we outline involve more nuance than the words “simple” and “rules” suggest. This nuance is inevitably tied to our paper’s starting premise: all big data research on social, medical, psychological, and economic phenomena engages with human subjects, and researchers have the ethical responsibility to minimize potential harm….

  1. Acknowledge that data are people and can do harm
  2. Recognize that privacy is more than a binary value
  3. Guard against the reidentification of your data (see the sketch after this list)
  4. Practice ethical data sharing
  5. Consider the strengths and limitations of your data; big does not automatically mean better
  6. Debate the tough, ethical choices
  7. Develop a code of conduct for your organization, research community, or industry
  8. Design your data and systems for auditability
  9. Engage with the broader consequences of data and analysis practices
  10. Know when to break these rules…(More)”
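
Most of these rules are matters of judgment, but rule 3 translates directly into a routine check. A minimal sketch, assuming tabular records and a hand-picked set of quasi-identifiers (the field names and values below are invented for illustration): count how many records share each combination of quasi-identifier values, because combinations held by only one or a few records are prime reidentification targets.

```python
from collections import Counter

def smallest_group_size(records, quasi_identifiers):
    """Return the size of the smallest group of records sharing the same
    combination of quasi-identifier values (a rough k-anonymity measure)."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())

# Illustrative example: age band, ZIP prefix and gender as quasi-identifiers.
records = [
    {"age_band": "30-39", "zip3": "941", "gender": "F", "diagnosis": "A"},
    {"age_band": "30-39", "zip3": "941", "gender": "F", "diagnosis": "B"},
    {"age_band": "40-49", "zip3": "100", "gender": "M", "diagnosis": "A"},
]

k = smallest_group_size(records, ["age_band", "zip3", "gender"])
print(f"k = {k}")  # k = 1 here: the lone 40-49/100/M record is re-identifiable
```

A full k-anonymity or differential-privacy treatment goes well beyond this, but even a crude check like this surfaces the records most at risk before data are shared (rule 4).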

What Algorithms Want


Book by Ed Finn: “We depend on—we believe in—algorithms to help us get a ride, choose which book to buy, execute a mathematical proof. It’s as if we think of code as a magic spell, an incantation to reveal what we need to know and even what we want. Humans have always believed that certain invocations—the marriage vow, the shaman’s curse—do not merely describe the world but make it. Computation casts a cultural shadow that is shaped by this long tradition of magical thinking. In this book, Ed Finn considers how the algorithm—in practical terms, “a method for solving a problem”—has its roots not only in mathematical logic but also in cybernetics, philosophy, and magical thinking.

Finn argues that the algorithm deploys concepts from the idealized space of computation in a messy reality, with unpredictable and sometimes fascinating results. Drawing on sources that range from Neal Stephenson’s Snow Crash to Diderot’s Encyclopédie, from Adam Smith to the Star Trek computer, Finn explores the gap between theoretical ideas and pragmatic instructions. He examines the development of intelligent assistants like Siri, the rise of algorithmic aesthetics at Netflix, Ian Bogost’s satiric Facebook game Cow Clicker, and the revolutionary economics of Bitcoin. He describes Google’s goal of anticipating our questions, Uber’s cartoon maps and black box accounting, and what Facebook tells us about programmable value, among other things.

If we want to understand the gap between abstraction and messy reality, Finn argues, we need to build a model of “algorithmic reading” and scholarship that attends to process, spearheading a new experimental humanities….(More)”

Will Computer Science become a Social Science?


Paper by Ingo Scholtes, Markus Strohmaier and Frank Schweitzer: “When Tay – a Twitter chatbot developed by Microsoft – was activated this March, the company was taken by surprise by what Tay had become. Within less than 24 hours of conversation with Twitter users, Tay had learned to make racist, anti-Semitic and misogynistic statements that raised eyebrows in the Twitter community and beyond. What had happened? While Microsoft certainly tested the chatbot before release, planning for the reactions and the social environment in which it was deployed proved tremendously difficult. Yet, the Tay Twitter chatbot incident is just one example of the many challenges which arise when embedding algorithms and computing systems into an ever-increasing spectrum of social systems. In this viewpoint we argue that, due to the resulting feedback loops by which computing technologies impact social behavior and social behavior feeds back on (learning) computing systems, we face the risk of losing control over the systems that we engineer. The result is unintended consequences that affect both the technical and social dimensions of computing systems, and which computer science is currently not well prepared to address. Highlighting exemplary challenges in core areas like (1) algorithm design, (2) cyber-physical systems, and (3) software engineering, we argue that social aspects must be turned into first-class citizens of our system models. We further highlight that the social sciences, in particular the interdisciplinary field of Computational Social Science [1], provide us with means to quantitatively analyze, model and predict human behavior. As such, a closer integration between computer science and the social sciences not only provides social scientists with new ways to understand social phenomena. It also helps us to regain control over the systems that we engineer….(More)”
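
The feedback loop the authors describe, in which the system's output shapes user behavior and that behavior is the only signal the system learns from, is easy to reproduce in miniature. The toy simulation below is purely illustrative (it is not from the paper): a "recommender" that allocates exposure according to its own learned scores never discovers that users actually like both items equally, whereas the same learner with exposure held even does.

```python
# Toy illustration (not from the paper) of an algorithmic feedback loop:
# a system that learns only from responses to its own output cannot correct
# an initial bias, because users only react to what they are shown.

def run(steps, proportional_exposure, scores=(1.0, 1.1), appeal=(0.5, 0.5)):
    a, b = scores
    for _ in range(steps):
        total = a + b
        # Exposure either follows the model's current scores (feedback loop)
        # or is split evenly (no feedback loop).
        exp_a, exp_b = (a / total, b / total) if proportional_exposure else (0.5, 0.5)
        # Users like both items equally; their clicks feed back into the scores.
        a += appeal[0] * exp_a
        b += appeal[1] * exp_b
    return b / (a + b)

print(f"with feedback loop:    item_b share = {run(1000, True):.3f}")   # stays ~0.524
print(f"without feedback loop: item_b share = {run(1000, False):.3f}")  # drifts toward 0.500
```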

Confused by data visualisation? Here’s how to cope in a world of many features


 in The Conversation: “The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

The first tools that come to mind in considering how to best communicate data – especially statistics – are graphs and scatter plots. These simple visuals help us understand elementary causes and consequences, trends and so on. They are invaluable and have an important role in disseminating knowledge.

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Everyone has the potential to better explore data sets and provide more thorough, yet simple, representations of facts. But how do we do this when faced with daunting levels of complex data?

A world of too many features

We can start by breaking the data down. Any data set consists of two main elements: samples and features. The former correspond to individual elements in a group; the latter are the characteristics they share….
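
In concrete terms (an illustrative sketch, not from the article), a tabular data set makes the distinction visible: each row is a sample and each column is a feature.

```python
import pandas as pd

# Illustrative only: each row is a sample (an individual element of the group),
# each column is a feature (a characteristic the samples share).
data = pd.DataFrame(
    {
        "age": [34, 29, 41],                 # feature
        "height_cm": [170, 182, 165],        # feature
        "daily_steps": [8200, 4300, 10100],  # feature
    },
    index=["person_1", "person_2", "person_3"],  # samples
)
print(data.shape)  # (3, 3): three samples, three features
```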

Venturing into network analysis is easier than undertaking dimensionality reduction, since it usually does not require a high level of programming skill. Widely available, user-friendly software and tutorials allow people new to data visualisation to explore several aspects of network science.
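
To give a flavour of what an entry-level network analysis looks like, here is a minimal sketch using the networkx Python library; the toy friendship graph is invented purely for illustration.

```python
import networkx as nx

# Minimal network-analysis example: build a toy friendship graph and rank
# people by degree centrality (how many of the others each person touches).
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Carla"), ("Ana", "David"),
    ("Ben", "Carla"), ("David", "Elena"),
])

centrality = nx.degree_centrality(G)  # degree divided by (number of nodes - 1)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
# Ana comes out on top: she is directly connected to 3 of the 4 others.
```

The same analysis can be done in point-and-click tools without any code, which is what makes network analysis such an accessible entry point.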

The world of data visualisation is vast and it goes way beyond what has been introduced here, but those who actually reap its benefits, garnering new insights and becoming agents of positive and efficient change, are few. In an age of overwhelming information, knowing how to communicate data can make a difference – and it can help keep data’s relevance in check…(More)”

Prediction and Inference from Social Networks and Social Media


Book edited by Jalal Kawash, Nitin Agarwal and Tansel Özyer: “This book addresses the challenges of social network and social media analysis in terms of prediction and inference. The chapters collected here tackle these issues by proposing new analysis methods and by examining mining methods for the vast amount of social content produced. Social Networks (SNs) have become an integral part of our lives; they are used for leisure, business, government, medical and educational purposes and have attracted billions of users. The challenges that stem from this wide adoption of SNs are vast. These include generating realistic social network topologies, awareness of user activities, topic and trend generation, estimation of user attributes from their social content, and behavior detection. This text has applications to widely used platforms such as Twitter and Facebook and appeals to students, researchers, and professionals in the field….(More)”