Corporate Social Responsibility for a Data Age


Stefaan G. Verhulst in the Stanford Social Innovation Review: “Proprietary data can help improve and save lives, but fully harnessing its potential will require a cultural transformation in the way companies, governments, and other organizations treat and act on data….

We live, as it is now common to point out, in an era of big data. The proliferation of apps, social media, and e-commerce platforms, as well as sensor-rich consumer devices like mobile phones, wearable devices, commercial cameras, and even cars, generates zettabytes of data about the environment and about us.

Yet much of the most valuable data resides with the private sector—for example, in the form of click histories, online purchases, sensor data, and call data records. This limits its potential to benefit the public and to turn data into a social asset. Consider how data held by business could help improve policy interventions (such as better urban planning) or resiliency at a time of climate change, or help design better public services to increase food security.

Data responsibility suggests steps that organizations can take to break down these private barriers and foster so-called data collaboratives, or ways to share their proprietary data for the public good. For the private sector, data responsibility represents a new type of corporate social responsibility for the 21st century.

While Nepal’s Ncell belongs to a relatively small group of corporations that have shared their data, there are a few encouraging signs that the practice is gaining momentum. In Jakarta, for example, Twitter exchanged some of its data with researchers who used it to gather and display real-time information about massive floods. The resulting website, PetaJakarta.org, enabled better flood assessment and management processes. And in Senegal, the Data for Development project has brought together leading cellular operators to share anonymous data to identify patterns that could help improve health, agriculture, urban planning, energy, and national statistics.

Examples like these suggest that proprietary data can help improve and save lives. But to fully harness the potential of data, data holders need to fulfill at least three conditions. I call these “the three pillars of data responsibility.”…

The difficulty of translating insights into results points to some of the larger social, political, and institutional shifts required to achieve the vision of data responsibility in the 21st century. The move from data shielding to data sharing will require that we make a cultural transformation in the way companies, governments, and other organizations treat and act on data. We must incorporate new levels of pro-activeness, and make often-unfamiliar commitments to transparency and accountability.

By way of conclusion, here are four immediate steps—essential but not exhaustive—we can take to move forward:

  1. Data holders should issue a public commitment to data responsibility so that it becomes the default—an expected, standard behavior within organizations.
  2. Organizations should hire data stewards to determine what and when to share, and how to protect and act on data.
  3. We must develop a data responsibility decision tree to assess the value and risk of corporate data along the data lifecycle.
  4. Above all, we need a data responsibility movement; it is time to demand data responsibility to ensure data improves and safeguards people’s lives…(More)”

Dumpster diving made easier with food donation points


Springwise: “With food waste a substantial contributor to both environmental and social problems, communities around the world are trying to find ways to make better use of leftovers as well as reduce the overall production of unused foodstuffs. One of the biggest challenges in getting leftovers to the people who need them is the logistics of finding and connecting the relevant groups and transporting the food. Several on-demand apps, like this one that matches homeless shelters with companies that have leftover food, are taking the guesswork out of what to do with available food. And retailers are getting smarter, like this one in the United States, now selling produce that would previously have been rejected for aesthetic reasons only.

In Brazil, the Makers Society collective designed a campaign called Prato de Rua (Street Dish) to help link people in possession of edible leftovers with community members in need. The campaign centers around a sticker that is affixed to the side of city dumpsters requesting that donated food be left at the specific points. By providing a more organized approach to getting rid of leftover food, the collective hopes to help people think more carefully about what they are getting rid of and why. At the same time, the initiative helps people who would otherwise be forced to go through the contents of a dumpster for edible remains to access good food more safely and swiftly.

The campaign sticker is available for download for communities globally to take on and adapt the idea….(More)”

Understanding Actionable Intelligence for Social Policy


Video on “The Actionable Intelligence (AI) model is a new approach to policy development. The AI approach is supported by Integrated Data Systems (IDS), which link administrative records from multiple agencies to give a broader view of social problems and policy solutions. The use of linked administrative data allows policy analysts, program evaluators and social innovators to test new social program ideas at a much lower cost and higher speed. AI uses these IDS to create a newly informed dialogue among executive leaders, stakeholders and researchers regarding what works best, for whom and in the most cost-effective way….” (More videos from AISP-UPENN)

Understanding Actionable Intelligence for Social Policy from AISP_UPENN on Vimeo.
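The record linkage at the heart of an IDS can be sketched in miniature. The following is a hypothetical illustration, not the AISP architecture: the agency names, fields, and salted-hash pseudonymization scheme are all assumptions, standing in for whatever linkage method a real system uses.

```python
# Hypothetical sketch of IDS-style linkage: records from two agencies are
# joined on a salted hash of a personal identifier, so analysts can study
# linked outcomes without ever seeing raw IDs.
import hashlib

SALT = "agency-shared-secret"  # assumption: agencies agree on a shared salt

def pseudonym(person_id: str) -> str:
    """One-way pseudonym so raw identifiers never leave the agency."""
    return hashlib.sha256((SALT + person_id).encode()).hexdigest()[:16]

def link(records_a, records_b):
    """Join two agencies' records on the shared pseudonym."""
    index = {r["pid"]: r for r in records_b}
    return [
        {**a, **index[a["pid"]]}
        for a in records_a
        if a["pid"] in index
    ]

# Each agency pseudonymizes before sharing (illustrative fields only).
housing = [{"pid": pseudonym("123"), "housing_support": True}]
schools = [{"pid": pseudonym("123"), "attendance": 0.91},
           {"pid": pseudonym("456"), "attendance": 0.78}]

linked = link(housing, schools)
print(linked[0]["attendance"])  # 0.91 -- one person appears in both systems
```

The design point is the one the excerpt makes: the analytic value comes from the join across agencies, not from any single agency's records.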

Recovering from disasters: Social networks matter more than bottled water and batteries


At The Conversation: “Almost six years ago, Japan faced a paralyzing triple disaster: a massive earthquake, tsunami, and nuclear meltdowns that forced 470,000 people to evacuate from more than 80 towns, villages and cities. My colleagues and I investigated how communities in the hardest-hit areas reacted to these shocks, and found that social networks – the horizontal and vertical ties that connect us to others – are our most important defense against disasters….

We studied more than 130 cities, towns and villages in Tohoku, looking at factors such as exposure to the ocean, seawall height, tsunami height, voting patterns, demographics, and social capital. We found that municipalities which had higher levels of trust and interaction had lower mortality levels after we controlled for all of those confounding factors.

The kind of social tie that mattered here was horizontal, between town residents. It was a surprising finding given that Japan has spent a tremendous amount of money on physical infrastructure such as seawalls, but invested very little in building social ties and cohesion.

Based on interviews with survivors and a review of the data, we believe that communities with more ties, interaction and shared norms worked effectively to provide help to kin, family and neighbors. In many cases only 40 minutes separated the earthquake and the arrival of the tsunami. During that time, residents literally picked up and carried many elderly people out of vulnerable, low-lying areas. In high-trust neighborhoods, people knocked on doors of those who needed help and escorted them out of harm’s way….

In another study I worked to understand why some 40 cities, towns and villages across the Tohoku region had rebuilt, put children back into schools and restarted businesses at very different rates over a two-year period. Two years after the disasters some communities seemed trapped in amber, struggling to restore even half of their utility service, operating businesses and clean streets. Other cities had managed to rebound completely, placing evacuees in temporary homes, restoring gas and water lines, and clearing debris.

To understand why some cities were struggling, I looked into explanations including the impact of the disaster, the size of the city, financial independence, horizontal ties between cities, and vertical ties from the community to power brokers in Tokyo. In this phase of the recovery, vertical ties were the best predictor of strong recoveries.

Communities that had sent more powerful senior representatives to Tokyo in the years before the disaster did the best. These politicians and local ambassadors helped to push the bureaucracy to send aid, reach out to foreign governments for assistance, and smooth the complex zoning and bureaucratic impediments to recovery…

As communities around the world face disasters more and more frequently, I hope that my research on Japan after 3.11 can provide guidance to residents facing challenges. While physical infrastructure is important for mitigating disaster, communities should also invest time and effort in building social ties….(More)”

Why big data may be having a big effect on how our politics plays out


In The Conversation: “…big data… is an inconceivably vast mass of information, which at first glance would seem a giant mess; just white noise.

Unless you know how to decipher it.

According to a story first published in Zurich-based Das Magazin in December and more recently taken up by Motherboard, events such as Brexit and Trump’s ascendency may have been made possible through just such deciphering. The argument is that technology combining psychological profiling and data analysis may have played a pivotal part in exploiting unconscious bias at the individual voter level. The theory is this was used in the recent US election to increase or suppress votes to benefit particular candidates in crucial locations. It is claimed that the company behind this may be active in numerous countries.

The technology at play is based on the integration of a model of psychological profiling known as OCEAN. This uses the details contained within individuals’ digital footprints to create user-specific profiles. These map to the level of the individual, identifiable voter, who can then be manipulated by exploiting beliefs, preferences and biases that they might not even be aware of, but which their data has revealed about them in glorious detail.

As well as enabling the creation of tailored media content, this can also be used to create scripts of relevant talking points for campaign doorknockers to focus on, according to the address and identity of the householder to whom they are speaking.

This goes well beyond the scope and detail of previous campaign strategies. If the theory about the role of these techniques is correct, it signals a new landscape of political strategising. An active researcher in the field, writing about the company behind this technology (whose services Trump paid for during his election campaign), described the potential scale of such technologies:

Marketers have long tailored their placement of advertisements based on their target group, for example by placing ads aimed at conservative consumers in magazines read by conservative audiences. What is new about the psychological targeting methods implemented by Cambridge Analytica, however, is their precision and scale. According to CEO Alexander Nix, the company holds detailed psycho-demographic profiles of more than 220 million US citizens and used over 175,000 different ad messages to meet the unique motivations of their recipients….(More)”

RideComfort: A Development of Crowdsourcing Smartphones in Measuring Train Ride Quality


Adam Azzoug and Sakdirat Kaewunruen in Frontiers in Built Environment: “Among the many million train journeys taking place every day, not all of them are being measured or monitored for ride comfort. Improving ride comfort is important for railway companies to attract more passengers to their train services. Giving passengers the ability to measure ride comfort themselves using their smartphones allows railway companies to receive instant feedback from passengers regarding the ride quality on their trains. The purpose of this development is to investigate the feasibility of using smartphones to measure vibration-based ride comfort on trains. This can be accomplished by developing a smartphone application, analyzing the data recorded by the application, and verifying the data by comparing it to data from a track inspection vehicle or an accelerometer. A literature review was undertaken to examine the commonly used standards to evaluate ride comfort, such as the BS ISO 2631-1:1997 standard and Sperling’s ride index as proposed by Sperling and Betzhold (1956). The literature review has also revealed some physical causes of ride discomfort such as vibrations induced by roughness and irregularities present at the wheel/rail interface. We are the first to use artificial neural networks to map data derived from smartphones in order to evaluate ride quality. Our work demonstrates the merits of using smartphones to measure ride comfort aboard trains and suggests recommendations for future technological improvement. Our data argue that the accelerometers found in modern smartphones are of sufficient quality to be used in evaluating ride comfort. The ride comfort levels predicted both by BS ISO 2631-1 and Sperling’s index exhibit excellent agreement…(More)”
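The vibration measure behind such comfort ratings can be sketched simply. This is an illustrative approximation only: a real BS ISO 2631-1 evaluation applies frequency-weighting filters (e.g. Wk for vertical vibration) before the RMS, and the comfort thresholds below are simplified to non-overlapping values from the standard's overlapping bands.

```python
# Minimal sketch of a vibration-based comfort estimate from smartphone
# accelerometer samples: unweighted RMS acceleration as a rough proxy for
# the frequency-weighted RMS that BS ISO 2631-1 actually specifies.
import math

def rms(samples):
    """Root-mean-square of acceleration samples (m/s^2)."""
    return math.sqrt(sum(a * a for a in samples) / len(samples))

def comfort_label(a_rms):
    """Indicative ISO 2631-1 comfort bands (simplified; the standard's
    bands overlap, so these cut-offs are approximations)."""
    if a_rms < 0.315:
        return "not uncomfortable"
    if a_rms < 0.63:
        return "a little uncomfortable"
    if a_rms < 1.0:
        return "fairly uncomfortable"
    if a_rms < 1.6:
        return "uncomfortable"
    return "very uncomfortable"

vertical_accel = [0.1, -0.2, 0.15, -0.1, 0.25, -0.3]  # toy samples, m/s^2
level = rms(vertical_accel)
print(round(level, 3), comfort_label(level))
```

The paper's point about sensor quality is exactly this: if the phone's accelerometer resolves accelerations at this scale, a comfort index of this form becomes computable on the device itself.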

Unconscious gender bias in the Google algorithm


Interview in Metode with Londa Schiebinger, director of Gendered Innovations: “We were interested because the methods of sex and gender analysis are not in the university curriculum, yet they are very important. The first thing our group did was to develop those methods, and we present twelve methods on the website. We knew it would be very important to create case studies or concrete examples where sex and gender analysis added something new to the research. One of my favorite examples is machine translation. If you look at Google Translate, which is the main one in the United States – SYSTRAN is the main one in Europe – we found that it defaults to the masculine pronoun. So does SYSTRAN. If I put an article about myself into Google Translate, it defaults to «he said» instead of «she said». So, in an article about one of my visits to Spain, it defaults to «he thinks, he says…» and, occasionally, «it wrote». We wondered why this happened, and we found out: because Google Translate works on an algorithm, the problem is that «he said» appears on the web four times more often than «she said», so the machine gets it right more often if it chooses «he said». The algorithm is just set up for that. But, anyway, we found that there was a huge change in the English language from 1968 to the current time, and the proportion of «he said» to «she said» changed from 4-to-1 to 2-to-1. But, still, the translation does not take this into account. So we went to Google and we said «Hey, what is going on?» and they said «Oh, wow, we didn’t know, we had no idea!». So what we recognized is that there is an unconscious gender bias in the Google algorithm. They did not intend to do this at all, so now there are a lot of people who are trying to fix it….

How can you fix that?

Oh, well, this is the thing! …I think algorithms in general are a problem because if there is any kind of unconscious bias in the data, the algorithm just returns that to you. So even though Google has policies, company policies, to support gender equality, they had an unconscious bias in their product and they do not mean to. Now that they know about it, they can try to fix it….(More)”
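The frequency-default behavior Schiebinger describes can be shown with a toy model. This is a deliberately simplified sketch, not Google's actual system: it only demonstrates how a corpus-frequency tie-breaker mechanically reproduces whatever bias the training text contains.

```python
# Toy illustration: when the source pronoun is ambiguous, a
# corpus-frequency tie-breaker picks whichever target phrase is more
# common -- so "he said" wins whenever it outnumbers "she said".
from collections import Counter

corpus = ("he said " * 4 + "she said " * 1).split()  # 4:1 ratio, as in 1968
bigrams = Counter(zip(corpus, corpus[1:]))

def resolve_pronoun(verb="said"):
    """Pick the pronoun that most often precedes `verb` in the corpus."""
    candidates = {p: bigrams[(p, verb)] for p in ("he", "she")}
    return max(candidates, key=candidates.get)

print(resolve_pronoun())  # 'he' -- the bias simply mirrors the corpus
```

Change the ratio in the corpus line and the output flips, which is the interview's point: the model has no intent, it just returns the statistics of its data.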

Rules for a Flat World – Why Humans Invented Law and How to Reinvent It for a Complex Global Economy


Book by Gillian Hadfield: “… picks up where New York Times columnist Thomas Friedman left off in his influential 2005 book, The World is Flat. Friedman was focused on the infrastructure of communications and technology: the new web-based platform that allows business to follow the hunt for lower costs, higher value and greater efficiency around the planet seemingly oblivious to the boundaries of nation states. Hadfield peels back this technological platform to look at the ‘structure that lies beneath’—our legal infrastructure, the platform of rules about who can do what, when and how. Often taken for granted, economic growth throughout human history has depended at least as much on the evolution of new systems of rules to support ever-more complex modes of cooperation and trade as it has on technological innovation. When Google rolled out YouTube in over one hundred countries around the globe simultaneously, for example, it faced not only the challenges of technology but also the staggering problem of how to build success in the context of a bewildering and often conflicting patchwork of nation-state-based laws and legal systems affecting every aspect of the business: contract, copyright, encryption, censorship, advertising and more. Google is not alone. A study presented at the World Economic Forum in Davos in 2011 found that for global firms, the number one challenge of the modern economy is increasing complexity, and the number one source of complexity is law. Today, even our startups, the engines of economic growth, are global from Day One.

Put simply, the law and legal methods on which we currently rely have failed to evolve along with technology. They are increasingly unable to cope with the speed, complexity, and constant border-crossing of our new globally inter-connected environment. Our current legal systems are still rooted in the politics-based nation state platform on which the industrial revolution was built. Hadfield argues that even though these systems supported fantastic growth over the past two centuries, today they are too slow, costly, cumbersome and localized to support the exponential rise in economic complexity they fostered. …

The answer to our troubles with law, however, is not the one critics usually reach for—to have less of it. Recognizing that law provides critical infrastructure for the cooperation and collaboration on which economic growth is built is the first step, Hadfield argues, to building a legal environment that does more of what we need it to do and less of what we don’t. …(More)”

What Communication Can Contribute to Data Studies: Three Lenses on Communication and Data


Andrew Schrock at the International Journal of Communication: “We are awash in predictions about our data-driven future. Enthusiasts believe big data imposes new ways of knowing, while critics worry it will enable powerful regimes of institutional control. This debate has been of keen interest to communication scholars. To encourage conceptual clarity, this article draws on communication scholarship to suggest three lenses for data epistemologies. I review the common social scientific perspective of communication as data. A data as discourse lens interrogates the meanings that data carries. Communication around data describes moments where data are constructed. By employing multiple perspectives, we might understand how data operate as a complex structure of dominance….(More)”

Troopers Use ‘Big Data’ to Predict Crash Sites


Jenni Bergal at Pew Charitable Trusts: “As Tennessee Highway Patrol Sgt. Anthony Griffin patrolled an area near Murfreesboro one morning in January 2014, he gave a young woman a ticket for driving her Geo Prizm without wearing a seat belt.

About four hours later, Griffin was dispatched to help out at the scene of a major accident a few miles away. A car had veered off the road, sailed over a bridge, struck a utility pole and landed in a frozen pond. When Griffin went to question the driver, who appeared uninjured, he was shocked to find it was the same woman he had ticketed earlier.

She told him she had been wearing her seat belt only because he had given her a ticket. She believed it had saved her life. And if it hadn’t been for new crash prediction software his agency was using, Griffin said he wouldn’t have been in that spot to issue her the ticket.

“I’m in my 21st year of law enforcement and I’ve never come across anything where I could see the fruit of my work in this fashion,” said Griffin, who is now a lieutenant. “It was amazing.”

As more and more states use “big data” for everything from catching fraudsters to reducing health care costs, some highway patrols are tapping it to predict where serious or fatal traffic accidents are likely to take place so they can try to prevent them….

Indiana State Police decided to take a different approach, and are making their predictive crash analytics program available to the public, as well as troopers.

A color-coded Daily Crash Prediction map, which went online in November, pulls together data that includes crash reports from every police agency in the state dating to 2004, daily traffic volume, historical weather information and the dates of major holidays, said First Sgt. Rob Simpson….(More)”
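The kind of data fusion such a prediction map performs can be sketched as follows. The fields, weights, and color cut-offs here are purely illustrative assumptions; the article does not describe Indiana's actual model or features.

```python
# Hypothetical sketch of crash-map data fusion: historical crash counts,
# traffic volume, weather, and holiday flags are combined per road
# segment into a score, then binned into map colors.
def crash_risk(segment):
    """Weighted, capped score in [0, 1]; weights are illustrative."""
    score = (
        0.5 * segment["crashes_per_year"] / 10       # crash history
        + 0.3 * segment["daily_traffic"] / 50_000    # exposure
        + 0.15 * (1.0 if segment["bad_weather"] else 0.0)
        + 0.05 * (1.0 if segment["holiday"] else 0.0)
    )
    return min(score, 1.0)

def color_code(score):
    """Color bands for a daily prediction map."""
    return "red" if score > 0.66 else "yellow" if score > 0.33 else "green"

segment = {"crashes_per_year": 8, "daily_traffic": 40_000,
           "bad_weather": True, "holiday": False}
risk = crash_risk(segment)
print(color_code(risk))
```

The operational idea in the excerpt is simply that the combined score tells a trooper where to be before the crash happens, rather than after.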