How Mobile Network Operators Can Help Achieve the Sustainable Development Goals Profitably


Press Release: “Today, the Digital Impact Alliance (DIAL) released its second paper in a series focused on the promise of data for development (D4D). The paper, Leveraging Data for Development to Achieve Your Triple Bottom Line: Mobile Network Operators with Advanced Data for Good Capabilities See Stronger Impact to Profits, People and the Planet, will be presented at GSMA’s Mobile 360 Africa in Kigali.

“The mobile industry has already taken a driving seat in helping reach the Sustainable Development Goals by 2030, and this research reinforces the role mobile network operators in lower-income economies can play in leveraging their network data for development and building a new data business safely and securely,” said Kate Wilson, CEO of the Digital Impact Alliance. “Mobile network operators (MNOs) hold unique data on customers’ locations and behaviors that can help development efforts. They have been reluctant to share data because there are inherent business risks, and doing so has been expensive and time-consuming. DIAL’s research illustrates a path forward for MNOs, clarifying which data is useful for achieving the SDGs and why acting now is critical to building a long-term data business.”

DIAL worked with Altai Consulting on both primary and secondary research to inform this latest paper. Primary research included one-on-one, in-depth interviews with more than 50 executives across the data for development value chain, including government officials, civil society leaders, mobile network operators and other private sector representatives from both developed and emerging markets. These interviews helped inform how operators can best tap into the shared value-creation opportunities that data for development provides.

Key findings from the in-depth interviews include:

  • There are several critical barriers that have prevented the scaled use of mobile data for social good – including 1) unclear market opportunities, 2) insufficient collaboration among MNOs, governments and non-profit stakeholders, and 3) regulatory and privacy concerns;
  • While it may be an ideal time for MNOs to increase their involvement in D4D efforts given the unique data they have that can inform development, market shifts suggest the window of opportunity to implement large-scale D4D initiatives will likely not remain open for much longer;
  • Mobile Network Operators with advanced data for good capabilities will have the most success in establishing sustainable D4D efforts and, as a result, in achieving triple bottom line mandates; and
  • Mobile Network Operators should focus on providing value-added insights and services rather than raw data and drive pricing and product innovation to meet the sector’s needs.

“Private sector data availability to drive public sector decision-making is a critical enabler for meeting SDG targets,” said Syed Raza, Senior Director of the Data for Development Team at the Digital Impact Alliance.  “Our data for development paper series aims to elevate the efforts of our industry colleagues with the information, insights and tools they need to help drive ethical innovation in this space….(More)”.

Let’s make private data into a public good


Article by Mariana Mazzucato: “The internet giants depend on our data. A new relationship between us and them could deliver real value to society….We should ask how the value of these companies has been created, how that value has been measured, and who benefits from it. If we go by national accounts, the contribution of internet platforms to national income (as measured, for example, by GDP) is represented by the advertisement-related services they sell. But does that make sense? It’s not clear that ads really contribute to the national product, let alone to social well-being—which should be the aim of economic activity. Measuring the value of a company like Google or Facebook by the number of ads it sells is consistent with standard neoclassical economics, which interprets any market-based transaction as signaling the production of some kind of output—in other words, no matter what the thing is, as long as a price is received, it must be valuable. But in the case of these internet companies, that’s misleading: if online giants contribute to social well-being, they do it through the services they provide to users, not through the accompanying advertisements.

This way we have of ascribing value to what the internet giants produce is completely confusing, and it’s generating a paradoxical result: their advertising activities are counted as a net contribution to national income, while the more valuable services they provide to users are not.

Let’s not forget that a large part of the technology and necessary data was created by all of us, and should thus belong to all of us. The underlying infrastructure that all these companies rely on was created collectively (via the tax dollars that built the internet), and it also feeds off network effects that are produced collectively. There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa. But the key issue here is not just sending a portion of the profits from data back to citizens but also allowing them to shape the digital economy in a way that satisfies public needs. Using big data and AI to improve the services provided by the welfare state—from health care to social housing—is just one example.

Only by thinking about digital platforms as collective creations can we construct a new model that offers something of real value, driven by public purpose. We’re never far from a media story that stirs up a debate about the need to regulate tech companies, which creates a sense that there’s a war between their interests and those of national governments. We need to move beyond this narrative. The digital economy must be subject to the needs of all sides; it’s a partnership of equals where regulators should have the confidence to be market shapers and value creators….(More)”.

Artificial Intelligence


Stanford Encyclopedia of Philosophy: “Artificial intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals) and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons). Such goals immediately ensure that AI is a discipline of considerable interest to many philosophers, and this has been confirmed (e.g.) by the energetic attempt, on the part of numerous philosophers, to show that these goals are in fact un/attainable. On the constructive side, many of the core formalisms and techniques used in AI come out of, and are indeed still much used and refined in, philosophy: first-order logic and its extensions; intensional logics suitable for the modeling of doxastic attitudes and deontic reasoning; inductive logic, probability theory, and probabilistic reasoning; practical reasoning and planning, and so on. In light of this, some philosophers conduct AI research and development as philosophy.

In the present entry, the history of AI is briefly recounted, proposed definitions of the field are discussed, and an overview of the field is provided. In addition, both philosophical AI (AI pursued as and out of philosophy) and philosophy of AI are discussed, via examples of both. The entry ends with some de rigueur speculative commentary regarding the future of AI….(More)”.

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates


Marshall Allen at ProPublica: “With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.
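The article does not disclose how the insurers’ or data brokers’ models actually work, but the kind of prediction it describes can be sketched as a simple scoring function over lifestyle and demographic proxies. Everything in the snippet below — the feature names, weights and example profile — is hypothetical and purely illustrative.

```python
# Illustrative only: a toy "lifestyle risk" score of the kind the article
# describes. The flags and weights are invented; they do not reflect any
# insurer's or data broker's actual model.

HYPOTHETICAL_WEIGHTS = {
    "recent_name_change": 0.8,   # proxy the article mentions for marriage or divorce
    "plus_size_purchase": 0.5,   # proxy the article mentions for depression risk
    "behind_on_bills": 0.6,
    "low_income_area": 0.7,
}

def predicted_cost_score(profile: dict) -> float:
    """Return a toy risk score: a weighted sum of binary lifestyle flags."""
    return sum(weight for flag, weight in HYPOTHETICAL_WEIGHTS.items()
               if profile.get(flag))

if __name__ == "__main__":
    consumer = {"recent_name_change": True, "behind_on_bills": True}
    print(f"Toy risk score: {predicted_cost_score(consumer):.1f}")  # prints 1.4
```

Real systems would feed far more variables into far more complex models, but the logic — turning consumer data-broker attributes into a predicted-cost signal — is the same in outline.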

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”…(More)”.

‘Data is a fingerprint’: why you aren’t as anonymous as you think online


Olivia Solon at The Guardian: “In August 2016, the Australian government released an “anonymised” data set comprising the medical billing records, including every prescription and surgery, of 2.9 million people.

Names and other identifying features were removed from the records in an effort to protect individuals’ privacy, but a research team from the University of Melbourne soon discovered that it was simple to re-identify people, and learn about their entire medical history without their consent, by comparing the dataset to other publicly available information, such as reports of celebrities having babies or athletes having surgeries.

The government pulled the data from its website, but not before it had been downloaded 1,500 times.

This privacy nightmare is one of many examples of seemingly innocuous, “de-identified” pieces of information being reverse-engineered to expose people’s identities. And it’s only getting worse as people spend more of their lives online, sprinkling digital breadcrumbs that can be traced back to them to violate their privacy in ways they never expected.

Nameless New York taxi logs were compared with paparazzi shots at locations around the city to reveal that Bradley Cooper and Jessica Alba were bad tippers. In 2017 German researchers were able to identify people based on their “anonymous” web browsing patterns. This week University College London researchers showed how they could identify an individual Twitter user based on the metadata associated with their tweets, while the fitness tracking app Polar revealed the homes and in some cases names of soldiers and spies.

“It’s convenient to pretend it’s hard to re-identify people, but it’s easy. The kinds of things we did are the kinds of things that any first-year data science student could do,” said Vanessa Teague, one of the University of Melbourne researchers to reveal the flaws in the open health data.
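To illustrate how little such a re-identification can require, here is a minimal sketch of a linkage attack: an inner join between a “de-identified” release and a public dataset on shared quasi-identifiers. The tables, column names and values below are invented for the example; real attacks work the same way, just at scale.

```python
import pandas as pd

# Invented example tables: a "de-identified" health release and a public
# dataset that happen to share ordinary quasi-identifiers.
health_release = pd.DataFrame({
    "zip":       ["02138", "02139", "02141"],
    "birthdate": ["1959-07-31", "1978-03-02", "1964-11-15"],
    "sex":       ["F", "M", "F"],
    "diagnosis": ["cardiac", "fracture", "diabetes"],
})
public_records = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones"],
    "zip":       ["02138", "02141"],
    "birthdate": ["1959-07-31", "1964-11-15"],
    "sex":       ["F", "F"],
})

# Re-identification is a plain inner join on the shared quasi-identifiers.
linked = health_release.merge(public_records, on=["zip", "birthdate", "sex"])
print(linked[["name", "diagnosis"]])
```

The more auxiliary datasets an attacker can draw on, the fewer quasi-identifiers are needed for a unique match.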

One of the earliest examples of this type of privacy violation occurred in 1996 when the Massachusetts Group Insurance Commission released “anonymised” data showing the hospital visits of state employees. As with the Australian data, the state removed obvious identifiers like name, address and social security number. Then the governor, William Weld, assured the public that patients’ privacy was protected….(More)”.

How Charities Are Using Artificial Intelligence to Boost Impact


Nicole Wallace at the Chronicle of Philanthropy: “The chaos and confusion of conflict often separate family members fleeing for safety. The nonprofit Refunite uses advanced technology to help loved ones reconnect, sometimes across continents and after years of separation.

Refugees register with the service by providing basic information — their name, age, birthplace, clan and subclan, and so forth — along with similar facts about the people they’re trying to find. Powerful algorithms search for possible matches among the more than 1.1 million individuals in the Refunite system. The analytics are further refined using the more than 2,000 searches that the refugees themselves do daily.

The goal: find loved ones or those connected to them who might help in the hunt. Since Refunite introduced the first version of the system in 2010, it has helped more than 40,000 people reconnect.
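Refunite has not published its matching code, so the sketch below only illustrates the general idea of fuzzy matching registry profiles on names and birthplaces, using Python’s standard difflib. The field names, weights and threshold are assumptions made for the example.

```python
from difflib import SequenceMatcher

# Hypothetical registry records: in practice the system compares millions of
# profiles on name, age, birthplace, clan and similar fields.
registry = [
    {"name": "Abdi Hassan Farah", "birthplace": "Baidoa"},
    {"name": "Amina Yusuf Ali", "birthplace": "Kismayo"},
]

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_matches(query: dict, records: list, threshold: float = 0.75) -> list:
    """Rank registry records whose name and birthplace resemble the query."""
    scored = []
    for record in records:
        score = (0.7 * similarity(query["name"], record["name"])
                 + 0.3 * similarity(query["birthplace"], record["birthplace"]))
        if score >= threshold:
            scored.append((score, record))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# A searcher misspells the name slightly; the fuzzy match still surfaces it.
print(candidate_matches({"name": "Abdi Hasan Farah", "birthplace": "Baidoa"}, registry))
```

Tolerating spelling variation matters here because names are often transliterated inconsistently across registration points.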

One factor complicating the work: Cultures define family lineage differently. Refunite co-founder Christopher Mikkelsen confronted this problem when he asked a boy in a refugee camp if he knew where his mother was. “He asked me, ‘Well, what mother do you mean?’ ” Mikkelsen remembers. “And I went, ‘Uh-huh, this is going to be challenging.’ ”

Fortunately, artificial intelligence is well suited to learning and recognizing different family patterns. But the technology still struggles with some simple things, like distinguishing the image of a chicken from that of a car. Mikkelsen believes refugees in camps could offset this weakness by tagging photographs — “car” or “not car” — to help train algorithms. Such tagging could earn them badly needed cash: the group hopes to set up a system that pays refugees for doing the work.

“To an American, earning $4 a day just isn’t viable as a living,” Mikkelsen says. “But to the global poor, getting an access point to earning this is revolutionizing.”

Another group, Wild Me, a nonprofit created by scientists and technologists, has built an open-source software platform that combines artificial intelligence and image recognition to identify and track individual animals. Using the system, scientists can better estimate the number of endangered animals and follow them over large expanses without using invasive techniques….
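Wild Me’s production system is considerably more sophisticated, but the core step — recognising an individual animal from a photo — can be sketched as nearest-neighbour matching of image feature vectors. The embeddings and identifiers below are invented; in practice they would come from a trained computer-vision model.

```python
import numpy as np

# Made-up feature vectors standing in for embeddings a vision model would
# produce from photos of known, catalogued individuals.
catalogue = {
    "zebra_017": np.array([0.9, 0.1, 0.3]),
    "zebra_042": np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(sighting: np.ndarray) -> tuple:
    """Return (similarity, id) of the catalogued individual closest to a new sighting."""
    return max((cosine_similarity(sighting, emb), name)
               for name, emb in catalogue.items())

# A new photo's embedding is closest to zebra_017, so it is logged as a re-sighting.
print(identify(np.array([0.85, 0.15, 0.35])))
```

Matching sightings of the same individual over time is what lets researchers estimate population sizes without tagging or capturing animals.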

To fight sex trafficking, police officers often go undercover and interact with people trying to buy sex online. Sadly, demand is high, and there are never enough officers.

Enter Seattle Against Slavery. The nonprofit’s tech-savvy volunteers created chatbots designed to significantly disrupt sex trafficking. Using input from trafficking survivors and law-enforcement agencies, the bots can hold simultaneous conversations with hundreds of people, drawing each into multiple, drawn-out exchanges and arranging rendezvous that never materialize. The group hopes to frustrate buyers so much that they give up their hunt for sex online….

A Philadelphia charity is using machine learning to adapt its services to clients’ needs.

Benefits Data Trust helps people enroll in government-assistance programs like food stamps and Medicaid. Since 2005, the group has helped more than 650,000 people access $7 billion in aid.

The nonprofit has data-sharing agreements with jurisdictions to access more than 40 lists of people who likely qualify for government benefits but do not receive them. The charity contacts those who might be eligible and encourages them to call the Benefits Data Trust for help applying….(More)”.

Is Open Data Working for Women in Africa?


Web Foundation: “Open data has the potential to change politics, economies and societies for the better by giving people more opportunities to engage in the decisions that affect their lives. But to reach the full potential of open data, it must be available to and used by all. Yet, across the globe — and in Africa in particular — there is a significant data gap.

This report — Is Open Data Working for Women in Africa? — maps the current state of open data for women across Africa, drawing on country-specific research in Nigeria, Cameroon, Uganda and South Africa, with additional data from a survey of experts in 12 countries across the continent.

Our findings show that, despite the potential for open data to empower people, it has so far changed little for women living in Africa.

Key findings

  • There is a closed data culture in Africa — Most countries lack an open culture and have legislation and processes that are not gender-responsive. Institutional resistance to disclosing data means few countries have open data policies and initiatives at the national level. In addition, gender equality legislation and policies are incomplete and failing to reduce gender inequalities. And overall, Africa lacks the cross-organisational collaboration needed to strengthen the open data movement.
  • There are barriers preventing women from using the data that is available — Cultural and social realities create additional challenges for women to engage with data and participate in the technology sector. 1GB of mobile data in Africa costs, on average, 10% of average monthly income. This high cost keeps women, who generally earn less than men, offline. Moreover, time poverty, the gender pay gap and unpaid labour create economic obstacles for women to engage with digital technology.
  • Key datasets to support the advocacy objectives of women’s groups are missing — Data on budget, health and crime are largely absent as open data. Nearly all datasets in sub-Saharan Africa (373 out of 375) are closed, and sex-disaggregated data, when available online, is often not published as open data. There are few open data policies to support the opening up of key datasets, and even where they do exist, they largely remain in draft form. With little investment in open data initiatives, good data management practices or the implementation of Right to Information (RTI) reforms, improvement is unlikely.
  • There is no strong base of research on women’s access to and use of open data — There is a lack of funding, little collaboration and few open data champions. Women’s groups, digital rights groups and gender experts rarely collaborate on open data and gender issues. To overcome this barrier, multi-stakeholder collaborations are essential to developing effective solutions….(More)”.

Big Data for the Greater Good


Book edited by Ali Emrouznejad and Vincent Charles: “This book highlights some of the most fascinating current uses of Big Data, the thought-provoking changes it is driving, and the biggest challenges it poses for our society. The explosive growth of data and advances in Big Data analytics have created a new frontier for innovation, competition, productivity, and well-being in almost every sector of our society, as well as a source of immense economic and societal value. From deriving insights from customer feedback to detecting fraud, preserving privacy, improving medical treatments, managing agriculture and food, and establishing low-voltage networks – many innovations for the greater good can stem from Big Data. Given the insights it provides, this book will be of interest to both researchers in the field of Big Data and practitioners from various fields who intend to apply Big Data technologies to improve their strategic and operational decision-making processes….(More)”.

Activism in the Social Media Age


PewInternet: “This month marks the fifth anniversary of the #BlackLivesMatter hashtag, which was first coined following the acquittal of George Zimmerman in the shooting death of unarmed black teenager Trayvon Martin. In the course of those five years, #BlackLivesMatter has become an archetypal example of modern protests and political engagement on social media: A new Pew Research Center analysis of public tweets finds the hashtag has been used nearly 30 million times on Twitter – an average of 17,002 times per day – as of May 1, 2018.

[Chart: Use of the #BlackLivesMatter hashtag on Twitter periodically spikes in response to major news events]

The conversations surrounding this hashtag often center on issues related to race, violence and law enforcement, and its usage periodically surges around real-world events – most prominently, during the police-related deaths of Alton Sterling and Philando Castile and the subsequent shootings of police officers in Dallas, Texas, and Baton Rouge, Louisiana, in July 2016.

The rise of the #BlackLivesMatter hashtag – along with others like #MeToo and #MAGA (Make America Great Again) – has sparked a broader discussion about the effectiveness and viability of using social media for political engagement and social activism. To that end, a new survey by the Center finds that majorities of Americans do believe these sites are very or somewhat important for accomplishing a range of political goals, such as getting politicians to pay attention to issues (69% of Americans feel these platforms are important for this purpose) or creating sustained movements for social change (67%).

Certain groups of social media users – most notably, those who are black or Hispanic – view these platforms as an especially important tool for their own political engagement. For example, roughly half of black social media users say these platforms are at least somewhat personally important to them as a venue for expressing their political views or for getting involved with issues that are important to them. Those shares fall to around a third among white social media users.

At the same time, the public as a whole expresses mixed views about the potential broader impact these sites might be having on political discourse and the nature of political activism. Some 64% of Americans feel that the statement “social media help give a voice to underrepresented groups” describes these sites very or somewhat well. But a larger share say social networking sites distract people from issues that are truly important (77% feel this way), and 71% agree with the assertion that “social media makes people believe they’re making a difference when they really aren’t.” Blacks and whites alike offer somewhat mixed assessments of the benefits and costs of activism on social media. But larger majorities of black Americans say these sites promote important issues or give voice to underrepresented groups, while smaller shares of blacks feel that political engagement on social media produces significant downsides in the form of a distracted public or “slacktivism.”…(More)”.

Data infrastructure literacy


Paper by Jonathan Gray, Carolin Gerlitz and Liliana Bounegru at Big Data & Society: “A recent report from the UN makes the case for “global data literacy” in order to realise the opportunities afforded by the “data revolution”. Here and in many other contexts, data literacy is characterised in terms of a combination of numerical, statistical and technical capacities. In this article, we argue for an expansion of the concept to include not just competencies in reading and working with datasets but also the ability to account for, intervene around and participate in the wider socio-technical infrastructures through which data is created, stored and analysed – which we call “data infrastructure literacy”. We illustrate this notion with examples of “inventive data practice” from previous and ongoing research on open data, online platforms, data journalism and data activism. Drawing on these perspectives, we argue that data literacy initiatives might cultivate sensibilities not only for data science but also for data sociology, data politics as well as wider public engagement with digital data infrastructures. The proposed notion of data infrastructure literacy is intended to make space for collective inquiry, experimentation, imagination and intervention around data in educational programmes and beyond, including how data infrastructures can be challenged, contested, reshaped and repurposed to align with interests and publics other than those originally intended….(More)”