Social Media and the Internet of Things towards Data-Driven Policymaking in the Arab World: Potential, Limits and Concerns


Paper by Fadi Salem: “The influence of social media has continued to grow globally over the past decade. During 2016 social media played a highly influential role in what has been described as a “post truth” era in policymaking, diplomacy and political communication. For example, social media “bots” arguably played a key role in influencing public opinion globally, whether on the political or public policy levels. Such practices rely heavily on big data analytics, artificial intelligence and machine learning algorithms, not just in gathering and crunching public views and sentiments, but more so in pro-actively influencing public opinions, decisions and behaviors. Some of these government practices undermined traditional information mediums, triggered foreign policy crises, impacted political communication and disrupted established policy formulation cycles.

On the other hand, the digital revolution has expanded the horizon of possibilities for development, governance and policymaking. A new disruptive transformation is characterized by a fusion of inter-connected technologies where the digital, physical and biological worlds converge. This inter-connectivity is generating — and consuming — an enormous amount of data that is changing the ways policies are conducted, decisions are taken and day-to-day operations are carried out. Within this context, ‘big data’ applications are increasingly becoming critical elements of policymaking. Coupled with the rise of a critical mass of social media users globally, this ubiquitous connectivity and data revolution is promising major transformations in modes of governance, policymaking and citizen-government interaction.

In the Arab region, observations from public sector and decision-making organizations suggest that there is limited understanding of the real potential, the limitations, and the public concerns surrounding these big data sources. This report contextualizes the findings in light of the socio-technical transformations taking place in the Arab region, by exploring the growth of social media and building on past editions in the series. The objective is to explore and assess multiple aspects of the ongoing digital transformation in the Arab world and highlight some of the policy implications at a regional level. More specifically, the report aims to better inform our understanding of the convergence of social media and IoT data as sources of big data and their potential impact on policymaking and governance in the region. Ultimately, in light of the availability of massive amounts of data from physical objects and people, the questions tackled in the research are: What is the potential for data-driven policymaking and governance in the region? What are the limitations? And most importantly, what are the public concerns that need to be addressed by policymakers as they embark on the next phase of the digital governance transformation in the region?

In the Arab region, there are already numerous experiments and applications where data from social media and the “Internet of Things” (IoT) are informing and influencing government practices as sources of big data, effectively changing how societies and governments interact. The report has two main parts. In the first part, we explore the questions discussed in the previous paragraphs through a regional survey spanning the 22 Arab countries. In the second part, we explore growth and usage trends of influential social media platforms across the region, including Facebook, Twitter, LinkedIn and, for the first time, Instagram. The findings highlight important changes — and some stagnation — in the ways social media is infiltrating demographic layers in Arab societies, be it gender, age or language. Together, the findings provide important insights for guiding policymakers, business leaders and development efforts. More specifically, these findings can contribute to shaping directions and informing decisions on the future of governance and development in the Arab region….(More)”

Billboard coughs when it detects cigarette smoke


Springwise: “The World Health Organization reports that tobacco use kills approximately six million people each year. And despite having one of the lowest smoking rates in Europe, Sweden’s Apotek Hjartat pharmacy is running a quit-smoking campaign to help smokers make good on New Year’s resolutions. Located in Stockholm’s busy Odenplan square, the campaign billboard features a black and white image of a man.

When the integrated smoke detector identifies smoke, the man in the billboard image comes to life, emitting a sharp, hacking cough. So far, reactions from smokers have been mixed, while non-smokers and smokers alike have appreciated the novelty and surprise of the billboard.

Apotek Hjartat is not new to Springwise, having been featured last year with its virtual reality pain relief app. Pharmacies appear to be taking their role of providing a positive public service seriously, with one in New York charging a “man tax” to highlight the persistent gender wage gap….(More)”

Can artificial intelligence wipe out unconscious bias from your workplace?


Lydia Dishman at Fast Company: “Unconscious bias is exactly what it sounds like: The associations we make whenever we face a decision are buried so deep (literally—the brain structure responsible for this, the amygdala, is surrounded by the brain’s gray matter) that we’re as unaware of them as we are of having to breathe.

So it’s not much of a surprise that Ilit Raz, cofounder and CEO of Joonko, a new application that acts as diversity “coach” powered by artificial intelligence, wasn’t even aware at first of the unconscious bias she was facing as a woman in the course of a normal workday. Raz’s experience coming to grips with that informs the way she and her cofounders designed Joonko to work.

The tool joins a crowded field of AI-driven solutions for the workplace, but most of what’s on the market is meant to root out bias in recruiting and hiring. Joonko, by contrast, is setting its sights on illuminating unconscious bias in the types of workplace experiences where few people even think to look for it….

So far, a lot of these resources have been focused on addressing the hiring process. An integral part of the problem, after all, is getting enough diverse candidates in the recruiting pipeline so they can be considered for jobs. Apps like Blendoor hide a candidate’s name, age, employment history, criminal background, and even their photo so employers can focus on qualifications. Interviewing.io’s platform even masks applicants’ voices. Textio uses AI to parse communications in order to make job postings more gender-neutral. Unitive’s technology also focuses on hiring, with software designed to detect unconscious bias in Applicant Tracking Systems that read resumes and decide which ones to keep or scrap based on certain keywords.

But as Intel recently discovered, hiring diverse talent doesn’t always mean they’ll stick around. And while one 2014 estimate by Margaret Regan, head of the global diversity consultancy FutureWork Institute, found that 20% of large U.S. employers with diversity programs now provide unconscious-bias training—a number that could reach 50% by next year—that training doesn’t always work as intended. The reasons why vary, from companies putting programs on autopilot and expecting them to run themselves, to the simple fact that many employees who are trained ultimately forget what they learned a few days later.

Joonko doesn’t solve these problems. “We didn’t even start with recruiting,” Raz admits. “We started with task management.” She explains that when a company finally hires a diverse candidate, it needs to understand that the best way to retain them is to make sure they feel included and are given the same opportunities as everyone else. That’s where Joonko sees an opening…(More)”.

Discrimination by algorithm: scientists devise test to detect AI bias


At the Guardian: “There was the voice recognition software that struggled to understand women, the crime prediction algorithm that targeted black neighbourhoods and the online ad platform which was more likely to show men highly paid executive jobs.

Concerns have been growing about AI’s so-called “white guy problem” and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making.

Moritz Hardt, a senior research scientist at Google and a co-author of the paper, said: “Decisions based on machine learning can be both incredibly useful and have a profound impact on our lives … Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking.”

The paper was one of several on detecting discrimination by algorithms to be presented at the Neural Information Processing Systems (NIPS) conference in Barcelona this month, indicating a growing recognition of the problem.

Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago and co-author, said: “We are trying to enforce that you will not have inappropriate bias in the statistical prediction.”

The test is aimed at machine learning programs, which learn to make predictions about the future by crunching through vast quantities of existing data. Since the decision-making criteria are essentially learnt by the computer, rather than being pre-programmed by humans, the exact logic behind decisions is often opaque, even to the scientists who wrote the software….“Our criteria does not look at the innards of the learning algorithm,” said Srebro. “It just looks at the predictions it makes.”

Their approach, called Equality of Opportunity in Supervised Learning, works on the basic principle that when an algorithm makes a decision about an individual – be it to show them an online ad or award them parole – the decision should not reveal anything about the individual’s race or gender beyond what might be gleaned from the data itself.

For instance, if men were on average twice as likely to default on bank loans as women, and if you knew that a particular individual in a dataset had defaulted on a loan, you could reasonably conclude they were more likely (but not certain) to be male.
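The inference in that example is just Bayes' rule. As a quick illustration, the default rates below are invented solely to match the "twice as likely" setup in the excerpt:

```python
# Made-up rates for illustration: men default at twice the rate of women.
p_male = 0.5
p_default_given_male = 0.10
p_default_given_female = 0.05

# Total default rate across the population
p_default = (p_default_given_male * p_male
             + p_default_given_female * (1 - p_male))

# Bayes' rule: probability a known defaulter is male
p_male_given_default = p_default_given_male * p_male / p_default
print(p_male_given_default)  # ≈ 0.667: likelier male, but far from certain
```

With these numbers, knowing someone defaulted shifts the odds to about two-to-one male, which is exactly the "more likely, but not certain" leakage the criterion tolerates.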

However, if an algorithm calculated that the most profitable strategy for a lender was to reject all loan applications from men and accept all female applications, the decision would precisely confirm a person’s gender.

“This can be interpreted as inappropriate discrimination,” said Srebro….(More)”.
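The criterion described in the excerpt, often stated as requiring equal true positive rates across protected groups, can be sketched as a simple audit check. This is a minimal illustration of the idea, not code from the paper; the data and function names are invented:

```python
# Minimal sketch of an "equality of opportunity" audit: a classifier
# satisfies the criterion when its true positive rate (TPR) is the same
# for every protected group. All data below is a toy example.

def true_positive_rate(y_true, y_pred, group, value):
    """TPR among members of group `value` who truly qualify (y_true == 1)."""
    qualified = [p for t, p, g in zip(y_true, y_pred, group)
                 if t == 1 and g == value]
    return sum(qualified) / len(qualified) if qualified else 0.0

def equality_of_opportunity_gap(y_true, y_pred, group):
    """Largest TPR difference between any two groups (0.0 = perfectly fair)."""
    rates = {v: true_positive_rate(y_true, y_pred, group, v)
             for v in set(group)}
    return max(rates.values()) - min(rates.values()), rates

# Toy loan decisions: y_true = would repay, y_pred = approved
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = ["m", "m", "m", "m", "f", "f", "f", "f"]

gap, rates = equality_of_opportunity_gap(y_true, y_pred, group)
print(rates, gap)
```

Note that, as Srebro says of the real method, the check looks only at predictions and outcomes, never at the model's internals.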

Social Movements and World-System Transformation


Book edited by Jackie Smith, Michael Goodhart, Patrick Manning, and John Markoff: “At a particularly urgent world-historical moment, this volume brings together some of the leading researchers of social movements and global social change and other emerging scholars and practitioners to advance new thinking about social movements and global transformation. Social movements around the world today are responding to crisis by defying both political and epistemological borders, offering alternatives to the global capitalist order that are imperceptible through the modernist lens. Informed by a world-historical perspective, contributors explain today’s struggles as building upon the experiences of the past while also coming together globally in ways that are inspiring innovation and consolidating new thinking about what a fundamentally different, more equitable, just, and sustainable world order might look like.

This collection offers new insights into contemporary movements for global justice, challenging readers to appreciate how modernist thinking both colors our own observations and complicates the work of activists seeking to resolve inequities and contradictions that are deeply embedded in Western cultural traditions and institutions. Contributors consider today’s movements in the longue durée—that is, they ask how Occupy Wall Street, the Arab Spring, and other contemporary struggles for liberation reflect, build upon, or diverge from anti-colonial and other emancipatory struggles of the past. Critical to this volume is its exploration of how divisions over gender equity and diversity of national cultures and class have impacted what are increasingly intersectional global movements. The contributions of feminist and indigenous movements come to the fore in this collective exploration of what the movements of yesterday and today can contribute to our ongoing effort to understand the dynamics of global transformation in order to help advance a more equitable, just, and ecologically sustainable world….(More)”.

What does Big Data mean to public affairs research?


Ines Mergel, R. Karl Rethemeyer, and Kimberley R. Isett at LSE’s The Impact Blog: “…Big Data promises access to vast amounts of real-time information from public and private sources that should allow insights into behavioral preferences, policy options, and methods for public service improvement. In the private sector, marketing preferences can be aligned with customer insights gleaned from Big Data. In the public sector however, government agencies are less responsive and agile in their real-time interactions by design – instead using time for deliberation to respond to broader public goods. The responsiveness Big Data promises is a virtue in the private sector but could be a vice in the public.

Moreover, we raise several important concerns with respect to relying on Big Data as a decision and policymaking tool. While in the abstract Big Data is comprehensive and complete, in practice today’s version of Big Data has several features that should give public sector practitioners and scholars pause. First, most of what we think of as Big Data is really ‘digital exhaust’ – that is, data collected for purposes other than public sector operations or research. Data sets that might be publicly available from social networking sites such as Facebook or Twitter were designed for purely technical reasons. The degree to which this data lines up conceptually and operationally with public sector questions is purely coincidental. Use of digital exhaust for purposes not previously envisioned can go awry. A good example is Google’s attempt to predict the flu based on search terms.

Second, we believe there are ethical issues that may arise when researchers use data that was created as a byproduct of citizens’ interactions with each other or with a government social media account. Citizens are not able to understand or control how their data is used and have not given consent for storage and re-use of their data. We believe that research institutions need to examine their institutional review board processes to help researchers and their subjects understand important privacy issues that may arise. Too often it is possible to infer individual-level insights about private citizens from a combination of data points and thus predict their behaviors or choices.

Lastly, Big Data can only represent those that spend some part of their life online. Yet we know that certain segments of society opt in to life online (by using social media or network-connected devices), opt out (either knowingly or passively), or lack the resources to participate at all. The demography of the internet matters. For instance, researchers tend to use Twitter data because its API allows data collection for research purposes, but many forget that Twitter users are not representative of the overall population. Instead, as a recent Pew Social Media 2016 update shows, only 24% of all online adults use Twitter. Internet participation generally is biased in terms of age, educational attainment, and income – all of which correlate with gender, race, and ethnicity. We believe therefore that predictive insights are potentially biased toward certain parts of the population, making generalisations highly problematic at this time….(More)”

Microsoft Shows Searches Can Boost Early Detection of Lung Cancer


Dina Bass at BloombergTech: “Microsoft Corp. researchers want to give patients and doctors a new tool in the quest to find cancers earlier: web searches.

Lung cancer can be detected a year prior to current methods of diagnosis in more than one-third of cases by analyzing a patient’s internet searches for symptoms and demographic data that put them at higher risk, according to research from Microsoft published Thursday in the journal JAMA Oncology. The study shows it’s possible to use search data to give patients or doctors enough reason to seek cancer screenings earlier, improving the prospects for treatment for lung cancer, which is the leading cause of cancer deaths worldwide.

To train their algorithms, researchers Ryen White and Eric Horvitz scanned anonymous queries in Bing, the company’s search engine. They took searchers who had asked Bing something that indicated a recent lung cancer diagnosis, such as questions about specific treatments or the phrase “I was just diagnosed with lung cancer.”
Then they went back over the user’s previous searches to see if there were other queries that might have indicated the possibility of cancer prior to diagnosis. They looked for searches such as those related to symptoms, including bronchitis, chest pain and blood in sputum. The researchers reviewed other risk factors such as gender, age, race and whether searchers lived in areas with high levels of asbestos and radon, both of which increase the risk of lung cancer. And they looked for indications the user was a smoker, such as people searching for smoking cessation products like Nicorette gum.
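The retrospective log analysis described above can be sketched roughly as follows. The marker phrases, log format, and function names here are illustrative assumptions, not details from the actual study:

```python
# Rough sketch of the look-back analysis: find users whose queries indicate
# a diagnosis, then collect their earlier symptom-related queries. All
# marker strings and the log structure are invented for illustration.
from collections import defaultdict

DIAGNOSIS_MARKERS = ["just diagnosed with lung cancer",
                     "lung cancer treatment options"]
SYMPTOM_MARKERS = ["chest pain", "blood in sputum", "bronchitis", "nicorette"]

def earlier_symptom_queries(log):
    """log: list of (user_id, timestamp, query) tuples.
    Returns, per user with a diagnosis-indicating query, the symptom
    queries issued before that user's first such query."""
    first_diagnosis = {}
    for user, ts, q in log:
        if any(m in q.lower() for m in DIAGNOSIS_MARKERS):
            first_diagnosis[user] = min(ts, first_diagnosis.get(user, ts))
    evidence = defaultdict(list)
    for user, ts, q in log:
        if user in first_diagnosis and ts < first_diagnosis[user]:
            if any(m in q.lower() for m in SYMPTOM_MARKERS):
                evidence[user].append((ts, q))
    return dict(evidence)

log = [
    ("u1", 1, "persistent chest pain causes"),
    ("u1", 5, "blood in sputum"),
    ("u1", 9, "I was just diagnosed with lung cancer"),
    ("u2", 2, "best pizza near me"),
]
res = earlier_symptom_queries(log)
print(res)  # u1 has two prior symptom queries; u2 never appears
```

In the real study these look-back signals were combined with demographic and environmental risk factors to train the predictive model.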

How effective this method can be depends on how many false positives — people who don’t end up having cancer but are told they may — you are willing to tolerate, the researchers said. More false positives also mean catching more cases early. With one false positive in 1,000, 39 percent of cases can be caught a year earlier, according to the study. Dropping to one false positive per 100,000 still could allow researchers to catch 3 percent of cases a year earlier, Horvitz said. The company published similar research on pancreatic cancer in June….(More)”
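The trade-off the researchers describe (a stricter false-positive budget catches fewer cases early) can be illustrated with a toy threshold calculation. All scores, labels, and numbers below are invented for illustration and assume distinct scores:

```python
# Toy illustration of the false-positive/recall trade-off: the score
# threshold is set so that at most `max_fpr` of the healthy users are
# flagged, then we measure what share of true cases clears the threshold.

def recall_at_fpr(scores, labels, max_fpr):
    """Fraction of true cases (label 1) flagged when at most max_fpr of the
    negatives (label 0) may be flagged. Assumes distinct scores."""
    neg = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    allowed = int(max_fpr * len(neg))          # negatives we may flag
    # threshold sits at the (allowed + 1)-th highest negative score
    t = neg[allowed] if allowed < len(neg) else min(scores) - 1.0
    caught = sum(1 for s, y in zip(scores, labels) if y == 1 and s > t)
    return caught / sum(labels)

# 10 users who never develop cancer, 4 who do, with made-up risk scores
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1,  # negatives
          0.92, 0.85, 0.55, 0.35]                              # positives
labels = [0] * 10 + [1] * 4

print(recall_at_fpr(scores, labels, 0.10))  # stricter budget: 1 negative flagged
print(recall_at_fpr(scores, labels, 0.20))  # looser budget: 2 negatives flagged
```

As in the study, loosening the false-positive budget raises the share of true cases caught, which is why the usable detection rate depends on how many needless screenings one is willing to trigger.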

AI Ethics: The Future of Humanity 


Report by sparks & honey: “Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.

This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.

Infused with technology, we’re asking: what does it mean to be human?

Our report examines:

• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another (Download for free)”


Obama Brought Silicon Valley to Washington


Jenna Wortham at The New York Times: “…“Fixing” problems with technology often just creates more problems, largely because technology is never developed in a neutral way: It embodies the values and biases of the people who create it. Crime-predicting software, celebrated when it was introduced in police departments around the country, turned out to reinforce discriminatory policing. Facebook was recently accused of suppressing conservative news from its trending topics. (The company denied a bias, but announced plans to train employees to neutralize political, racial, gender and age biases that could influence what it shows its user base.) Several studies have found that Airbnb has worsened the housing crisis in some cities where it operates. In January, a report from the World Bank declared that tech companies were widening income inequality and wealth disparities, not improving them….

None of this was mentioned at South by South Lawn. Instead, speakers heralded the power of the tech community. John Lewis, the congressman and civil rights leader, gave a rousing talk that implored listeners to “get in trouble. Good trouble. Get in the way and make some noise.” Clay Dumas, chief of staff for the Office of Digital Strategy at the White House, told me in an email that the event could be considered part of a legacy to inspire social change and activism through technology. “In his final months in office,” he wrote, “President Obama wants to empower the generation of people that helped launch his candidacy and whose efforts carried him into office.”

…But a few days later, during a speech at Carnegie Mellon, Obama seemed to reckon with his feelings about the potential — and limits — of the tech world. The White House can’t be as freewheeling as a start-up, he said, because “by definition, democracy is messy. And part of government’s job is dealing with problems that nobody else wants to deal with.” But he added that he didn’t want people to become “discouraged and say, ‘I’m just not going to deal with government.’ ” Obama was the first American president to see technology as an engine to improve lives and accelerate society more quickly than any government body could. That lesson was apparent on the lawn. While I still don’t believe that technology is a panacea for society’s problems, I will always appreciate the first president who tried to bring what’s best about Silicon Valley to Washington, even if some of the bad came with it….(More)”

One Crucial Thing Can Help End Violence Against Girls


Eleanor Goldberg at The Huffington Post: “…There are statistics that demonstrate how many girls are in school, for example. But there’s a glaring lack of information on how many of them have dropped out ― and why ― concluded a new study, “Counting the Invisible Girls,” published this month by Plan International.

Why Data On Women And Girls Is Crucial

Without accurate information about the struggles girls face, such as abuse, child marriage, and dropout rates, governments and nonprofit groups can’t develop programs that cater to the specific needs of underserved girls. As a result, struggling girls across the globe have little chance of escaping the problems that prevent them from pursuing an education and becoming economically independent.

“If data used for policy-making is incomplete, we have a real challenge. Current data is not telling the full story,” Emily Courey Pryor, senior director of Data2X, said at the Social Good Summit in New York City last month. Data2X is a U.N.-led group that works with data collectors and policymakers to identify gender data issues and to help bring about solutions.

Plan International released its report to coincide with a number of major recent events….

How Data Helps Improve The Lives Of Women And Girls 

While data isn’t a panacea, it has proven in a number of instances to help marginalized groups.

Until last year, it was legal in Guatemala for a girl to marry at age 14 ― despite the numerous health risks associated with the practice. Young brides are more vulnerable to sexual abuse and more likely to face fatal complications related to pregnancy and childbirth than those who marry later.

To urge lawmakers to raise the minimum age of marriage, Plan International partnered with advocates and civil society groups to launch its “Because I am a Girl” initiative. It analyzed traditional Mayan laws and gathered evidence about the prevalence of child marriage and its impact on children’s lives. The group presented the information before Guatemala’s Congress and in August of last year, the minimum age for marriage was raised to 18.

A number of groups are heeding the call to continue to amass better data.

In May, the Bill and Melinda Gates Foundation pledged $80 million over the next three years to gather robust and reliable data.

In September, UN Women announced “Making Every Woman and Girl Count,” a public-private partnership that’s working to tackle the data issue. The program was unveiled at the U.N. General Assembly, and is working with the Gates Foundation, Data2X and a number of world leaders…(More)”