Open Data Exposed


Book by Bastiaan van Loenen, Glenn Vancauwenberghe, Joep Crompvoets and Lorenzo Dalla Corte: “This book is about open data, i.e. data that is free of barriers to (re)use. Open data aims to optimize the access, sharing and use of data from a technical, legal, financial, and intellectual perspective.

Data increasingly determines the way people live their lives today. Nowadays, we cannot imagine a life without real-time traffic information about our route to work, information about the daily news or information about the local weather. At the same time, citizens themselves are now constantly generating and sharing data and information via many different devices and social media systems. Especially for governments, the collection, management, exchange, and use of data and information have always been key tasks, since data is both the primary input to and output of government activities. Also for businesses, non-profit organizations, researchers and various other actors, data and information are essential….(More)”.

Positive deviance, big data, and development: A systematic literature review


Paper by Basma Albanna and Richard Heeks: “Positive deviance is a growing approach in international development that identifies those within a population who are outperforming their peers in some way, e.g., children in low‐income families who are well nourished when those around them are not. Analysing and then disseminating the behaviours and other factors underpinning positive deviance is demonstrably effective in delivering development results.

However, positive deviance faces a number of challenges that are restricting its diffusion. In this paper, using a systematic literature review, we analyse the current state of positive deviance and the potential for big data to address the challenges facing positive deviance. From this, we evaluate the promise of “big data‐based positive deviance”: this would analyse typical sources of big data in developing countries—mobile phone records, social media, remote sensing data, etc.—to identify both positive deviants and the factors underpinning their superior performance.

While big data cannot solve all the challenges facing positive deviance as a development tool, they could reduce time, cost, and effort; identify positive deviants in new or better ways; and enable positive deviance to break out of its current preoccupation with public health into domains such as agriculture, education, and urban planning. In turn, positive deviance could provide a new and systematic basis for extracting real‐world development impacts from big data…(More)”.
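To make the core idea concrete, here is a minimal, purely illustrative Python sketch of how positive deviants might be identified from data: within each peer group, flag units whose outcome sits well above the group norm. The column names, data and threshold are hypothetical and are not taken from the paper.

```python
# Illustrative sketch: flag "positive deviants" as units whose outcome is
# unusually good relative to their own peer group (hypothetical data).
import pandas as pd

records = pd.DataFrame({
    "village":   ["A", "A", "A", "B", "B", "B", "B"],
    "household": [1, 2, 3, 4, 5, 6, 7],
    "child_nutrition_score": [0.42, 0.45, 0.88, 0.55, 0.51, 0.93, 0.50],
})

grouped = records.groupby("village")["child_nutrition_score"]
# Standardise each household against its own peer group
z = (records["child_nutrition_score"] - grouped.transform("mean")) / grouped.transform("std")

# Households performing well above their peers are candidate positive deviants,
# whose behaviours would then be studied and disseminated.
positive_deviants = records[z > 1.0]
print(positive_deviants)
```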

Surveillance Studies: A Reader


Book edited by Torin Monahan and David Murakami Wood: “Surveillance is everywhere: in workplaces monitoring the performance of employees, social media sites tracking clicks and uploads, financial institutions logging transactions, advertisers amassing fine-grained data on customers, and security agencies siphoning up everyone’s telecommunications activities. Surveillance practices, although often hidden, have come to define the way modern institutions operate. Because of the growing awareness of the central role of surveillance in shaping power relations and knowledge across social and cultural contexts, scholars from many different academic disciplines have been drawn to “surveillance studies,” which in recent years has solidified as a major field of study.

Torin Monahan and David Murakami Wood’s Surveillance Studies is a broad-ranging reader that provides a comprehensive overview of the dynamic field. In fifteen sections, the book features selections from key historical and theoretical texts, samples of the best empirical research done on surveillance, introductions to debates about privacy and power, and cutting-edge treatments of art, film, and literature. While the disciplinary perspectives and foci of scholars in surveillance studies may be diverse, there is coherence and agreement about core concepts, ideas, and texts. This reader outlines these core dimensions and highlights various differences and tensions. In addition to a thorough introduction that maps the development of the field, the volume offers helpful editorial remarks for each section and brief prologues that frame the included excerpts. …(More)”.

When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
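As a purely illustrative sketch (not drawn from the article), the toy Python example below shows the mechanism in miniature: the same model is trained once on clean labels and once on labels an attacker has flipped for one subgroup, and the poisoned model learns to treat that subgroup differently.

```python
# Illustrative "data poisoning" sketch: flipping labels for one subgroup in the
# training data biases the learned model against that subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)                    # a sensitive attribute (0 or 1)
x = rng.normal(size=(n, 3))
y = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)         # true outcome, independent of group

features = np.column_stack([x, group])                # group is available to the model

# Attacker flips most positive labels to negative for group 1 in the training copy
y_poisoned = y.copy()
flip = (group == 1) & (y == 1) & (rng.random(n) < 0.6)
y_poisoned[flip] = 0

clean = LogisticRegression().fit(features, y)
poisoned = LogisticRegression().fit(features, y_poisoned)

for name, model in [("clean", clean), ("poisoned", poisoned)]:
    pos_rate_g1 = model.predict(features[group == 1]).mean()
    pos_rate_g0 = model.predict(features[group == 0]).mean()
    print(f"{name}: positive-prediction rate, group 1 = {pos_rate_g1:.2f}, group 0 = {pos_rate_g0:.2f}")
```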

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors could use deliberate bias attacks as a national-security threat, destabilizing societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.

This is how computers “predict the future”


Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
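As a rough illustration of what such a model looks like in practice, here is a minimal Python sketch using scikit-learn. The patient and feature counts echo those mentioned in the article, but the data are synthetic and the code is not the researchers’ actual pipeline.

```python
# Minimal random-forest sketch on synthetic data (illustrative, not the study's code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 5000, 1300                # sizes borrowed from the article
X = rng.normal(size=(n_patients, n_features))      # stand-in for clinical characteristics
# Synthetic outcome driven by a handful of the features, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n_patients) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A forest of decision trees, each trained on a random resample of patients and features
model = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
model.fit(X_train, y_train)

risk_scores = model.predict_proba(X_test)[:, 1]    # estimated risk for held-out patients
print("Held-out AUC:", roc_auc_score(y_test, risk_scores))
```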

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

The biggest pandemic risk? Viral misinformation


Heidi J. Larson at Nature: “A hundred years ago this month, the death rate from the 1918 influenza was at its peak. An estimated 500 million people were infected over the course of the pandemic; between 50 million and 100 million died, around 3% of the global population at the time.

A century on, advances in vaccines have made massive outbreaks of flu — and measles, rubella, diphtheria and polio — rare. But people still discount their risks of disease. Few realize that flu and its complications caused an estimated 80,000 deaths in the United States alone this past winter, mainly in the elderly and infirm. Of the 183 children whose deaths were confirmed as flu-related, 80% had not been vaccinated that season, according to the US Centers for Disease Control and Prevention.

I predict that the next major outbreak — whether of a highly fatal strain of influenza or something else — will not be due to a lack of preventive technologies. Instead, emotional contagion, digitally enabled, could erode trust in vaccines so much as to render them moot. The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat.

So, what is to be done? The Vaccine Confidence Project, which I direct, works to detect early signals of rumours and scares about vaccines, and so to address them before they snowball. The international team comprises experts in anthropology, epidemiology, statistics, political science and more. We monitor news and social media, and we survey attitudes. We have also developed a Vaccine Confidence Index, similar to a consumer-confidence index, to track attitudes.

Emotions around vaccines are volatile, making vigilance and monitoring crucial for effective public outreach. In 2016, our project identified Europe as the region with the highest scepticism around vaccine safety (H. J. Larson et al. EBioMedicine 12, 295–301; 2016). The European Union commissioned us to re-run the survey this summer; results will be released this month. In the Philippines, confidence in vaccine safety dropped from 82% in 2015 to 21% in 2018 (H. J. Larson et al. Hum. Vaccines Immunother. https://doi.org/10.1080/21645515.2018.1522468; 2018), after legitimate concerns arose about new dengue vaccines. Immunization rates for established vaccines against tetanus, polio and more also plummeted.
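Purely as an illustration of the kind of aggregate such monitoring produces (not the project’s actual methodology), the short Python sketch below turns hypothetical survey responses into a per-country, per-wave share of respondents who agree that vaccines are safe, comparable in form to the percentages quoted above.

```python
# Illustrative confidence aggregate from hypothetical survey responses.
import pandas as pd

# 1 = agrees that vaccines are safe, 0 = does not
responses = pd.DataFrame({
    "country": ["PH", "PH", "PH", "PH", "FR", "FR", "FR", "FR"],
    "year":    [2015, 2015, 2018, 2018, 2015, 2015, 2018, 2018],
    "agrees_vaccines_safe": [1, 1, 0, 1, 1, 0, 1, 1],
})

# A simple confidence measure: share of respondents agreeing, per country and wave
confidence = (
    responses.groupby(["country", "year"])["agrees_vaccines_safe"]
    .mean()
    .mul(100)
    .round(1)
    .rename("pct_confident_in_safety")
)
print(confidence)
```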

We have found that it is useful to categorize misinformation into several levels….(More)”.

The future’s so bright, I gotta wear blinders


Nicholas Carr’s blog: “A few years ago, the technology critic Michael Sacasas introduced the term “Borg Complex” to describe the attitude and rhetoric of modern-day utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool. (The Borg is an alien race in Star Trek that sucks up the minds of other races, telling its victims that “resistance is futile.”) Those afflicted with the complex, Sacasas observed, rely on a set of largely specious assertions to dismiss concerns about any ill effects of technological progress. The Borgers are quick, for example, to make grandiose claims about the coming benefits of new technologies (remember MOOCs?) while dismissing past cultural achievements with contempt (“I don’t really give a shit if literary novels go away”).

To Sacasas’s list of such obfuscating rhetorical devices, I would add the assertion that we are “only at the beginning.” By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.) Any attempt to put constraints on technologists and technology companies becomes, in this view, a short-sighted and possibly disastrous obstruction of technology’s march toward a brighter future for everyone — what Kelly is still calling the “long boom.” You ain’t seen nothing yet, so stay out of our way and let us work our magic.

In his books Empire and Communications (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think. These biases can, in the long run, exert a profound influence over the organization of society and the course of history. “Bias,” it seems to me, is exactly the right word. The media we use to communicate push us to communicate in certain ways, reflecting, among other things, the workings of the underlying technologies and the financial and political interests of the businesses or governments that promulgate the technologies. (For a simple but important example, think of the way personal correspondence has been changed by the shift from letters delivered through the mail to emails delivered via the internet to messages delivered through smartphones.) A bias is an inclination. Its effects are not inevitable, but they can be strong. To temper them requires awareness and, yes, resistance.

For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course….(More)”.

Challenges facing social media platforms in conflict prevention in Kenya since 2007: A case of Ushahidi platform


Paper by A.K. Njeru, B. Malakwen and M. Lumala in the International Academic Journal of Social Sciences and Education: “Throughout history, information has been a key factor in conflict management around the world. The media can play its important role as society’s watchdog by exposing to the masses what is essential but hidden; however, the same media may also be used to mobilize masses to violence. Social media can therefore act as a tool for widening the democratic space, but can also lead to destabilization of peace.

The aim of the study was to establish the challenges facing social media platforms in conflict prevention in Kenya since 2007, using the Ushahidi platform in Kenya as a case study. The paradigm that was found suitable for this study is pragmatism. The study used a mixed approach. In this study, interviews, focus group discussions and content analysis of the Ushahidi platform were chosen as the tools of data collection. In order to bring order, structure and interpretation to the collected data, the researcher systematically organized the data by coding it into categories and constructing matrices. After classifying the data, the researcher compared and contrasted it with the information retrieved from the literature review.

The study found that one major weak point of social media as a tool for conflict prevention is the lack of ethical standards and professionalism among its users. It is too liberal and thus can be used to spread unverified information and distorted facts that might be detrimental to peace building and conflict prevention. This has led some users to question the credibility of the information that is circulated through social media. The other weak point of social media as a tool for peace building is that it depends to a major extent on access to the internet. The availability of the internet in small units does not necessarily mean cheap access, so over time the high cost of internet access might affect the efficiency of social media as a tool. The study concluded that information credibility is essential if social media is to be effective as a tool in conflict prevention and peace building.

The nature of social media, which allows for anonymity of identity, gives room for unverified information to be floated around social media networks; this can be detrimental to conflict prevention and peace building initiatives. There is therefore a need for information verification and authentication by a trusted agent to offer information pertaining to violence, conflict prevention and peace building on social media platforms. The study recommends that the Ushahidi platform should be seen as an agent of social change and should discuss the social mobilization which it may be able to bring about. The study further suggests that if we can look at the Ushahidi platform as a development agent, we can take this a step further and ask, or try to find, a methodology that looks at the Ushahidi platform as a peacemaking agent, or as assisting in the maintenance of peace in a post-conflict setting, thereby tapping into the Ushahidi platform’s full potential….(More)”.

‘Do Not Track,’ the Privacy Tool Used by Millions of People, Doesn’t Do Anything


Kashmir Hill at Gizmodo: “When you go into the privacy settings on your browser, there’s a little option there to turn on the “Do Not Track” function, which will send an invisible request on your behalf to all the websites you visit telling them not to track you. A reasonable person might think that enabling it will stop a porn site from keeping track of what she watches, or keep Facebook from collecting the addresses of all the places she visits on the internet, or prevent third-party trackers she’s never heard of from following her from site to site. According to a recent survey by Forrester Research, a quarter of American adults use “Do Not Track” to protect their privacy. (Our own stats at Gizmodo Media Group show that 9% of visitors have it turned on.) We’ve got bad news for those millions of privacy-minded people, though: “Do Not Track” is like spray-on sunscreen, a product that makes you feel safe while doing little to actually protect you.

“Do Not Track,” as it was first imagined a decade ago by consumer advocates, was going to be a “Do Not Call” list for the internet, helping to free people from annoying targeted ads and creepy data collection. But only a handful of sites respect the request, the most prominent of which are Pinterest and Medium. (Pinterest won’t use offsite data to target ads to a visitor who’s elected not to be tracked, while Medium won’t send their data to third parties.) The vast majority of sites, including this one, ignore it….(More)”.
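For readers curious about the mechanics, here is a minimal, illustrative Python sketch of what “Do Not Track” amounts to at the HTTP level: the browser adds a DNT: 1 request header, and it is entirely up to the receiving site’s code whether to honour it, which is the article’s point. The server and handler names below are hypothetical.

```python
# Minimal sketch: Do Not Track is just a request header; honouring it is optional.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The header value is "1" when the visitor has enabled Do Not Track
        dnt_enabled = self.headers.get("DNT") == "1"
        # A site that respects the signal would skip tracking/analytics calls here;
        # nothing in the protocol forces it to, and most sites simply ignore it.
        message = b"DNT received - no tracking" if dnt_enabled else b"tracking as usual"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(message)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DemoHandler).serve_forever()
```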

How pro-trust initiatives are taking over the Internet


Sara Fisher at Axios: “Dozens of new initiatives have launched over the past few years to address fake news and the erosion of faith in the media, creating a measurement problem of its own.

Why it matters: So many new efforts are launching simultaneously to solve the same problem that it’s become difficult to track which ones do what and which ones are partnering with each other….

To name a few:

  • The Trust Project, which is made up of dozens of global news companies, announced this morning that the number of journalism organizations using the global network’s “Trust Indicators” now totals 120, making it one of the larger global initiatives to combat fake news. Some of these groups (like NewsGuard) work with Trust Project and are a part of it.
  • News Integrity Initiative (Facebook, Craig Newmark Philanthropic Fund, Ford Foundation, Democracy Fund, John S. and James L. Knight Foundation, Tow Foundation, AppNexus, Mozilla and Betaworks)
  • NewsGuard (Longtime journalists and media entrepreneurs Steven Brill and Gordon Crovitz)
  • The Journalism Trust Initiative (Reporters Without Borders, Agence France Presse, the European Broadcasting Union and the Global Editors Network)
  • Internews (Longtime international non-profit)
  • Accountability Journalism Program (American Press Institute)
  • Trusting News (Reynolds Journalism Institute)
  • Media Manipulation Initiative (Data & Society)
  • Deepnews.ai (Frédéric Filloux)
  • Trust & News Initiative (Knight Foundation, Facebook and Craig Newmark in affiliation with Duke University)
  • Our.News (Independently run)
  • WikiTribune (Wikipedia founder Jimmy Wales)

There are also dozens of fact-checking efforts being championed by different third-parties, as well as efforts being built around blockchain and artificial intelligence.

Between the lines: Most of these efforts include some sort of mechanism for allowing readers to physically discern real journalism from fake news via some sort of badge or watermark, but that presents problems as well.

  • Attempts to flag or call out news as being real and valid have in the past been rejected even further by those who wish to discredit vetted media.
  • For example, Facebook said in December that it will no longer use “Disputed Flags” — red flags next to fake news articles — to identify fake news for users, because it found that “putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”…(More)”.