Does protest really work in cosy democracies?


Steve Crawshaw at LSE Impact Blog: “…If it is possible for peaceful crowds to force the collapse of the Berlin Wall or to unseat a Mubarak, how easy should it be for protesters to persuade a democratically elected leader to retreat from “mere” bad policy? In truth, not easy at all. Two million marched in the UK against the Iraq War in 2003 – and it made not a blind bit of difference to Tony Blair’s determination to proceed with a war that the UN Secretary-General described as illegal. Blair was re-elected two years later.

After the inauguration of Donald Trump in January 2017, millions took part in the series of Women’s Marches in the United States and around the world. It seemed – it was – a powerful defining moment. And yet, at least in the short-term, those remarkable protests were water off the presidential duck’s back. His response was mockery. In some respects, Trump could afford to mock. A man who has received 63 million votes is in a stronger position than the unelected leader who has to threaten or use violence to stay in power.

And yet.

One thing that protest in an authoritarian context and protest in a democratic context have in common is that the impact of protest – including delayed impact – remains uncertain, both for those who protest and those who are protested against.

Vaclav Havel argued that it was worth “living in truth” – speaking truth to power – even without any certainty of outcome. “Those that say individuals are not capable of changing anything are only looking for excuses.” In that context, what is perhaps most unacceptable is to mock those who take risks and seek change. Lord Charles Powell, former adviser to Margaret Thatcher, for example, explained to the umbrella protesters in Hong Kong in 2014 that they were foolish and naive. They should, he told them, learn to live with the “small black cloud” of anti-democratic pressures from Beijing. The protesters failed to heed Powell’s complacent message. In the words of Joshua Wong, on his way back to jail earlier in 2017: “You can lock up our bodies, but not our minds.”

Scepticism and failure are linked, as the Egyptian activist Asmaa Mahfouz made clear in a powerful video which helped trigger the uprising in 2011. The 26-year-old declared: “Whoever says it is not worth it because there will only be a handful of people, I want to tell him, ‘You are the reason for this.’ Sitting at home and just watching us on the news or Facebook leads to our humiliation.” The video went viral. Millions went out. The rest was history.

Even in a democracy, that same it-can’t-be-done logic sucks us in more often, perhaps, than we realize….(More)”.

Better Data for Better Policy: Accessing New Data Sources for Statistics Through Data Collaboratives


Medium Blog by Stefaan Verhulst: “We live in an increasingly quantified world, one where data is driving key business decisions. Data is claimed to be the new competitive advantage. Yet, paradoxically, even as our reliance on data increases and the call for agile, data-driven policy making becomes more pronounced, many Statistical Offices are confronted with shrinking budgets and an increased demand to adjust their practices to a data age. If Statistical Offices fail to find new ways to deliver “evidence of tomorrow” by leveraging new data sources, public policy may be formed without access to the full range of available and relevant intelligence — the kind of intelligence most business leaders already have. At worst, a thinning evidence base and lack of rigorous data foundation could lead to errors and more “fake news,” with possibly harmful public policy implications.

While my talk focused on the key ways data can inform and ultimately transform the full policy cycle (see full presentation here), a key premise I examined was the need to access, utilize and find insight in the vast reams of data and data expertise that exist in private hands, through the creation of new kinds of public and private partnerships, or “data collaboratives,” to establish more agile and data-driven policy making.

Applied to statistics, such approaches have already shown promise in a number of settings and countries. Eurostat, for instance, has experimented together with Statistics Belgium with leveraging call detail records provided by Proximus to document population density. Statistics Netherlands (CBS) recently launched a Center for Big Data Statistics (CBDS) in partnership with companies like Dell-EMC and Microsoft. Other National Statistics Offices (NSOs) are considering using scanner data for monitoring consumer prices (Austria); leveraging smart meter data (Canada); or using telecom data for complementing transportation statistics (Belgium). We are now living undeniably in an era of data. Much of this data is held by private corporations. The key task is thus to find a way of utilizing this data for the greater public good.
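
As a rough, hypothetical sketch of how call detail records might be turned into a population-density estimate (this is not the Eurostat/Proximus methodology; the data, column names, tower areas and the night-time "home tower" rule below are invented for illustration), one common approach assigns each subscriber a home tower based on where they are observed at night and counts subscribers per tower service area:

```python
# Hypothetical sketch: estimating residential density from call detail records
# (CDRs) by assigning each subscriber a "home" cell tower and counting
# subscribers per tower service area. All data and thresholds are invented.
import pandas as pd

# Illustrative CDR extract: one row per call/SMS event.
cdr = pd.DataFrame({
    "subscriber_id": ["a", "a", "a", "b", "b", "c", "c", "c"],
    "tower_id":      ["T1", "T1", "T2", "T2", "T2", "T1", "T3", "T3"],
    "hour":          [23, 2, 14, 22, 1, 3, 23, 0],
})

# Approximate service area of each tower in km^2, e.g. from a Voronoi
# tessellation of tower locations (made-up values).
tower_area_km2 = pd.Series({"T1": 4.0, "T2": 2.5, "T3": 6.0})

# 1. Keep night-time events (22:00-06:00), when most people are at home.
night = cdr[(cdr["hour"] >= 22) | (cdr["hour"] < 6)]

# 2. Home tower = the tower a subscriber uses most often at night.
home_tower = (
    night.groupby(["subscriber_id", "tower_id"]).size()
    .groupby(level="subscriber_id").idxmax()
    .map(lambda idx: idx[1])   # keep only the tower_id from the index pair
)

# 3. Residents per tower, converted into a density estimate.
residents = home_tower.value_counts()
density = (residents / tower_area_km2).dropna().rename("subscribers_per_km2")
print(density)
```

In practice such estimates are validated and calibrated against census counts before being used for official statistics.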

Value Proposition — and Challenges

There are several reasons to believe that public policy making and official statistics could indeed benefit from access to privately collected and held data. Among the value propositions:

  • Using private data can increase the scope and breadth of the evidence available to policymakers, and thus the insights it offers;
  • Using private data can increase the quality and credibility of existing data sets (for instance, by complementing or validating them);
  • Private data can increase the timeliness and thus relevance of often-outdated information held by statistical agencies (social media streams, for example, can provide real-time insights into public behavior); and
  • Private data can lower costs and increase other efficiencies (for example, through more sophisticated analytical methods) for statistical organizations….(More)”.

Our laws don’t do enough to protect our health data


At The Conversation: “A particularly sensitive type of big data is medical big data. Medical big data can consist of electronic health records, insurance claims, information entered by patients into websites such as PatientsLikeMe, and more. Health information can even be gleaned from web searches, Facebook and your recent purchases.

Such data can be used for beneficial purposes by medical researchers, public health authorities, and healthcare administrators. For example, they can use it to study medical treatments, combat epidemics and reduce costs. But others who can obtain medical big data may have more selfish agendas.

I am a professor of law and bioethics who has researched big data extensively. Last year, I published a book entitled Electronic Health Records and Medical Big Data: Law and Policy.

I have become increasingly concerned about how medical big data might be used and who could use it. Our laws currently don’t do enough to prevent harm associated with big data.

What your data says about you

Personal health information could be of interest to many, including employers, financial institutions, marketers and educational institutions. Such entities may wish to exploit it for decision-making purposes.

For example, employers presumably prefer healthy employees who are productive, take few sick days and have low medical costs. However, there are laws that prohibit employers from discriminating against workers because of their health conditions. These laws are the Americans with Disabilities Act (ADA) and the Genetic Information Nondiscrimination Act. So, employers are not permitted to reject qualified applicants simply because they have diabetes, depression or a genetic abnormality.

However, the same is not true for most predictive information regarding possible future ailments. Nothing prevents employers from rejecting or firing healthy workers out of the concern that they will later develop an impairment or disability, unless that concern is based on genetic information.

What non-genetic data can provide evidence regarding future health problems? Smoking status, eating preferences, exercise habits, weight and exposure to toxins are all informative. Scientists believe that biomarkers in your blood and other health details can predict cognitive decline, depression and diabetes.

Even bicycle purchases, credit scores and voting in midterm elections can be indicators of your health status.

Gathering data

How might employers obtain predictive data? An easy source is social media, where many individuals publicly post very private information. Through social media, your employer might learn that you smoke, hate to exercise or have high cholesterol.

Another potential source is wellness programs. These programs seek to improve workers’ health through incentives to exercise, stop smoking, manage diabetes, obtain health screenings and so on. While many wellness programs are run by third party vendors that promise confidentiality, that is not always the case.

In addition, employers may be able to purchase information from data brokers that collect, compile and sell personal information. Data brokers mine sources such as social media, personal websites, U.S. Census records, state hospital records, retailers’ purchasing records, real property records, insurance claims and more. Two well-known data brokers are Spokeo and Acxiom.

Some of the data employers can obtain identify individuals by name. But even information that does not provide obvious identifying details can be valuable. Wellness program vendors, for example, might provide employers with summary data about their workforce but strip away particulars such as names and birthdates. Nevertheless, de-identified information can sometimes be re-identified by experts. Data miners can match information to data that is publicly available….(More)”.
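
To make that re-identification risk concrete, here is a toy, hypothetical illustration of a linkage attack (all records, names and columns are invented): a "de-identified" health extract is joined to a public dataset on quasi-identifiers such as ZIP code, birth year and sex, re-attaching names to health details.

```python
# Toy linkage-attack sketch: invented data only. A "de-identified" health
# extract is matched to public records on quasi-identifiers, re-identifying
# anyone whose combination of ZIP code, birth year and sex is unique.
import pandas as pd

deidentified_health = pd.DataFrame({
    "zip":        ["98101", "98101", "10001"],
    "birth_year": [1980, 1975, 1990],
    "sex":        ["F", "M", "F"],
    "condition":  ["diabetes", "depression", "asthma"],
})

public_records = pd.DataFrame({   # e.g. voter rolls or property records
    "name":       ["Ann Smith", "Bob Jones", "Cara Lee"],
    "zip":        ["98101", "98101", "10001"],
    "birth_year": [1980, 1975, 1990],
    "sex":        ["F", "M", "F"],
})

# Join on the quasi-identifiers: any unique combination re-identifies a person.
reidentified = deidentified_health.merge(
    public_records, on=["zip", "birth_year", "sex"], how="inner"
)
print(reidentified[["name", "condition"]])
```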

Our Gutenberg Moment: It’s Time To Grapple With The Internet’s Effect On Democracy


Alberto Ibargüen at HuffPost: “When clashes wracked Charlottesville, many Americans saw neo-Nazi demonstrators as the obvious instigators. But others focused on counter-demonstrators, a view amplified by the president blaming “many sides.” The rift in perception underscored an uncomfortable but unavoidable truth about the flow of information today: Americans no longer have a shared foundation of facts upon which we can agree.

Politics has long been a messy, divisive business. I lived through the 1960s, a period of similar dissatisfaction, disillusionment, and disunity, brilliantly chronicled by Ken Burns’ new film “The Vietnam War” on PBS. But common, local knowledge — of history and current events — has always been the great equalizer in American society. Today, however, a decrease in shared knowledge has led to a collapse in trust. Over the past few years, we have watched our capacity to compromise wane as not only our politics, but also our most basic value systems, have become polarized.

The key difference between then and now is how news is delivered and consumed. At the beginning of our Republic, the reach of media was local and largely verifiable. That direct relationship between media outlets and their communities — local newspapers and, later, radio and TV stations — held until the second half of the 20th century. Network TV began to create a sense of national community, but that fractured with the sudden ability to offer targeted, membership-based models via cable.

But cable was nothing compared to Internet. Internet’s unique ability to personalize and to create virtual communities of interest accelerated the decline of newspapers and television business models and altered the flow of information in ways that we are still uncovering. “Media” now means digital and cable, cool mediums that require hot performance. Trust in all media, including traditional media, is at an all-time low, and we’re just now beginning to grapple with the threat to democracy posed by this erosion of trust.

Internet is potentially the greatest democratizing tool in history. It is also democracy’s greatest challenge. In offering access to information that can support any position and confirm any bias, social media has propelled the erosion of our common set of everyday facts….(More)”.

Political Ideology and Municipal Size as Incentives for the Implementation and Governance Models of Web 2.0 in Providing Public Services


Manuel Pedro Rodríguez Bolívar and Laura Alcaide Muñoz in the International Journal of Public Administration in the Digital Age: “The growing participation in social networking sites is altering the nature of social relations and changing the nature of political and public dialogue. This paper aims to contribute to the current debate on Web 2.0 technologies and their implications for local governance, through the identification of the perceptions of policy makers in local governments on the use of Web 2.0 in providing public services (reasons, advantages and risks) and on the change of the roles that these technologies could provoke in interactions between local governments and their stakeholders (governance models). This paper also analyzes whether municipal size is a main factor that could influence policy makers’ perceptions regarding these main topics. Findings suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. Municipal size is a factor that could influence policy makers’ perceptions….(More)”.

Fraud Data Analytics Tools and Techniques in Big Data Era


Paper by Sara Makki et al: “Fraudulent activities (e.g., suspicious credit card transactions, financial reporting fraud, and money laundering) are critical concerns to various entities including banks, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims, such as financial loss. Over the years, fraud analysis techniques have undergone rigorous development. Lately, however, the advent of Big Data has led to vigorous advancement of these techniques, since Big Data creates extensive opportunities to combat financial fraud. Given the massive amount of data that investigators need to sift through, integrating massive volumes of data from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach….(More)”.
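
As a minimal, hypothetical sketch of the kind of analytics such work surveys (not the authors' method; the transactions, column names and 3.5 cut-off below are invented), a simple first pass flags card transactions that are extreme outliers relative to each account's own spending history, using a robust modified z-score:

```python
# Hypothetical sketch: flag transactions that deviate sharply from an
# account's usual spending, using a median/MAD-based modified z-score
# (robust to the very outliers we are trying to catch). Data is invented.
import pandas as pd

tx = pd.DataFrame({
    "account": ["A", "A", "A", "A", "B", "B", "B"],
    "amount":  [12.0, 15.0, 11.0, 540.0, 80.0, 95.0, 90.0],
})

def modified_z(amounts: pd.Series) -> pd.Series:
    """Modified z-score based on the median absolute deviation (MAD),
    which a single extreme value cannot inflate the way a std can."""
    median = amounts.median()
    mad = (amounts - median).abs().median()   # assumes MAD > 0 for this data
    return 0.6745 * (amounts - median) / mad

tx["score"] = tx.groupby("account")["amount"].transform(modified_z)
tx["flagged"] = tx["score"].abs() > 3.5   # common cut-off for modified z-scores
print(tx[tx["flagged"]])                  # the 540.0 charge on account A
```

Production systems layer supervised models and network analysis on top of such rules, but the per-account outlier check illustrates the basic pattern-finding step.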

How to Use Social Media to Better Engage People Affected by Crises


Guide by the International Federation of Red Cross and Red Crescent Societies (IFRC): “Together with ICRC, and with the support of OCHA, we have published a brief guide on how to use social media to better engage people affected by crisis. The guide is geared towards staff in humanitarian organisations who are responsible for official social media channels.

In the past few years, the role of social media and digital technologies in times of disasters and crises has grown exponentially. During disasters like the 2015 Nepal earthquake, for instance, Facebook and Twitter were crucial components of the humanitarian response, allowing mostly local, but also international, actors involved in relief efforts to disseminate lifesaving messages. However, the use of social media by humanitarian organizations to engage and communicate with (not about) affected people is, to date, still vastly untapped, and largely under-researched and under-documented in terms of practical guidance (both thematic and technical), good practices and lessons learned.

This brief guide, trying to address this gap, provides advice on how to use social media effectively to engage with, and be accountable to, affected people through practical tips and case studies from within the Movement and the wider sector…(Guide)”.

Using Facebook data as a real-time census


Phys.org: “Determining how many people live in Seattle, perhaps of a certain age, perhaps from a specific country, is the sort of question that finds its answer in the census, a massive data dump for places across the country.

But just how fresh is that data? After all, the census is updated once a decade, and the U.S. Census Bureau’s smaller but more detailed American Community Survey, annually. There’s also a delay between when data are collected and when they are published. (The release of data for 2016 started gradually in September 2017.)

Enter Facebook, which, with some caveats, can serve as an even more current source of data, especially about migrants. That’s the conclusion of a study led by Emilio Zagheni, associate professor of sociology at the University of Washington, published Oct. 11 in Population and Development Review. The study is believed to be the first to demonstrate how present-day migration statistics can be obtained by compiling the same data that advertisers use to target their audience on Facebook, and by combining that source with information from the Census Bureau.

Migration indicates a variety of political and economic trends and is a major driver of population change, Zagheni said. As researchers further explore the increasing number of databases produced for advertisers, Zagheni argues, social scientists could leverage Facebook, LinkedIn and Twitter more often to glean information on geography, mobility, behavior and employment. And while there are some limits to the data – each platform is a self-selected, self-reporting segment of the population – the number of migrants according to Facebook could supplement the official numbers logged by the U.S. Census Bureau, Zagheni said….(Full Paper).
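
A rough, hypothetical sketch of the kind of adjustment such a study implies (not Zagheni's actual model; every number, rate and place below is invented): Facebook's ad platform reports how many users it classifies as "expats" from a given country, and a census benchmark such as the ACS is used to correct for the fact that Facebook users are only a subset of the population.

```python
# Hypothetical sketch: combining Facebook ad-audience counts with a census
# benchmark (e.g. ACS) to estimate migrant stocks. All figures are invented,
# and this is not the model used in the study.

# Facebook ad-audience estimates for one metro area (monthly active users
# classified as "expats" from each origin country).
fb_expats = {"Mexico": 120_000, "India": 45_000, "Philippines": 30_000}
fb_total_adults = 2_000_000

# Census benchmark for the same area.
acs_total_adults = 3_100_000
acs_foreign_born = {"Mexico": 210_000, "India": 60_000, "Philippines": 55_000}

# Naive correction: assume every group uses Facebook at the area's overall rate.
penetration = fb_total_adults / acs_total_adults
naive_estimate = {c: round(n / penetration) for c, n in fb_expats.items()}

# Calibration factor per group, learned where a census benchmark exists;
# it could then be applied to places or periods the census has not yet covered.
calibration = {c: acs_foreign_born[c] / naive_estimate[c] for c in fb_expats}

for country in fb_expats:
    print(f"{country}: naive {naive_estimate[country]:,}, "
          f"ACS {acs_foreign_born[country]:,}, "
          f"calibration x{calibration[country]:.2f}")
```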

Tech’s fight for the upper hand on open data


Rana Foroohar at the Financial Times: “One thing that’s becoming very clear to me as I report on the digital economy is that a rethink of the legal framework in which business has been conducted for many decades is going to be required. Many of the key laws that govern digital commerce (which, increasingly, is most commerce) were crafted in the 1980s or 1990s, when the internet was an entirely different place. Consider, for example, the US Computer Fraud and Abuse Act.

This 1986 law made it a federal crime to engage in “unauthorised access” to a computer connected to the internet. It was designed to prevent hackers from breaking into government or corporate systems. …While few hackers seem to have been deterred by it, the law is being used in turf battles between companies looking to monetise the most valuable commodity on the planet — your personal data. Case in point: LinkedIn vs HiQ, which may well become a groundbreaker in Silicon Valley.

LinkedIn is the dominant professional networking platform, a Facebook for corporate types. HiQ is a “data-scraping” company, one that accesses publicly available data from LinkedIn profiles and then mixes it up in its own quantitative black box to create two products — Keeper, which tells employers which of their employees are at greatest risk of being recruited away, and Skill Mapper, which provides a summary of the skills possessed by individual workers. LinkedIn allowed HiQ to do this for five years, before developing a very similar product to Skill Mapper, at which point LinkedIn sent the company a “cease and desist” letter, and threatened to invoke the CFAA if HiQ did not stop tapping its user data.

…Meanwhile, a case that might have been significant mainly to digital insiders is being given a huge publicity boost by Harvard professor Laurence Tribe, the country’s pre-eminent constitutional law scholar. He has joined the HiQ defence team because, as he told me, he believes the case is “tremendously important”, not only in terms of setting competitive rules for the digital economy, but in the realm of free speech. According to Prof Tribe, if you accept that the internet is the new town square, and “data is a central type of capital”, then it must be freely available to everyone — and LinkedIn, as a private company, cannot suddenly decide that publicly accessible, Google-searchable data is its private property….(More)”.

On the cultural ideology of Big Data


Nathan Jurgenson in The New Inquiry: “Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its grasping toward big truths that ultimately disappoint as our world grows only less knowable. New knowledge and new ways of understanding simultaneously produce new forms of nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction and falsifiability, is better at proliferating questions than it is at answering them. For instance, Einstein’s theories about the curvature of space and motion at the quantum level provide new knowledge and generate new unknowns that previously could not be pondered.

Since every theory destabilizes as much as it solidifies in our view of the world, the collective frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension looking for catharsis — a moment in which we could feel, if only for an instant, that we know something for sure. In contemporary culture, Big Data promises this relief.

As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.

As with the similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story. Big Data has thus spawned an entire industry (“predictive analytics”) as well as reams of academic, corporate, and governmental research; it has also sparked the rise of “data journalism” like that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the center of gravity in these fields not merely because of its grand epistemological claims but also because it’s well-financed. Twitter, for example, recently announced that it is putting $10 million into a “social machines” Big Data laboratory.

The rationalist fantasy that enough data can be collected with the “right” methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular. The term comes from Positive Philosophy (1830-1842), by Auguste Comte, who also coined the term sociology in this image. As Western sociology began to congeal as a discipline (departments, paid jobs, journals, conferences), Emile Durkheim, another of the field’s founders, believed it could function as a “social physics” capable of outlining “social facts” akin to the measurable facts that could be recorded about the physical properties of objects. It’s an arrogant view, in retrospect — one that aims for a grand, general theory that can explain social life, a view that became increasingly rooted as sociology became focused on empirical data collection.

A century later, that unwieldy aspiration has been largely abandoned by sociologists in favor of reorienting the discipline toward recognizing complexities rather than pursuing universal explanations for human sociality. But the advent of Big Data has resurrected the fantasy of a social physics, promising a new data-driven technique for ratifying social facts with sheer algorithmic processing power…(More)”