Sentiment Analysis of Big Data: Methods, Applications, and Open Challenges


Paper by Shahid Shayaa et al. at IEEE: “With the development of IoT technologies and the massive popularity and acceptance of social media tools and applications, new doors of opportunity have opened for using data analytics to gain meaningful insights from unstructured information. In the era of big data, opinion mining and sentiment analysis (OMSA) have proved useful for categorizing opinions into different sentiments and, more generally, for gauging the mood of the public. Moreover, different OMSA techniques have been developed over the years on different datasets and applied in various experimental settings. In this regard, this study presents a comprehensive systematic literature review that discusses both the technical aspects of OMSA (techniques, types) and the non-technical aspects, in the form of application areas. Furthermore, the study highlights both the technical challenges involved in developing OMSA techniques and the non-technical challenges arising mainly from their application. These challenges are presented as directions for future research….(More)”.
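The core task the abstract describes, sorting opinions into sentiment categories, can be illustrated with a minimal lexicon-based sketch. The word lists and scoring rule below are assumptions made for demonstration only; they are not the OMSA techniques surveyed in the paper.

```python
# Minimal lexicon-based sentiment sketch (illustrative assumption only;
# not one of the OMSA techniques surveyed in the paper).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "angry"}

def sentiment(text: str) -> str:
    """Label text positive, negative, or neutral by counting lexicon hits."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

for post in ["I love this new service", "terrible support, very angry", "it arrived on time"]:
    print(f"{post!r} -> {sentiment(post)}")
```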

Social, Mobile, and Emerging Media around the World


Book edited by Alexander V. Laskin: “…edited collection of cutting-edge research on the practical applications of diverse types of emerging media technologies in a variety of industries and in many different regions of the world. In recent years, emergent social media have initiated a revolution comparable in impact to the industrial revolution or the invention of the Internet. Today, social media’s usage statistics are mind-boggling: almost two billion people are Facebook users, over one billion people communicate via WhatsApp, over forty billion pictures are posted on Instagram, and over one million snaps are sent on Snapchat daily. This edited collection analyzes the influence of emerging media technologies on governments, global organizations, non-profits, corporations, museums, restaurants, first responders, sports, medicine, television, and free speech. It studies such new media phenomena as brandjacking, crowd-funding, crowd-mapping, augmented reality, mHealth, and transmedia, focusing specifically on new media platforms like Facebook and Facebook Live, Twitter, Sina Weibo, Yelp, and other mobile apps….(More)”.

What Democracy Needs Now


The RSA Chief Executive’s Lecture 2018 by Matthew Taylor: “In 1989, with the fall of the Berlin Wall still echoing, Francis Fukuyama prophesied the global triumph of liberal democracy and the end of history. Thirty years on, it is not history that is in jeopardy but liberal democracy itself.

China – the rising global power – is thriving with a system which combines economic freedom with political autocracy. There is the growth of what Yascha Mounk calls illiberal democracies – countries with notionally free elections but without the liberal foundations of accountability, civil liberties and cultural openness. The issue with nations like Russia, Hungary and Turkey, and with those exhibiting a backlash against liberalism like America and Italy, is not just how they operate but the tendency for populism – when given the excuse or opportunity – to drift towards authoritarianism.

While the alternatives to the liberal democratic system grow more confident, the citizens living in those systems become more restless. Politicians and political institutions in these countries are viewed with dismay and contempt. We don’t like them, we don’t trust them, we don’t think they can solve the problems that most matter to us. The evidence, particularly from the US, is starting to suggest that disillusionment with politics is now becoming indifference towards democracy itself.

Will liberal democracy come back into fashion – is this a cycle or is it a trend? Behind the global patterns each country is different, but think of what is driving anger and disillusionment in our own.

Living standards flat-lining for longer than at any time since the industrial revolution. A decade of austerity leaving our public services threadbare and in a mode of continual crisis management. From social care to gangs, from cybercrime to mental health, how many of us think Government is facing up to the problems let alone developing solutions?

Inequality, having risen precipitously in the 1980s, remains stubbornly high, fuelling anger about elites and making not just the economic divide but all divisions worse.

Social media – where increasingly people get their information and engage in political discourse – has the seemingly in-built tendency to confirm prejudice and polarise opinion.

The great intertwined forces shaping the future – globalisation, unprecedented corporate power, technological change – continue to reinforce a sense in people, places and nations that they have no agency. Yet the hunger to take back control which started as tragedy is rapidly becoming a farce.

If this is the warm climate in which disillusionment has taken root and grown, it shows few signs of cooling.

For all its many failings, I have always believed that over the long term liberal democracy would carry on making lives better for most people most of the time. As a progressive my guiding star is what Roberto Unger has called ‘the larger life for all’. But for the first time, I view the future with more fear than hope.

There are those who disparage pessimism. To them the backlash against liberalism, the signs of a declining faith in democracy, are passing responses to failure and misfortune. Populism will give the system the wake-up call it needs. In time a new generation of leaders will renew the system. Populism need neither be extreme nor beget authoritarianism – look at Macron.

This underestimates the dangers that face us. It is too reminiscent of those who believed, until the results came in, that the British people would not take the risk of Brexit or that the Americans would reject the madness of Trump. It also underestimates how the turn against liberal democracy in one country can beget it in another. Paradoxically, today nationalists seem more able to collaborate with each other than countries ostensibly committed to internationalism. Chaos spreads more quickly than order. Global treaties and institutions take years to agree; they can break down overnight.

Of course, liberal democracy has failed over and again to live up to its own promise. But the fact that things need to change doesn’t mean they can’t get a whole lot worse.

We are also in danger of underestimating the coherence and confidence of liberalism’s critics. Last month Hungarian Prime Minister Viktor Orban made a powerful speech defending his brand of nationalist populism and boasting of his growing alliances across Europe. He appealed to the continent’s centre-right to recognise that it has more in common with conservative nationalism than with the EU’s liberal establishment. There are aspects of Orban’s analysis which have an understandable appeal to the mainstream, but remember that this is also a man who is unashamedly hostile to Islam, contemptuous of humanitarianism, and who is playing fast and loose with democratic safeguards in his own country.

We may disagree about how malign or dangerous are figures like Orban or Erdogan, or Trump or Salvini, but surely we can agree that those who want to defend the open, pluralistic, inclusive values of liberal democracy must try to make a better case for what we believe?

In part this involves defending the record of liberal societies in improving lives, creating opportunities and keeping the peace, at least between themselves. But it also means facing up to what is going wrong and what must change.

Complex problems are rarely addressed with a single solution. To ever again achieve the remarkable and unprecedented economic and social advances of the three decades after the Second World War, liberal democracy needs profound renewal. But change must start some place. This evening I want to argue that place should be the way we do democracy itself…(More) (Video)”.

Migration Data using Social Media


European Commission JRC Technical Report: “Migration is a top political priority for the European Union (EU). Data on international migrant stocks and flows are essential for effective migration management. In this report, we estimated the number of expatriates in 17 EU countries based on the number of Facebook Network users who are classified by Facebook as “expats”. To this end, we proposed a method for correcting the over- or under-representativeness of Facebook Network users compared to countries’ actual population.

This method uses Facebook penetration rates by age group and gender in the country of previous residence and the country of destination of a Facebook expat. The purpose of the Facebook Network expat estimates is not to reproduce migration statistics, but rather to generate separate estimates of expatriates, since migration statistics and Facebook Network expat estimates do not measure the same quantities of interest.
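One simple reading of the correction described above is to divide each age-gender stratum's raw count of Facebook expats by that stratum's Facebook penetration rate. The sketch below follows that assumption with invented numbers; the JRC report's actual estimation procedure is more elaborate.

```python
# Sketch of a penetration-rate correction for Facebook "expat" counts.
# This follows the simple reading described above and uses invented numbers;
# it is not the JRC report's actual estimation procedure.
fb_expats = {            # raw Facebook expat counts by (age group, gender)
    ("18-34", "F"): 12000,
    ("18-34", "M"): 15000,
    ("35-64", "F"): 8000,
    ("35-64", "M"): 9000,
}
penetration = {          # assumed Facebook penetration rates for the same strata
    ("18-34", "F"): 0.80,
    ("18-34", "M"): 0.75,
    ("35-64", "F"): 0.55,
    ("35-64", "M"): 0.50,
}

def corrected_estimate(counts, rates):
    """Scale each stratum's raw count by the inverse of its penetration rate."""
    return sum(n / rates[stratum] for stratum, n in counts.items())

print(f"Corrected expat estimate: {corrected_estimate(fb_expats, penetration):,.0f}")
```

Strata with low assumed penetration are scaled up the most, which hints at how corrections of this kind can overshoot.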

Estimates of social media application users who are classified as expats can be a timely, low-cost, and almost globally available source of information for estimating stocks of international migrants. Our methodology allowed for the timely capture of the increase of Venezuelan migrants in Spain. However, there are important methodological and data integrity issues with using social media data sources for studying migration-related phenomena. For example, our methodology led us to significantly overestimate the number of expats from the Philippines in Spain and in Italy, and there is no evidence that this overestimation is valid. While research on the use of big data sources for migration is in its infancy, and the diffusion of internet technologies in less developed countries is still limited, the use of big data sources can unveil useful insights on the quantitative and qualitative characteristics of migration….(More)”.

Google.gov


Adam J. White at New Atlantis: “Google exists to answer our small questions. But how will we answer larger questions about Google itself? Is it a monopoly? Does it exert too much power over our lives? Should the government regulate it as a public utility — or even break it up?

In recent months, public concerns about Google have become more pronounced. This February, the New York Times Magazine published “The Case Against Google,” a blistering account of how “the search giant is squelching competition before it begins.” The Wall Street Journal published a similar article in January on the “antitrust case” against Google, along with Facebook and Amazon, whose market shares it compared to Standard Oil and AT&T at their peaks. Here and elsewhere, a wide array of reporters and commentators have reflected on Google’s immense power — not only over its competitors, but over each of us and the information we access — and suggested that the traditional antitrust remedies of regulation or breakup may be necessary to rein Google in.

Dreams of war between Google and government, however, obscure a much different relationship that may emerge between them — particularly between Google and progressive government. For eight years, Google and the Obama administration forged a uniquely close relationship. Their special bond is best ascribed not to the revolving door, although hundreds of people have switched jobs from Google to the Obama administration or vice versa; nor to crony capitalism, although hundreds of meetings were held between the two; nor to lobbying prowess, although Google is one of the top corporate lobbyists.

Rather, the ultimate source of the special bond between Google and the Obama White House — and modern progressive government more broadly — has been their common ethos. Both view society’s challenges today as social-engineering problems, whose resolutions depend mainly on facts and objective reasoning. Both view information as being at once ruthlessly value-free and yet, when properly grasped, a powerful force for ideological and social reform. And so both aspire to reshape Americans’ informational context, ensuring that we make choices based only upon what they consider the right kinds of facts — while denying that there would be any values or politics embedded in the effort.

Addressing an M.I.T. sports-analytics conference in February, former President Obama said that Google, Facebook, and other prominent Internet services are “not just an invisible platform, but they are shaping our culture in powerful ways.” Focusing specifically on recent outcries over “fake news,” he warned that if Google and other platforms enable every American to personalize his or her own news sources, it is “very difficult to figure out how democracy works over the long term.” But instead of treating these tech companies as public threats to be regulated or broken up, Obama offered a much more conciliatory resolution, calling for them to be treated as public goods:

I do think that the large platforms — Google and Facebook being the most obvious, but Twitter and others as well that are part of that ecosystem — have to have a conversation about their business model that recognizes they are a public good as well as a commercial enterprise.

This approach, if Google were to accept it, could be immensely consequential….(More)”.

Ways to think about machine learning


Benedict Evans: “We’re now four or five years into the current explosion of machine learning, and pretty much everyone has heard of it. It’s not just that startups are forming every day or that the big tech platform companies are rebuilding themselves around it – everyone outside tech has read the Economist or BusinessWeek cover story, and many big companies have some projects underway. We know this is a Next Big Thing.

Going a step further, we mostly understand what neural networks might be, in theory, and we get that this might be about patterns and data. Machine learning lets us find patterns or structures in data that are implicit and probabilistic (hence ‘inferred’) rather than explicit, and that previously only people, not computers, could find. These techniques address a class of questions that were previously ‘hard for computers and easy for people’, or, perhaps more usefully, ‘hard for people to describe to computers’. And we’ve seen some cool (or worrying, depending on your perspective) speech and vision demos.
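As a purely illustrative aside (not from the essay), the sketch below makes the "implicit pattern" point concrete: rather than hand-writing a rule to separate two interleaved classes, a generic classifier infers the boundary from examples. scikit-learn and the synthetic dataset are assumptions chosen for brevity.

```python
# Illustrative only: infer an implicit class boundary from data instead of
# writing an explicit rule. scikit-learn and the synthetic "moons" dataset
# are assumptions, not anything referenced in the essay.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                     # no hand-written separation rule
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```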

I don’t think, though, that we yet have a settled sense of quite what machine learning means – what it will mean for tech companies or for companies in the broader economy, how to think structurally about what new things it could enable, or what machine learning means for all the rest of us, and what important problems it might actually be able to solve.

This isn’t helped by the term ‘artificial intelligence’, which tends to end any conversation as soon as it’s begun. As soon as we say ‘AI’, it’s as though the black monolith from the beginning of 2001 has appeared, and we all become apes screaming at it and shaking our fists. You can’t analyze ‘AI’.

Indeed, I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:

  • Data is the new oil
  • Google and China (or Facebook, or Amazon, or BAT) have all the data
  • AI will take all the jobs
  • And, of course, saying AI itself.

More useful things to talk about, perhaps, might be:

  • Automation
  • Enabling technology layers
  • Relational databases. …(More).

Blockchain Ethical Design Framework


Report by Cara LaPointe and Lara Fishbane: “There are dramatic predictions about the potential of blockchain to “revolutionize” everything from worldwide financial markets and the distribution of humanitarian assistance to the very way that we recognize human identity for billions of people around the globe. Some dismiss these claims as excessive technology hype, citing flaws in the technology or the robustness of incumbent solutions and infrastructure.

The reality will likely fall somewhere between these two extremes across multiple sectors. Where initial applications of blockchain were focused on the financial industry, current applications have rapidly expanded to address a wide array of sectors with major implications for social impact.

This paper aims to demonstrate the capacity of blockchain to create scalable social impact and to identify the elements that need to be addressed to mitigate challenges in its application. We are at a moment when technology is enabling society to experiment with new solutions and business models. Ubiquity and global reach, increased capabilities, and affordability have made technology a critical tool for solving problems, making this an exciting time to think about achieving greater social impact. We can address issues for underserved or marginalized people in ways that were previously unimaginable.

Blockchain is a technology that holds real promise for dealing with key inefficiencies and transforming operations in the social sector and for improving lives. Because of its immutability and decentralization, blockchain has the potential to create transparency, provide distributed verification, and build trust across multiple systems. For instance, blockchain applications could provide the means for establishing identities for individuals without identification papers, improving access to finance and banking services for underserved populations, and distributing aid to refugees in a more transparent and efficient manner. Similarly, national and subnational governments are putting land registry information onto blockchains to create greater transparency and avoid corruption and manipulation by third parties.
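The immutability the authors point to comes from each block committing to a hash of its predecessor, so altering an earlier record breaks every later link. The toy chain below is a minimal sketch of that single property and deliberately assumes away consensus, networking, and signatures.

```python
import hashlib
import json

# Toy hash chain illustrating immutability only: each block stores the hash
# of the previous block, so tampering with an earlier record invalidates
# every later link. Consensus, networking, and signatures are omitted.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
append_block(chain, "parcel 17 registered to owner A")
append_block(chain, "parcel 17 transferred to owner B")
print(chain_is_valid(chain))                             # True

chain[0]["record"] = "parcel 17 registered to owner C"   # attempted tampering
print(chain_is_valid(chain))                             # False: later hashes no longer match
```

In a real deployment this linkage is combined with distributed consensus, so that no single party can quietly rewrite the chain.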

From increasing access to capital, to tracking health and education data across multiple generations, to improving voter records and voting systems, blockchain has countless potential applications for social impact. As developers take on building these types of solutions, the social effects of blockchain can be powerful and lasting. With the potential for such a powerful impact, the design, application, and approach to the development and implementation of blockchain technologies have long-term implications for society and individuals.

This paper outlines why intentionality of design, which is important with any technology, is particularly crucial with blockchain, and offers a framework to guide policymakers and social impact organizations. As social media, cryptocurrencies, and algorithms have shown, technology is not neutral. Values are embedded in the code. How the problem is defined and by whom, who is building the solution, how it gets programmed and implemented, who has access, and what rules are created have consequences, in intentional and unintentional ways. In the applications and implementation of blockchain, it is critical to understand that seemingly innocuous design choices have resounding ethical implications on people’s lives.

This white paper addresses why intentionality of design matters, identifies the key questions that should be asked, and provides a framework to approach use of blockchain, especially as it relates to social impact. It examines the key attributes of blockchain, its broad applicability as well as its particular potential for social impact, and the challenges in fully realizing that potential. Social impact organizations and policymakers have an obligation to understand the ethical approaches used in designing blockchain technology, especially how they affect marginalized and vulnerable populations….(More)”

Organization after Social Media


Open access book by Geert Lovink and Ned Rossiter: “Organized networks are an alternative to the social media logic of weak links and their secretive economy of data mining. They put an end to freestyle friends, seeking forms of empowerment beyond the brief moment of joyful networking. This speculative manual calls for nothing less than social technologies based on enduring time. Analyzing contemporary practices of organization through networks as new institutional forms, organized networks provide an alternative to political parties, trade unions, NGOs, and traditional social movements. Dominant social media deliver remarkably little to advance decision-making within digital communication infrastructures. The world cries for action, not likes.

Organization after Social Media explores a range of social settings from arts and design, cultural politics, visual culture and creative industries, disorientated education and the crisis of pedagogy to media theory and activism. Lovink and Rossiter devise strategies of commitment to help claw ourselves out of the toxic morass of platform suffocation….(More)”.

Balancing Act: Innovation vs. Privacy in the Age of Data Portability


Thursday, July 12, 2018 @ 2 MetroTech Center, Brooklyn, NY 11201

RSVP here.

The ability of people to move or copy data about themselves from one service to another — data portability — has been hailed as a way of increasing competition and driving innovation. In many areas, such as through the Open Banking initiative in the United Kingdom, the practice of data portability is fully underway and propagating. The launch of the GDPR in Europe has also elevated the issue among companies and individuals alike. But recent online security breaches and other experiences of personal data being transferred surreptitiously from private companies (e.g., Cambridge Analytica’s appropriation of Facebook data) highlight how data portability can also undermine people’s privacy.

The GovLab at the NYU Tandon School of Engineering is pleased to present Jeni Tennison, CEO of the Open Data Institute, for its next Ideas Lunch, where she will discuss how data portability has been regulated in the UK and Europe, and what governments, businesses and people need to do to strike the balance between its risks and benefits.

Jeni Tennison is the CEO of the Open Data Institute. She gained her PhD from the University of Nottingham then worked as an independent consultant, specialising in open data publishing and consumption, before joining the ODI in 2012. Jeni was awarded an OBE for services to technology and open data in the 2014 New Year Honours.

Before joining the ODI, Jeni was the technical architect and lead developer for legislation.gov.uk. She contributed to the early linked data work on data.gov.uk, including helping to engineer new standards for publishing statistics as linked data. She continues her work within the UK’s public sector as a member of the Open Standards Board.

Jeni also works on international web standards. She was appointed to serve on the W3C’s Technical Architecture Group from 2011 to 2015 and in 2014 she started to co-chair the W3C’s CSV on the Web Working Group. She also sits on the Advisory Boards for Open Contracting Partnership and the Data Transparency Lab.

Twitter handle: @JeniT

We Need to Save Ignorance From AI


Christina Leuker and Wouter van den Bos in Nautilus:  “After the fall of the Berlin Wall, East German citizens were offered the chance to read the files kept on them by the Stasi, the much-feared Communist-era secret police service. To date, it is estimated that only 10 percent have taken the opportunity.

In 2007, James Watson, the co-discoverer of the structure of DNA, asked that he not be given any information about his APOE gene, one allele of which is a known risk factor for Alzheimer’s disease.

Most people tell pollsters that, given the choice, they would prefer not to know the date of their own death—or even the future dates of happy events.

Each of these is an example of willful ignorance. Socrates may have made the case that the unexamined life is not worth living, and Hobbes may have argued that curiosity is mankind’s primary passion, but many of our oldest stories actually describe the dangers of knowing too much. From Adam and Eve and the tree of knowledge to Prometheus stealing the secret of fire, they teach us that real-life decisions need to strike a delicate balance between choosing to know, and choosing not to.

But what if a technology came along that shifted this balance unpredictably, complicating how we make decisions about when to remain ignorant? That technology is here: It’s called artificial intelligence.

AI can find patterns and make inferences using relatively little data. Only a handful of Facebook likes are necessary to predict your personality, race, and gender, for example. Another computer algorithm claims it can distinguish between homosexual and heterosexual men with 81 percent accuracy, and homosexual and heterosexual women with 71 percent accuracy, based on their picture alone. An algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) can predict criminal recidivism from data like juvenile arrests, criminal records in the family, education, social isolation, and leisure activities with 65 percent accuracy….
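None of the cited models is public, but the general mechanism, predicting a sensitive attribute from a handful of weak signals, can be sketched with synthetic data. Everything in the snippet below is invented for illustration: the "likes", the hidden trait, and the plain logistic regression standing in for the proprietary systems.

```python
# Illustration only: predict a hidden attribute from a few binary signals.
# The synthetic "likes", the trait, and the logistic regression are
# assumptions; this does not reproduce any of the studies cited above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_people, n_likes = 1000, 10
likes = rng.integers(0, 2, size=(n_people, n_likes))                   # 10 binary "likes"
weights = rng.normal(size=n_likes)
trait = (likes @ weights + rng.normal(scale=0.5, size=n_people)) > 0   # hidden attribute

model = LogisticRegression(max_iter=1000).fit(likes[:800], trait[:800])
print(f"Held-out accuracy from just {n_likes} likes: {model.score(likes[800:], trait[800:]):.2f}")
```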

Recently, though, the psychologist Ralph Hertwig and the legal scholar Christoph Engel have published an extensive taxonomy of motives for deliberate ignorance. They identified two sets of motives, in particular, that are especially relevant to the need for ignorance in the face of AI.

The first set of motives revolves around impartiality and fairness. Simply put, knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response. For example, peer reviews of academic papers are usually anonymous. Insurance companies in most countries are not permitted to know all the details of their clients’ health before they enroll; they only know general risk factors. This type of consideration is particularly relevant to AI, because AI can produce highly prejudicial information….(More)”.