Can social media, loud and inclusive, fix world politics?


…at The Conversation: “‘Privacy is no longer a social norm,’ said Facebook founder Mark Zuckerberg in 2010, as social media took a leap to bring more private information into the public domain.

But what does it mean for governments, citizens and the exercise of democracy? Donald Trump is clearly not the first leader to use his Twitter account as a way to both proclaim his policies and influence the political climate. Social media present novel challenges to strategic policy and have become a managerial issue for many governments.

But social media also offer a free platform for public participation in government affairs. Many argue that the rise of social media technologies can give citizens and observers a better opportunity to identify the pitfalls of governments and their policies.

As governments embrace the role of social media and the influence of negative or positive feedback on the success of their projects, they are also using this tool to their own advantage by spreading fabricated news.

This much freedom of expression and opinion can be a double-edged sword.

A tool that triggers change

On the positive side, social media include social networking applications such as Facebook and Google+, microblogging services such as Twitter, blogs, video blogs (vlogs), wikis, and media-sharing sites such as YouTube and Flickr, among others.

Social media, as collaborative and participatory tools, connect users with each other and help shape various communities. Playing a key role in delivering public-service value to citizens, they also help people engage in politics and policy-making through information and communication technologies (ICTs), making those processes easier to understand.

Today four out of five countries in the world have social media features on their national portals to promote interactive networking and communication with citizens. Although we don’t have any information about the effectiveness of such tools or whether they are used to their full potential, 20% of these countries report that they have “resulted in new policy decisions, regulation or service”.

Social media can be an effective tool for triggering changes in government policies and services if used well. They can help prevent corruption, as they offer a direct way of reaching citizens. In developing countries, corruption is often linked to governmental services that lack automated processes or transparency in payments.

The UK is taking the lead on this issue. Its anti-corruption innovation hub aims to connect several stakeholders – including civil society, law enforcement and technology experts – and direct their efforts toward a more transparent society.

With social media, governments can improve and change the way they communicate with their citizens – and citizens can, in turn, question government projects and policies. In Kazakhstan, for example, a migration-related legislative amendment entered into force in early January 2017, compelling property owners to register people residing in their homes immediately or face a penalty charge starting in February 2017.

Citizens were unprepared for this requirement, and many responded with indignation on social media. At first the government ignored this reaction. However, as anger soared on social media, the government took action and introduced a new service to facilitate the registration of temporary citizens…

But the campaigns that result do not always evolve into positive change.

Egypt and Libya have faced several major crises over the last few years, along with political instability and domestic terrorism. The social media influence that triggered the Arab Spring did not enable these political systems to turn from autocracy to democracy.

Brazil exemplifies a government’s failure to react properly to a massive social media outburst. In June 2013 people took to the streets to protest the rising fares of public transportation. Citizens channelled their anger and outrage through social media to mobilise networks and generate support.

The Brazilian government didn’t understand that “the message is the people”. Though the riots some called the “Tropical Spring” faded rather abruptly in the months that followed, they had a major and devastating impact on Brazil’s political system, culminating in the impeachment of President Rousseff in late 2016 and the worst recession in Brazil’s history.

As in the Arab Spring countries, the use of social media in Brazil did not result in economic improvement. The country has tumbled into depression, and unemployment has risen to 12.6%…

Governments typically ask, “How can we adapt social media to the way we do e-services?” and then try to shape their policies accordingly. They would be wiser to ask, “How can social media enable us to do things differently, in ways they have never been done before?” – that is, policy-making in collaboration with people…(More)”.


The Problem With Facts


Tim Harford: “…In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.

Agnotology has never been more important. “We live in a golden age of ignorance,” says Proctor today. “And Trump and Brexit are part of that.”

In the UK’s EU referendum, the Leave side pushed the false claim that the UK sent £350m a week to the EU. It is hard to think of a previous example in modern western politics of a campaign leading with a transparent untruth, maintaining it when refuted by independent experts, and going on to triumph anyway. That performance was soon to be eclipsed by Donald Trump, who offered wave upon shameless wave of demonstrable falsehood, only to be rewarded with the presidency. The Oxford Dictionaries declared “post-truth” the word of 2016. Facts just didn’t seem to matter any more.

The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts. Fact-checking organisations, such as Full Fact in the UK and PolitiFact in the US, evaluate prominent claims by politicians and journalists. I should confess a personal bias: I have served as a fact checker myself on the BBC radio programme More or Less, and I often rely on fact-checking websites. They judge what’s true rather than faithfully reporting both sides as a traditional journalist would. Public, transparent fact checking has become such a feature of today’s political reporting that it’s easy to forget it’s barely a decade old.

Mainstream journalists, too, are starting to embrace the idea that lies or errors should be prominently identified. Consider a story on the NPR website about Donald Trump’s speech to the CIA in January: “He falsely denied that he had ever criticised the agency, falsely inflated the crowd size at his inauguration on Friday…” It’s a bracing departure from the norms of American journalism, but then President Trump has been a bracing departure from the norms of American politics.

Facebook has also drafted in the fact checkers, announcing a crackdown on the “fake news” stories that had become prominent on the network after the election. Facebook now allows users to report hoaxes. The site will send questionable headlines to independent fact checkers, flag discredited stories as “disputed”, and perhaps downgrade them in the algorithm that decides what each user sees when visiting the site.

We need some agreement about facts or the situation is hopeless. And yet: will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.

In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?

Such tactics are now well documented — and researchers have carefully examined the psychological tendencies they exploited. So we should be able to spot their re-emergence on the political battlefield.

“It’s as if the president’s team were using the tobacco industry’s playbook,” says Jon Christensen, a journalist turned professor at the University of California, Los Angeles, who wrote a notable study in 2008 of the way the tobacco industry tugged on the strings of journalistic tradition.

One infamous internal memo from the Brown & Williamson tobacco company, typed up in the summer of 1969, sets out the thinking very clearly: “Doubt is our product.” Why? Because doubt “is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” Big Tobacco’s mantra: keep the controversy alive.

Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again.

Tempting as it is to fight lies with facts, there are three problems with that strategy….(More)”

iGod


Novel by Willemijn Dicke and Dirk Helbing: “iGod is a science fiction novel with heroes, love, defeat and hope. But it is much more than that. This book aims to explore how societies may develop, given the technologies that we see at present. As Dirk Helbing describes it in his introduction:

We have come to the conclusion that neither a scientific study nor an investigative report would allow one to talk about certain things that, we believe, need to be thought and talked about. So, a science fiction story appeared to be the right approach. It seems the perfect way to think “what if” scenarios through. It is not the first time that this avenue has been taken. George Orwell’s “1984” and “Animal Farm” come to mind, or Dave Eggers’ “The Circle”. The film ‘The Matrix’ and the Netflix series ‘Black Mirror’ are good examples too.

“iGod” outlines how life could be a couple of years from now, certainly in our lifetime. In some places, this story about our future society seems far-fetched. For example, in “iGod”, all citizens have a Social Citizen Score. This score is established based on their buying habits, their communication on social media and the social contacts they maintain. It is obtained by mass surveillance and has a major impact on everyone’s life. It determines whether you are entitled to get a loan, what jobs you are offered, and even how long you will receive medical care.

The book is set in the near future in Amsterdam, the Netherlands. Lex is an unemployed biologist. One day he is contacted by a computer which gradually reveals the machinery behind the reality we see. It is a bleak world. Together with his girlfriend Diana and Seldon, a professor at Amsterdam Tech, he starts the quest to regain freedom….(More) (Individual chapters)”

Does digital democracy improve democracy?


Thamy Pogrebinschi at Open Democracy: “The advancement of tools of information and communications technology (ICT) has the potential to impact democracy nearly as much as any other area, such as science or education. The effects of the digital world on politics and society are still difficult to measure, and the speed with which these new technological tools evolve is often faster than a scholar’s ability to assess them, or a policymaker’s capacity to make them fit into existing institutional designs.

Since their early inception, digital tools and widespread access to the internet have been changing the traditional means of participation in politics, making them more effective. Electoral processes have become more transparent and effective in several countries where the paper ballot has been replaced by electronic voting machines. Petition-signing became a widespread and powerful tool as individual citizens no longer needed to be approached in the streets to sign a sheet of paper, but could instead be simultaneously reached by the millions via e-mail and have their names added to virtual petition lists in seconds. Protests and demonstrations have also been immensely revitalized in the internet era. In the last few years, social networks like Facebook and WhatsApp have proved to be a driving force behind democratic uprisings, by mobilizing the masses, convening large gatherings, and raising awareness, as was the case with the Arab Spring.

While traditional means of political participation can become more effective as ICT tools reduce the costs of participation, one cannot yet be sure that they have become less subject to distortion and manipulation. In the most recent United States elections, computer scientists claimed that electronic voting machines may have been hacked, altering the results in the counties that relied on them. E-petitions can also be easily manipulated if safe identification procedures are not put in place. And in these times of post-facts and post-truths, protests and demonstrations can result from strategic partisan manipulation of social media, leading to democratic instability, as has recently occurred in Brazil. Nevertheless, distortion and manipulation of these traditional forms of participation were also present before the rise of ICT tools; and even if the latter do not solve these pre-existing problems, they may manage to make political processes more effective anyway.

The game-changer for democracy, however, is not the revitalization of the traditional means of political participation like elections, petition-signing and protests through digital tools. Rather, the real change on how democracy works, governments rule, and representation is delivered comes from entirely new means of e-participation, or the so-called digital democratic innovations. While the internet may boost traditional forms of political participation by increasing the quantity of citizens engaged, democratic innovations that rely on ICT tools may change the very quality of participation, thus in the long-run changing the nature of democracy and its institutions….(More)”

Watchdog to launch inquiry into misuse of data in politics


… and Alice Gibbs in The Guardian: “The UK’s privacy watchdog is launching an inquiry into how voters’ personal data is being captured and exploited in political campaigns, cited as a key factor in both the Brexit and Trump victories last year.

The intervention by the Information Commissioner’s Office (ICO) follows revelations in last week’s Observer that a technology company part-owned by a US billionaire played a key role in the campaign to persuade Britons to vote to leave the European Union.

It comes as privacy campaigners, lawyers, politicians and technology experts express fears that electoral laws are not keeping up with the pace of technological change.

“We are conducting a wide assessment of the data-protection risks arising from the use of data analytics, including for political purposes, and will be contacting a range of organisations,” an ICO spokeswoman confirmed. “We intend to publicise our findings later this year.”

The ICO spokeswoman confirmed that it had approached Cambridge Analytica over its apparent use of data following the story in the Observer. “We have concerns about Cambridge Analytica’s reported use of personal data and we are in contact with the organisation,” she said….

In the US, companies are free to use third-party data without seeking consent. But Gavin Millar QC, of Matrix Chambers, said this was not the case in Europe. “The position in law is exactly the same as when people would go canvassing from door to door,” Millar said. “They have to say who they are, and if you don’t want to talk to them you can shut the door in their face. That’s the same principle behind the Data Protection Act. It’s why if telephone canvassers ring you, they have to say that whole long speech. You have to identify yourself explicitly.”…

Dr Simon Moores, visiting lecturer in the applied sciences and computing department at Canterbury Christ Church University and a technology ambassador under the Blair government, said the ICO’s decision to shine a light on the use of big data in politics was timely.

“A rapid convergence in the data mining, algorithmic and granular analytics capabilities of companies like Cambridge Analytica and Facebook is creating powerful, unregulated and opaque ‘intelligence platforms’. In turn, these can have enormous influence to affect what we learn, how we feel, and how we vote. The algorithms they may produce are frequently hidden from scrutiny and we see only the results of any insights they might choose to publish.” …(More)”

The WhatsApp-inspired, Facebook-investor-funded app tackling India’s doctor shortage


…at TechInAsia: “A problem beyond India’s low doctor-to-patient ratio is the distribution of those doctors. Most, particularly specialists, congregate in bigger cities and see patients from the surrounding areas. Only 19 percent of specialists are available in community health centers across India, and most centers fall well below the country’s requirement for specialists. Community health centers are located in smaller towns and help patients in the area decide if they need to visit a larger, better-equipped city facility….

The IIT-Madras grad’s company, DocsApp, co-founded with fellow IIT-Madras alum Enbasekar D (CTO), joins startups like Practo, DocDoc, and Medinfi in helping patients find physicians. However, the app’s main focus is specialists, and it lets patients chat with doctors and get consultations.

DocsApp’s name is directly inspired by WhatsApp. As long as you have a chat screen on your phone, you can input your problems and location, find a doctor, and ask questions. A user can pay for his or her own appointment over mobile. If treatment requires a physical visit, the user’s money is refunded….

Doctor profiles include the physician’s experience, medical counsel ID, patient reviews, specialty, and languages – DocsApp covers 17 different languages. DocsApp has 1,200 doctors in 15 specialties. All doctors on the platform are verified by looking up certification, an interview, and a facilities review.

If a consultation reveals that a patient needs a prescription, the doctor can provide a digitally-signed e-prescription. DocsApp can deliver medicines within two days to any location in India, says Satish.

Once a user has access to one of the doctors, he or she can message the doctor 24/7 and get a response in 30 minutes – Satish says that the company’s average is now 18 minutes. The team of 55 is aiming for a minute or less….

Telemedicine is one of the ways tech is combatting India’s doctor shortage. Other startups in the industry in the country include Visit, which focuses on both physical and mental health, and SeeDoc, a physician video consultation app.

A chat is a little less personal than a physical visit, which can open the door for patients who want to discuss topics that are more taboo in India, like mental health and fertility questions. Satish adds that women who live in locations where it’s best to be accompanied by a man when going out also find it convenient, as they don’t necessarily need to wait for a husband to come back from work before addressing a medical question about their child…(More)”.

Restoring Trust in Expertise


Minouche Shafik at Project Syndicate: “…public confidence in experts is at a crossroads. With news becoming more narrowly targeted to individual interests and preferences, and with people increasingly choosing whom to trust and follow, the traditional channels for sharing expertise are being disrupted. Who needs experts when you have Facebook, Google, Mumsnet, and Twitter?

Actually, we all do. Over the course of human history, the application of expertise has helped tackle disease, reduce poverty, and improve human welfare. If we are to build on this progress, we need reliable experts to whom the public can confidently turn.

Restoring confidence requires, first, that those describing themselves as “experts” embrace uncertainty. Rather than pretending to be certain and risk frequently getting it wrong, commentators should be candid about uncertainty. Over the long term, such an approach will rebuild credibility. A good example of this is the use of “fan charts” in forecasts produced by the Bank of England’s Monetary Policy Committee (MPC), which show the wide range of possible outcomes for issues such as inflation, growth, and unemployment.

Yet conveying uncertainty increases the complexity of a message. This is a major challenge. It is easy to tweet “BoE forecasts 2% growth.” The fan chart’s true meaning – “If economic circumstances identical to today were to prevail on 100 occasions, the MPC’s best collective judgment is that the mature estimate of GDP growth would lie above 2% on 50 occasions and below 2% on 50 occasions” – doesn’t even fit within Twitter’s 140-character limit.

This underscores the need for sound principles and trustworthy practices to become more widespread as technology changes the way we consume information. Should journalists and bloggers be exposed for reporting or recirculating falsehoods or rumors? Perhaps principles and practices widely used in academia – such as peer review, competitive processes for funding research, transparency about conflicts of interests and financing sources, and requirements to publish underlying data – should be adapted and applied more widely to the world of think tanks, websites, and the media….

Schools and universities will have to do more to educate students to be better consumers of information. Striking research by the Stanford History Education Group, based on tests of thousands of students across the US, described as “bleak” their findings about young people’s ability to evaluate information they encounter online. Fact-checking websites appraising the veracity of claims made by public figures are a step in the right direction, and have some similarities to peer review in academia.

Listening to the other side is crucial. Social media exacerbates the human tendency of groupthink by filtering out opposing views. We must therefore make an effort to engage with opinions that are different from our own and resist algorithmic channeling to avoid difference. Perhaps technology “experts” could code algorithms that burst such bubbles.

Finally, the boundary between technocracy and democracy needs to be managed more carefully. Not surprisingly, when unelected individuals steer decisions that have huge social consequences, public resentment may not be far behind. Problems often arise when experts try to be politicians or politicians try to be experts. Clarity about roles – and accountability when boundaries are breached – is essential.

We need expertise more than ever to solve the world’s problems. The question is not how to manage without experts, but how to ensure that expertise is trustworthy. Getting this right is vital: if the future is not to be shaped by ignorance and narrow-mindedness, we need knowledge and informed debate more than ever before….(More)”.

Global Patterns of Synchronization in Human Communications


Alfredo J. Morales, Vaibhav Vavilala, Rosa M. Benito, and Yaneer Bar-Yam in the Journal of the Royal Society Interface: “Social media are transforming global communication and coordination and provide unprecedented opportunities for studying socio-technical domains. Here we study global dynamical patterns of communication on Twitter across many scales. Underlying the observed patterns is both the diurnal rotation of the earth, day and night, and the synchrony required for contingency of actions between individuals. We find that urban areas show a cyclic contraction and expansion that resembles heartbeats linked to social rather than natural cycles. Different urban areas have characteristic signatures of daily collective activities. We show that the differences detected are consistent with a new emergent global synchrony that couples behavior in distant regions across the world. Although local synchrony is the major force that shapes the collective behavior in cities, a larger-scale synchronization is beginning to occur….(More)”.
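The paper’s own methodology is not reproduced here, but a minimal sketch of the kind of analysis it describes – building an hourly activity “signature” for each city from timestamped posts and comparing signatures across cities as a rough measure of synchrony – could look like this. The toy data and column names are assumptions, not the authors’ dataset.

```python
# Sketch: hourly activity "signatures" per city from timestamped tweets,
# and pairwise correlation between cities as a crude synchrony measure.
# The dataframe columns ("city", "timestamp") are assumed, not the paper's schema.
import pandas as pd

tweets = pd.DataFrame({
    "city": ["New York", "New York", "London", "London", "Madrid", "Madrid"],
    "timestamp": pd.to_datetime([
        "2017-03-01 08:05", "2017-03-01 23:40",
        "2017-03-01 13:15", "2017-03-01 22:10",
        "2017-03-01 14:30", "2017-03-02 01:20",
    ]),
})

# Count tweets per (city, hour of day) and normalise to a 24-point signature.
tweets["hour"] = tweets["timestamp"].dt.hour
signatures = (
    tweets.groupby(["city", "hour"]).size()
    .unstack(fill_value=0)
    .reindex(columns=range(24), fill_value=0)
)
signatures = signatures.div(signatures.sum(axis=1), axis=0)

# Pairwise correlation of signatures: higher values suggest more synchronised
# daily rhythms between two cities (real analyses would also handle time zones).
print(signatures.T.corr())
```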

When the Big Lie Meets Big Data


Peter Bruce in Scientific American: “…The science of predictive modeling has come a long way since 2004. Statisticians now build “personality” models and tie them into other predictor variables. … One such model bears the acronym “OCEAN,” standing for the personality characteristics (and their opposites) of openness, conscientiousness, extroversion, agreeableness, and neuroticism. Using Big Data at the individual level, machine learning methods might classify a person as, for example, “closed, introverted, neurotic, not agreeable, and conscientious.”
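To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such trait classification might work in principle: a model is trained on a seed panel whose trait labels are known (for example, from personality questionnaires) and whose behavioural data are available, then applied to score everyone else. The feature names, data and model choice below are assumptions for illustration, not a description of any firm’s actual system.

```python
# Illustrative sketch only: maps hypothetical behavioural features to a single
# OCEAN trait label. Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical behavioural features per user: [likes_per_day, posts_per_day,
# share_of_political_pages_liked, avg_post_length, late_night_activity_ratio]
X = rng.random((1000, 5))
# Hypothetical labels: 1 = "high neuroticism", 0 = "low neuroticism", as might
# be derived from questionnaires answered by a seed panel of users.
y = (X[:, 4] + 0.3 * rng.random(1000) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# A campaign could then rank unseen voters with model.predict_proba(...)
# and tailor messaging to the predicted trait profile.
```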

Alexander Nix, CEO of Cambridge Analytica (owned by Trump’s chief donor, Rebekah Mercer), says he has thousands of data points on you, and every other voter: what you buy or borrow, where you live, what you subscribe to, what you post on social media, etc. At a recent Concordia Summit, using the example of gun rights, Nix described how messages will be crafted to appeal specifically to you, based on your personality profile. Are you highly neurotic and conscientious? Nix suggests the image of a sinister gloved hand reaching through a broken window.

In his presentation, Nix noted that the goal is to induce behavior, not communicate ideas. So where does truth fit in? Johan Ugander, Assistant Professor of Management Science at Stanford, suggests that, for Nix and Cambridge Analytica, it doesn’t. In counseling the hypothetical owner of a private beach how to keep people off his property, Nix eschews the merely factual “Private Beach” sign, advocating instead a lie: “Sharks sighted.” Ugander, in his critique, cautions all data scientists against “building tools for unscrupulous targeting.”

The warning is needed, but may be too late. What Nix described in his presentation involved carefully crafted messages aimed at his target personalities. His messages pulled subtly on various psychological strings to manipulate us, and they obeyed no boundary of truth, but they required humans to create them.  The next phase will be the gradual replacement of human “craftsmanship” with machine learning algorithms that can supply targeted voters with a steady stream of content (from whatever source, true or false) designed to elicit desired behavior. Cognizant of the Pandora’s box that data scientists have opened, the scholarly journal Big Data has issued a call for papers for a future issue devoted to “Computational Propaganda.”…(More)”

Facebook artificial intelligence spots suicidal users


Leo Kelion at BBC News: “Facebook has begun using artificial intelligence to identify members that may be at risk of killing themselves.

The social network has developed algorithms that spot warning signs in users’ posts and the comments their friends leave in response.

After confirmation by Facebook’s human review team, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help.

A suicide helpline chief said the move was “not just helpful but critical”.

The tool is being tested only in the US at present.

It marks the first use of AI technology to review messages on the network since founder Mark Zuckerberg announced last month that he also hoped to use algorithms to identify posts by terrorists, among other concerning content.

Facebook also announced new ways to tackle suicidal behaviour on its Facebook Live broadcast tool and has partnered with several US mental health organisations to let vulnerable users contact them via its Messenger platform.

Pattern recognition

Facebook has offered advice to users thought to be at risk of suicide for years, but until now it had relied on other users to bring the matter to its attention by clicking on a post’s report button.

It has now developed pattern-recognition algorithms to recognise if someone is struggling, by training them with examples of the posts that have previously been flagged.

Talk of sadness and pain, for example, would be one signal.

Responses from friends with phrases such as “Are you OK?” or “I’m worried about you,” would be another.
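Facebook has not published its model, but a minimal sketch of the general approach described here – training a text classifier on examples of previously flagged posts and concerned replies, then scoring new posts for human review – might look like the following. The example phrases, labels and library choices are assumptions for illustration only.

```python
# Toy sketch of training a text classifier on previously flagged posts.
# The tiny example data below is invented; a real system would need large,
# carefully curated training sets and human review of every prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't take this pain anymore",        # previously flagged
    "Everything feels hopeless lately",      # previously flagged
    "Are you OK? I'm worried about you",     # concerned friend reply
    "Great game last night, what a finish",  # not flagged
    "Looking forward to the holidays",       # not flagged
    "Just finished a great book",            # not flagged
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = send for human review, 0 = no action

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; anything above a chosen threshold would go to the
# human review team rather than triggering any automatic action.
new_post = ["I'm worried about you, please call me"]
print(model.predict_proba(new_post)[0][1])
```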

Once a post has been identified, it is sent for rapid review to the network’s community operations team.

“We know that speed is critical when things are urgent,” Facebook product manager Vanessa Callison-Burch told the BBC.

The director of the US National Suicide Prevention Lifeline praised the effort, but said he hoped Facebook would eventually do more than give advice, by also contacting those that could help….

The latest effort to help Facebook Live users follows the death of a 14-year-old-girl in Miami, who livestreamed her suicide on the platform in January.

However, the company said it had already begun work on its new tools before the tragedy.

The goal is to help at-risk users while they are broadcasting, rather than wait until their completed video has been reviewed some time later….(More)”.