Can Social Media Help Build Communities?


Paper by Eric Forbush and Nicol Turner-Lee: “In June 2017, Mark Zuckerberg proclaimed a new mission for Facebook, which was to “[g]ive people the power to build community and bring the world closer together” during the company’s first Community Summit. Yet his declaration comes against the backdrop of a politically polarized America. While research has indicated that ideological polarization (the alignment and divergence of ideologies) has remained relatively unchanged, affective polarization (the degree to which Democrats and Republicans dislike each other) has skyrocketed (Gentzkow, 2016; Lelkes, 2016). This dislike for members of the opposite party may be amplified on social media platforms.
Social media have been accused of making our social networks increasingly insular, resulting in “echo chambers,” wherein individuals select information and friends who support their already-held beliefs (Quattrociocchi, Scala, and Sunstein, 2016; Williams, McMurray, Kurz, and Lambert, 2015). However, the implicit message in Zuckerberg’s comments, and those of other leaders in this space, is that social media can provide users with a means for brokering relationships with other users who hold different values and beliefs from their own. Yet little is known about the extent to which social media platforms enable these opportunities.

Theories of prejudice reduction (Paluck and Green, 2009) partially explain an idealistic outcome of improved online relationships. In his seminal contact theory, Gordon Allport (1954) argued that under certain optimal conditions, all that is needed to reduce prejudice is for members of different groups to spend more time interacting with each other. However, contemporary social media platforms may not be doing enough to increase intergroup engagements, especially between politically polarized communities on issues of importance.

In this paper, we use Twitter data collected over a 20-day period following the Day of Action for Net Neutrality on July 12, 2017. Organized around a highly polarized regulatory issue, the Day of Action brought together advocacy groups and corporations in support of an open internet, which does not discriminate against online users when accessing their preferred content. Analyzing 81,316 tweets about #netneutrality from 40,502 distinct users, we use social network analysis to develop network visualizations and conduct discrete content analysis of central tweets. Our research also divides the content by those in support of and those opposed to any type of repeal of net neutrality rules by the FCC.
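To make the method concrete, the following is a minimal sketch of the kind of retweet-network analysis the abstract describes, written in Python with networkx. The edge list, user names, and the use of in-degree centrality to surface central accounts are illustrative assumptions, not the authors' actual pipeline:

```python
# Illustrative sketch of a retweet-network analysis; the data and the
# centrality measure are assumptions, not the paper's actual method.
import networkx as nx

# Each edge: (user who retweeted, user whose tweet was retweeted)
retweets = [
    ("user_a", "user_b"),
    ("user_c", "user_b"),
    ("user_d", "user_e"),
    ("user_a", "user_e"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Frequently retweeted accounts surface as high in-degree nodes,
# the kind of "central tweets" a content analysis would then examine.
centrality = nx.in_degree_centrality(G)
for user, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(user, round(score, 2))
```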

Our analysis of this particular issue reveals that social media is merely replicating, and potentially strengthening, polarization on issues along lines of party affiliation and online association. Mediators able to bridge online conversations or beliefs on charged issues appear to be nonexistent on both sides of the issue. Consequently, our findings suggest that social media companies may not be doing enough to bring communities together through meaningful conversations on their platforms….(More)”.

Lessons from Cambridge Analytica: one way to protect your data


Julia Apostle in the Financial Times: “The unsettling revelations about how data firm Cambridge Analytica surreptitiously exploited the personal information of Facebook users is yet another demoralising reminder of how much data has been amassed about us, and of how little control we have over it.

Unfortunately, the General Data Protection Regulation privacy laws that are coming into force across Europe — with more demanding consent, transparency and accountability requirements, backed by huge fines — may improve practices, but they will not change the governing paradigm: the law labels those who gather our data as “controllers”. We are merely “subjects”.

But if the past 20 years have taught us anything, it is that when business and legislators have been too slow to adapt to public demand — for goods and services that we did not even know we needed, such as Amazon, Uber and bitcoin — computer scientists have stepped in to fill the void. And so it appears that the realms of data privacy and security are deserving of some disruption. This might come in the form of “self-sovereign identity” systems.

The theory behind self-sovereign identity is that individuals should control the data elements that form the basis of their digital identities, and not centralised authorities such as governments and private companies. In the current online environment, we all have multiple log-ins, usernames, customer IDs and personal data spread across countless platforms and stored in myriad repositories.

Instead of this scattered approach, we should each possess the digital equivalent of a wallet that contains verified pieces of our identities. We can then choose which identification to share, with whom, and when. Self-sovereign identity systems are currently being developed.

They involve the creation of a unique and persistent identifier attributed to an individual (called a decentralised identity), which cannot be taken away. The systems use public/private key cryptography, which enables a user with a private key (a string of numbers) to share information with unlimited recipients who can access the encrypted data if they possess a corresponding public key.
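The description above compresses the cryptography considerably; in most self-sovereign identity designs, the private key is used to sign identity claims, which anyone holding the matching public key can verify. Below is a minimal sketch using Python's cryptography library, assuming an Ed25519 key pair; the claim format is a made-up illustration, not any particular identity system's credential format:

```python
# Minimal sketch of signing and verifying an identity claim with a
# public/private key pair (Ed25519). Illustrative only: real
# self-sovereign identity systems add key registries, revocation,
# and standardised credential formats.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # held only by the user
public_key = private_key.public_key()               # shared with verifiers

claim = b"date_of_birth=1990-01-01"        # a hypothetical identity claim
signature = private_key.sign(claim)

# A verifier holding the public key confirms the claim is untampered;
# verify() raises InvalidSignature if the data or signature was altered.
public_key.verify(signature, claim)
print("claim verified")
```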

The systems also rely on decentralised ledger applications like blockchain. While public-key cryptography has been around for a long time, it is the development of decentralised ledger technology, which also supports the trading of cryptocurrencies without the involvement of intermediaries, that will allow self-sovereign identity systems to take off. The potential uses for decentralised identity are legion and small-scale implementation is already happening. The Swiss municipality of Zug started using a decentralised identity system called uPort last year to allow residents access to certain government services. The municipality announced it will also use the system for voting this spring….

Decentralised identity data is more difficult to access, so there is less financial incentive for hackers to try. Self-sovereign identity systems could eliminate many of our data privacy concerns while empowering individuals in the online world and turning the established data order on its head. But the success of the technology depends on its widespread adoption….(More)”.

Psychographics: the behavioural analysis that helped Cambridge Analytica know voters’ minds


Michael Wade at The Conversation: “Much of the discussion has been on how Cambridge Analytica was able to obtain data on more than 50m Facebook users – and how it allegedly failed to delete this data when told to do so. But there is also the matter of what Cambridge Analytica actually did with the data. In fact the data crunching company’s approach represents a step change in how analytics can today be used as a tool to generate insights – and to exert influence.

For example, pollsters have long used segmentation to target particular groups of voters, such as through categorising audiences by gender, age, income, education and family size. Segments can also be created around political affiliation or purchase preferences. The data analytics machine that presidential candidate Hillary Clinton used in her 2016 campaign – named Ada after the 19th-century mathematician and early computing pioneer – used state-of-the-art segmentation techniques to target groups of eligible voters in the same way that Barack Obama had done four years previously.

Cambridge Analytica was contracted to the Trump campaign and provided an entirely new weapon for the election machine. While it also used demographic segments to identify groups of voters, as Clinton’s campaign had, Cambridge Analytica also segmented using psychographics. As definitions of class, education, employment, age and so on, demographics are informational. Psychographics are behavioural – a means to segment by personality.

This makes a lot of sense. It’s obvious that two people with the same demographic profile (for example, white, middle-aged, employed, married men) can have markedly different personalities and opinions. We also know that adapting a message to a person’s personality – whether they are open, introverted, argumentative, and so on – goes a long way towards getting that message across….

There have traditionally been two routes to ascertaining someone’s personality. You can either get to know them really well – usually over an extended time. Or you can get them to take a personality test and ask them to share it with you. Neither of these methods is realistically open to pollsters. Cambridge Analytica found a third way, with the assistance of two University of Cambridge academics.

The first, Aleksandr Kogan, sold them access to 270,000 personality tests completed by Facebook users through an online app he had created for research purposes. Providing the data to Cambridge Analytica was, it seems, against Facebook’s internal code of conduct, but only now in March 2018 has Kogan been banned by Facebook from the platform. Kogan’s data also came with a bonus: he had reportedly collected Facebook data from the test-takers’ friends – and, at an average of 200 friends per person, that added up to some 50m people.

However, these 50m people had not all taken personality tests. This is where the second Cambridge academic, Michal Kosinski, came in. Kosinski – who is said to believe that micro-targeting based on online data could strengthen democracy – had figured out a way to reverse engineer a personality profile from Facebook activity such as likes. Whether you choose to like pictures of sunsets, puppies or people apparently says a lot about your personality. So much, in fact, that on the basis of 300 likes, Kosinski’s model is able to predict someone’s personality profile with the same accuracy as a spouse….(More)”
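Kosinski's published models were trained on millions of users, with dimensionality reduction over the like matrix; the sketch below is only a toy version of the underlying idea (regressing a trait score on a binary user-by-like matrix), with fabricated data and no claim to the accuracy figures cited above:

```python
# Toy illustration of predicting a personality trait from likes:
# regress a trait score against a binary user x like matrix.
# All data here are fabricated for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_users, n_likes = 200, 50
X = rng.integers(0, 2, size=(n_users, n_likes))  # 1 = user liked page j

true_weights = rng.normal(size=n_likes)          # hidden like-trait links
openness = X @ true_weights + rng.normal(scale=0.5, size=n_users)

# Fit on 150 users, evaluate on the held-out 50.
model = Ridge(alpha=1.0).fit(X[:150], openness[:150])
print("held-out R^2:", round(model.score(X[150:], openness[150:]), 2))
```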

Cambridge Analytica scandal: legitimate researchers using Facebook data could be collateral damage


 at The Conversation: “The scandal that has erupted around Cambridge Analytica’s alleged harvesting of 50m Facebook profiles assembled from data provided by a UK-based academic and his company is a worrying development for legitimate researchers.

Political data analytics company Cambridge Analytica – which is affiliated with Strategic Communication Laboratories (SCL) – reportedly used Facebook data, after it was handed over by Aleksandr Kogan, a lecturer at the University of Cambridge’s department of psychology.

Kogan, through his company Global Science Research (GSR) – separate from his university work – gleaned the data from a personality test app named “thisisyourdigitallife”. Roughly 270,000 US-based Facebook users voluntarily responded to the test in 2014. But the app also collected data on those participants’ Facebook friends without their consent.

This was possible due to Facebook rules at the time that allowed third-party apps to collect data about a Facebook user’s friends. The Mark Zuckerberg-run company has since changed its policy to prevent developers from gaining such access….

Social media data is a rich source of information for many areas of research in psychology, technology, business and humanities. Some recent examples include using Facebook to predict riots, comparing the use of Facebook with body image concern in adolescent girls and investigating whether Facebook can lower levels of stress responses, with research suggesting that it may enhance and undermine psycho-social constructs related to well-being.

It is right to believe that researchers and their employers value research integrity. But instances where trust has been betrayed by an academic – even if data used for university research purposes wasn’t caught in the crossfire – will have a negative impact on whether participants will continue to trust researchers. It also has implications for research governance and for companies’ willingness to share data with researchers in the first place.

Universities, research organisations and funders govern the integrity of research with clear and strict ethics procedures designed to protect participants in studies, such as where social media data is used. The harvesting of data without permission from users is considered an unethical activity under commonly understood research standards.

The fallout from the Cambridge Analytica controversy is potentially huge for researchers who rely on social networks for their studies, where data is routinely shared with them for research purposes. Tech companies could become more reluctant to share data with researchers. Facebook is already extremely protective of its data – the worry is that it could become doubly difficult for researchers to legitimately access this information in light of what has happened with Cambridge Analytica….(More)”.

How Refugees Are Helping Create Blockchain’s Brand New World


Jessi Hempel at Wired: “Though best known for underpinning volatile cryptocurrencies, like Bitcoin and Ethereum, blockchain technology has a number of qualities that make it appealing for record-keeping. A distributed ledger doesn’t depend on a central authority to verify its existence, or to facilitate transactions within it, which makes it less vulnerable to tampering. By using applications that are built on the ‘chain, individuals may be able to build up records over time and use those records across borders as a form of identity—essentially creating the trust they need to interact with the world, without depending on a centralized authority, like a government or a bank, to vouch for them.
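The tamper resistance described above comes from hash-chaining: each record embeds the hash of the one before it, so altering any entry invalidates every later link. The toy ledger below, a sketch under the assumption of a single local chain, illustrates just that property; real blockchains add the consensus, signatures, and peer-to-peer distribution that make the ledger genuinely decentralized:

```python
# Toy hash-chained ledger illustrating tamper evidence. A sketch only:
# real blockchains add consensus, signatures, and distribution.
import hashlib
import json

GENESIS = "0" * 64

def record_hash(payload, prev_hash):
    body = json.dumps({"payload": payload, "prev_hash": prev_hash},
                      sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    chain.append({"payload": payload, "prev_hash": prev_hash,
                  "hash": record_hash(payload, prev_hash)})

def verify(chain):
    prev_hash = GENESIS
    for rec in chain:
        if rec["hash"] != record_hash(rec["payload"], prev_hash):
            return False
        prev_hash = rec["hash"]
    return True

ledger = []
add_record(ledger, {"id": "refugee_42", "voucher_usd": 50})
add_record(ledger, {"id": "refugee_42", "voucher_usd": 30})
print(verify(ledger))                      # True
ledger[0]["payload"]["voucher_usd"] = 500  # tamper with an early entry
print(verify(ledger))                      # False: the chain breaks
```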

For now, these efforts are small experiments. In Finland, the Finnish Immigration Service offers refugees a prepaid Mastercard developed by the Helsinki-based startup MONI that also links to a digital identity, composed of the record of one’s financial transactions, which is stored on the blockchain. In Moldova, the government is working with digital identification experts from the United Nations Office for Project Services (UNOPS) to brainstorm ways to use blockchain to provide children living in rural areas with a digital identity, so it’s more difficult for traffickers to smuggle them across borders.

Among the more robust programs is a pilot the United Nations World Food Program (WFP) launched in Jordan last May. Syrian refugees stationed at the Azraq Refugee Camp receive vouchers to shop at the local grocery store. The WFP integrated blockchain into its biometric authentication technology, so Syrian refugees can cash in their vouchers at the supermarket by staring into a retina scanner. These transactions are recorded on a private Ethereum-based blockchain, called Building Blocks. Because the blockchain eliminates the need for WFP to pay banks to facilitate transactions, Building Blocks could save the WFP as much as $150,000 each month in bank fees in Jordan alone. The program has been so successful that by the end of the year, the WFP plans to expand the technology throughout Jordan. Blockchain enthusiasts imagine a future in which refugees can access more than just food vouchers, accumulating a transaction history that could stand in as a credit history when they attempt to resettle….

But in the rush to apply blockchain technology to every problem, many point out that relying on the ledger may have unintended consequences. As the Blockchain for Social Impact chief technology officer at ConsenSys, Robert Greenfeld IV writes, blockchain-based identity “isn’t a silver bullet, and if we don’t think about it/build it carefully, malicious actors could still capitalize on it as an element of control.” If companies rely on private blockchains, he warns, there’s a danger that the individual permissions will prevent these identity records from being used in multiple places. (Many of these projects, like the UNWFP project, are built on private blockchains so that organizations can exert more control over their development.) “If we don’t start to collaborate together with populations, we risk ending up with a bunch of siloed solutions,” says Greenfeld.

For his part, Greenfeld suggests governments could easily use state-sponsored machine learning algorithms to monitor public blockchain activity. But as bitcoin enthusiasts branch out of their get-rich-quick schemes to wrestle with how to make the web more equitable for everyone, they have the power to craft a world of their own devising. The early web should be a lesson to the bitcoin enthusiasts as they promote the blockchain’s potential. Right now we have the power to determine its direction; the dangers exist, but the potential is enormous….(More)”

Artificial Intelligence and the Need for Data Fairness in the Global South


Medium blog by Yasodara Cordova: “…The data collected by industry represents AI opportunities for governments, to improve their services through innovation. Data-based intelligence promises to increase the efficiency of resource management by improving transparency, logistics, social welfare distribution — and virtually every government service. E-government enthusiasm took off with the realization of the possible applications, such as using AI to fight corruption by automating the fraud-tracking capabilities of cost-control tools. Controversially, the AI enthusiasm has spread to the distribution of social benefits, optimization of tax oversight and control, credit scoring systems, crime prediction systems, and other applications based on personal and sensitive data collection, especially in countries that do not have comprehensive privacy protections.

There are so many potential applications that society may operate very differently in ten years, when the “datafixation” has advanced beyond citizen data and into other applications such as energy and natural resource management. However, many countries in the Global South are not being given the necessary access to their own data.

Useful data are everywhere, but only some can take advantage of them. Beyond smartphones, data can be collected from IoT components in common spaces. Not restricted to urban spaces, data collection includes rural technology like sensors installed in tractors. However, even when the information is related to issues of public importance in developing countries — like data taken from road mesh or vital resources like water and land — it stays hidden under contract rules, and public citizens cannot access, and therefore benefit from, it. This arrangement keeps the public uninformed about their country’s operations. The data collection and distribution frameworks are not built towards healthy partnerships between industry and government, preventing countries from realizing the potential outlined in the previous paragraph.

The data necessary to the development of better cities, public policies, and common interest cannot be leveraged if kept in closed silos, yet access often costs more than is justifiable. Data are a foundational resource for all stages of new technology, especially tech adoption and integration, so the necessary long-term investment in innovation needs a common ground to start with. The mismatch between the pace of data collection among big established companies and small, new, and local businesses will likely increase with time, assuming no regulation is introduced for equal access to collected data….

Currently, data independence remains restricted to discussions on the technological infrastructure that supports data extraction. Privacy discussions focus on personal data rather than the digital accumulation of strategic data in closed silos — a necessary discussion not yet addressed. The national interest in data is not being addressed in a framework of economic and social fairness. Access to data, from a policy-making standpoint, needs to find a balance between the extremes of public, open access and limited, commercial use.

A final, but important note: the vast majority of social media platforms act like silos. APIs play an important role in corporate business models, where industry controls the data it collects while offering users neither reward nor transparency. Negotiation of the specification of APIs to make data a common resource should be considered, for such an effort may align with citizens’ interests….(More)”.

International Development Doesn’t Care About Patient Privacy


Yogesh Rajkotia at the Stanford Social Innovation Review: “In 2013, in southern Mozambique, foreign NGO workers searched for a man whom the local health facility reported as diagnosed with HIV. The workers aimed to verify that the health facility did indeed diagnose and treat him. When they could not find him, they asked the village chief for help. Together with an ever-growing crowd of onlookers, the chief led them to the man’s home. After hesitating and denying, he eventually admitted, in front of the crowd, that he had tested positive and received treatment. With his status made public, he now risked facing stigma, discrimination, and social marginalization. The incident undermined both his health and his ability to live a dignified life.

Similar privacy violations were documented in Burkina Faso in 2016, where community workers asked partners, in the presence of each other, to disclose what individual health services they had obtained.

Why was there such a disregard for the privacy and dignity of these citizens?

As it turns out, unbeknownst to these Mozambican and Burkinabé patients, their local health centers were participating in performance-based financing (PBF) programs financed by foreign assistance agencies. Implemented in more than 35 countries, PBF programs offer health workers financial bonuses for delivering priority health interventions. To ensure that providers do not cheat the system, PBF programs often send verifiers to visit patients’ homes to confirm that they have received specific health services. These verifiers are frequently community members (the World Bank callously notes in its “Performance-Based Financing Toolkit” that even “a local soccer club” can play this role), and this practice, known as “patient tracing,” is common among PBF programs. In World Bank-funded PBF programs alone, 19 of the 25 programs implement patient tracing. Yet the World Bank’s toolkit never mentions patient privacy or confidentiality. In patient tracing, patients’ rights and dignity are secondary to donor objectives.

Patient tracing within PBF programs is just one example of a bigger problem: Privacy violations are pervasive in global health. Some researchers and policymakers have raised privacy concerns about tuberculosis (TB), human immunodeficiency virus (HIV), family planning, post-abortion care, and disease surveillance programs. A study conducted by the Asia-Pacific Network of People Living with HIV/AIDS found that 34 percent of people living with HIV in India, Indonesia, Philippines, and Thailand reported that health workers breached confidentiality. In many programs, sensitive information about people’s sexual and reproductive health, disease status, and other intimate health details is often collected to improve health system effectiveness and efficiency. Usually, households have no way to opt out, nor any control over how health care programs use, store, and disseminate this data. At the same time, most programs do not have systems to enforce health workers’ non-disclosure of private information.

In societies with strong stigma around certain health topics—especially sexual and reproductive health—the disclosure of confidential patient information can destroy lives. In contexts where HIV is highly stigmatized, people living with HIV are 2.4 times more likely to delay seeking care until they are seriously ill. In addition to stigma’s harmful effects on people’s health, it can limit individuals’ economic opportunities, cause them to be socially marginalized, and erode their psychological wellbeing….(More)”.

As If: Idealization and Ideals


Book by Kwame Anthony Appiah: “Idealization is a fundamental feature of human thought. We build simplified models in our scientific research and utopias in our political imaginations. Concepts like belief, desire, reason, and justice are bound up with idealizations and ideals. Life is a constant adjustment between the models we make and the realities we encounter. In idealizing, we proceed “as if” our representations were true, while knowing they are not. This is not a dangerous or distracting occupation, Kwame Anthony Appiah shows. Our best chance of understanding nature, society, and ourselves is to open our minds to a plurality of imperfect depictions that together allow us to manage and interpret our world.

The philosopher Hans Vaihinger first delineated the “as if” impulse at the turn of the twentieth century, drawing on Kant, who argued that rational agency required us to act as if we were free. Appiah extends this strategy to examples across philosophy and the human and natural sciences. In a broad range of activities, we have some notion of the truth yet continue with theories that we recognize are, strictly speaking, false. From this vantage point, Appiah demonstrates that a picture one knows to be unreal can be a vehicle for accessing reality.

As If explores how strategic untruth plays a critical role in far-flung areas of inquiry: decision theory, psychology, natural science, and political philosophy. A polymath who writes with mainstream clarity, Appiah defends the centrality of the imagination not just in the arts but in science, morality, and everyday life…(More)”.

Law, Metaphor, and the Encrypted Machine


Paper by Lex Gill: “The metaphors we use to imagine, describe and regulate new technologies have profound legal implications. This paper offers a critical examination of the metaphors we choose to describe encryption technology in particular, and aims to uncover some of the normative and legal implications of those choices.

Part I provides a basic description of encryption as a mathematical and technical process. At the heart of this paper is a question about what encryption is to the law. It is therefore fundamental that readers have a shared understanding of the basic scientific concepts at stake. This technical description will then serve to illustrate the host of legal and political problems arising from encryption technology, the most important of which are addressed in Part II. That section also provides a brief history of various legislative and judicial responses to the encryption “problem,” mapping out some of the major challenges still faced by jurists, policymakers and activists. While this paper draws largely upon common law sources from the United States and Canada, metaphor provides a core form of cognitive scaffolding across legal traditions. Part III explores the relationship between metaphor and the law, demonstrating the ways in which it may shape, distort or transform the structure of legal reasoning. Part IV demonstrates that the function served by legal metaphor is particularly determinative wherever the law seeks to integrate novel technologies into old legal frameworks. Strong, ubiquitous commercial encryption has created a range of legal problems for which the appropriate metaphors remain unfixed. Part V establishes a loose framework for thinking about how encryption has been described by courts and lawmakers — and how it could be. What does it mean to describe the encrypted machine as a locked container or building? As a combination safe? As a form of speech? As an untranslatable library or an unsolvable puzzle? What is captured by each of these cognitive models, and what is lost? This section explores both the technological accuracy and the legal implications of each choice. Finally, the paper offers a few concluding thoughts about the utility and risk of metaphor in the law, reaffirming the need for a critical, transparent and lucid appreciation of language and the power it wields….(More)”.

Co-creation in Urban Governance: From Inclusion to Innovation


Paper by Dorthe Hedensted Lund: “This article sets out to establish what we mean by the recent buzzword ‘co-creation’ and what practical application this concept entails for democracy in urban governance, both in theory and practice.

The rise of the concept points to a shift in how public participation is understood. Whereas from the 1970s onwards the discussions surrounding participation centred on rights and power, following Sherry Arnstein, participation conceptualised as co-creation instead focuses on including diverse forms of knowledge in urban processes in order to create innovative solutions to complex problems.

Consequently, democratic legitimacy now relies to a much greater extent on output, rather than input, legitimacy. Rather than the provision of inclusive spaces for democratic debate and the empowerment of the deprived, which have been the goals of numerous urban participatory efforts in the past, it is the ability to solve complex problems that has become the main criterion for the evaluation of co-creation. Furthermore, conceptualising participation as co-creation has consequences for the roles available to both citizens and public administrators in urban processes, which has implications for urban governance. An explicit debate, both in academia and in practice, about the normative content and implications of conceptualising participation as co-creation is therefore salient and necessary….(More)”.