Friendship, Robots, and Social Media: False Friends and Second Selves


Book by Alexis M. Elder: “Various emerging technologies, from social robotics to social media, appeal to our desire for social interactions, while avoiding some of the risks and costs of face-to-face human interaction. But can they offer us real friendship? In this book, Alexis Elder outlines a theory of friendship drawing on Aristotle and contemporary work on social ontology, and then uses it to evaluate the real value of social robotics and emerging social technologies.

In the first part of the book Elder develops a robust and rigorous ontology of friendship: what it is, how it functions, what harms it, and how it relates to familiar ethical and philosophical questions about character, value, and well-being. In Part II she applies this ontology to emerging trends in social robotics and human-robot interaction, including robotic companions for lonely seniors, therapeutic robots used to teach social skills to children on the autism spectrum, and companionate robots currently being developed for consumer markets. Elder articulates the moral hazards presented by these robots, while at the same time acknowledging their real and measurable benefits. In the final section she shifts her focus to connections between real people, especially those enabled by social media. Arguing against critics who have charged that these new communication technologies are weakening our social connections, Elder explores ways in which text messaging, video chats, Facebook, and Snapchat are enabling us to develop, sustain, and enrich our friendships in new and meaningful ways….(More)”.

Humanitarian group uses blockchain tech to give Rohingya digital ID cards


Techwire Asia: “A non-governmental organization (NGO) is using blockchain technology to provide stateless Rohingya refugees who fled Burma (Myanmar) with digital identity cards, in a pilot project aimed at giving them access to services like banking and education.

The first 1,000 people to benefit from the project in 2018 will be members of the diaspora in Malaysia, Bangladesh and Saudi Arabia, decades-old safe havens for the Rohingya, who are the world’s biggest stateless minority.

“They are disenfranchised,” Kyri Andreou, co-founder of The Rohingya Project, which is organising the initiative, said at its launch in Kuala Lumpur on Wednesday.

“They are shut out. One of the key aspects is because of the lack of identification.”

More than 650,000 Rohingya Muslims – who are denied citizenship in Buddhist-majority Burma – have fled to Bangladesh since August after attacks by insurgents triggered a response by Burma’s army and Buddhist vigilantes….

According to The Sun, Muhammad Noor said the project focuses on two aspects – identity and opportunity – with the system providing the first verified census data on the Rohingya across the world.

Individual Rohingya, he said, shall have their ancestry authentically identified to link them directly to their original land of dispersion…(More)”.
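The articles above do not describe The Rohingya Project's actual data model, but the general pattern behind blockchain identity schemes can be sketched: a personal record is put into a canonical form and hashed, and only the hash is anchored on a ledger, so the record can later be verified without publishing the personal data itself. All field names and values below are hypothetical, not the project's real schema.

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Canonicalise a record as sorted-key JSON, then hash it with SHA-256."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical identity record; fields are illustrative only.
record = {
    "name": "Example Person",
    "ancestry_region": "Rakhine",
    "verified_by": "community-attestation",
}

digest = record_digest(record)

# The digest, not the personal data, is what would be anchored on a ledger;
# any later change to the record yields a different digest, so tampering
# is detectable without exposing the underlying information.
tampered = dict(record, ancestry_region="Elsewhere")
assert record_digest(tampered) != digest
```

Because canonicalisation sorts keys and fixes separators, two parties holding the same record always compute the same digest, which is what makes the on-ledger anchor useful for verification.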

Psychopolitics: Neoliberalism and New Technologies of Power


Review by Stuart Jeffries of new book by Byung-Chul Han: “During a commercial break in the 1984 Super Bowl, Apple broadcast an ad directed by Ridley Scott. Glum, grey workers sat in a vast grey hall listening to Big Brother’s declamations on a huge screen. Then a maverick athlete-cum-Steve-Jobs-lackey hurled a sledgehammer at the screen, shattering it and bathing workers in healing light. “On January 24th,” the voiceover announced, “Apple Computer will introduce the Macintosh. And you’ll see why 1984 won’t be like [Orwell’s] Nineteen Eighty-Four.”

The ad’s idea, writes Korean-born German philosopher Byung-Chul Han, was that the Apple Mac would liberate the downtrodden masses from the totalitarian surveillance state. And indeed, the subsequent rise of Apple, the internet, Twitter, Facebook, Amazon and Google Glass means that today we live in nothing like the nightmare Orwell imagined. After all, Big Brother needed electroshock, sleep deprivation, solitary confinement, drugs and hectoring propaganda broadcasts to keep power, while his Ministry of Plenty kept consumer goods in short supply to hold subjects in an artificial state of need.

The new surveillance society that has arisen since 1984, argues Han, works differently yet is more elegantly totalitarian and oppressive than anything described by Orwell or Jeremy Bentham. “Confession obtained by force has been replaced by voluntary disclosure,” he writes. “Smartphones have been substituted for torture chambers.” Well, not quite. Torture chambers still exist, it’s just that we in the neoliberal west have outsourced them (thanks, rendition flights) so that that obscenity called polite society can pretend they don’t exist.

Nonetheless, what capitalism realised in the neoliberal era, Han argues, is that it didn’t need to be tough, but rather seductive. This is what he calls smartpolitics. Instead of saying no, it says yes: instead of denying us with commandments, discipline and shortages, it seems to allow us to buy what we want when we want, become what we want and realise our dream of freedom. “Instead of forbidding and depriving it works through pleasing and fulfilling. Instead of making people compliant, it seeks to make them dependent.”…(More)”.

Innovation Contests: How to Engage Citizens in Solving Urban Problems?


Chapter by Sarah Hartmann, Agnes Mainka and Wolfgang G. Stock in Enhancing Knowledge Discovery and Innovation in the Digital Era: “Cities all over the world are challenged by problems arising from increasing urbanity, population growth, and density. For example, one prominent issue addressed in many cities is mobility. To develop smart city solutions, governments are trying to introduce open innovation. They have started to open up their governmental and city-related data, as well as to raise citizens’ awareness of urban problems through innovation contests.

Citizens are the users of the city and therefore have a practical motivation to engage in innovation contests such as hackathons and app competitions. The collaboration and co-creation of civic services by means of innovation contests is a cultural development in how governments and citizens work together in an open governmental environment. A qualitative analysis of innovation contests in 24 world cities reveals this global trend. In particular, such events increase the awareness of citizens and local businesses for identifying and solving urban challenges, and are a helpful means of transferring the smart city idea into practicable solutions….(More)”.

Big Data Challenge for Social Sciences: From Society and Opinion to Replications


Symposium Paper by Dominique Boullier: “When in 2007 Savage and Burrows pointed out ‘the coming crisis of empirical methods’, they were not expecting to be so right. Their paper, however, became a landmark, signifying the social sciences’ reaction to the tremendous shock triggered by digital methods. As they frankly acknowledge in a more recent paper, they did not even imagine the extent to which their prediction might become true, in an age of Big Data, where sources and models have to be revised in the light of extended computing power and radically innovative mathematical approaches. They signalled not just a debate about academic methods but also a momentum for ‘commercial sociology’ in which platforms acquire the capacity to add ‘another major nail in the coffin of academic sociology claims to jurisdiction over knowledge of the social’, because ‘research methods (are) an intrinsic feature of contemporary capitalist organisations’ (Burrows and Savage, 2014, p. 2). This need for a serious account of research methods is well attuned to the claims of Social Studies of Science, which should be applied to the social sciences as well.

I would like to build on these insights and principles of Burrows and Savage to propose an historical and systematic account of quantification during the last century, following in the footsteps of Alain Desrosières, and in which we see Big Data and Machine Learning as a major shift in the way social science can be performed. And since, according to Burrows and Savage (2014, p. 5), ‘the use of new data sources involves a contestation over the social itself’, I will take the risk here of identifying and defining the entities that are supposed to encapsulate the social for each kind of method: beyond the reign of ‘society’ and ‘opinion’, I will point at the emergence of the ‘replications’ that are fabricated by digital platforms but are radically different from previous entities. This is a challenge to invent not only new methods but also a new process of reflexivity for societies, made available by new stakeholders (namely, the digital platforms) which transform reflexivity into reactivity (as operational quantifiers always tend to)….(More)”.

Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.
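Neither the Times piece nor the Virginia experiment publishes its scoring model, but checklist-style pretrial tools generally work by summing weighted risk factors and comparing the total against a cutoff. The factors, weights, and cutoff below are invented for illustration only; real instruments are validated against historical case outcomes before deployment.

```python
# A toy checklist-style pretrial risk score. Every factor, weight, and
# cutoff here is hypothetical, chosen only to show the mechanism.

def risk_score(defendant: dict) -> int:
    score = 0
    priors = defendant.get("prior_failure_to_appear", 0)
    if priors >= 2:
        score += 2
    elif priors == 1:
        score += 1
    if defendant.get("pending_charge"):
        score += 1
    if defendant.get("prior_violent_conviction"):
        score += 2
    return score  # higher = riskier

def recommend(defendant: dict, release_cutoff: int = 2) -> str:
    """Map the score to a consistent recommendation; judges retain discretion."""
    return "release" if risk_score(defendant) <= release_cutoff else "refer to judge"

print(recommend({"pending_charge": True}))  # score 1 -> "release"
```

The appeal described in the article comes from this mechanical consistency: two defendants with identical factors always get identical scores, unlike the intuition-driven decisions that vary more than twofold across New York City judges.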

Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda


Gillian Bolsover and Philip Howard in the journal Big Data: “Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, technical knowledge comparable to that of those who create and distribute this propaganda is necessary to investigate the phenomenon.

However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Big-data research is necessary to understand the socio-technical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates.

Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means…(More)”.

Could Bitcoin technology help science?


Andy Extance at Nature: “…The much-hyped technology behind Bitcoin, known as blockchain, has intoxicated investors around the world and is now making tentative inroads into science, spurred by broad promises that it can transform key elements of the research enterprise. Supporters say that it could enhance reproducibility and the peer review process by creating incorruptible data trails and securely recording publication decisions. But some also argue that the buzz surrounding blockchain often exceeds reality, and that introducing the approach into science could prove expensive and raise ethical problems.

A few collaborations, including Scienceroot and Pluto, are already developing pilot projects for science. Scienceroot aims to raise US$20 million, which will help pay both peer reviewers and authors within its electronic journal and collaboration platform. It plans to raise the funds in early 2018 by exchanging some of the science tokens it uses for payment for another digital currency known as ether. And the Wolfram Mathematica algebra program — which is widely used by researchers — is currently working towards offering support for an open-source blockchain platform called Multichain. Scientists could use this, for example, to upload data to a shared, open workspace that isn’t controlled by any specific party, according to Multichain….

Claudia Pagliari, who researches digital health-tracking technologies at the University of Edinburgh, UK, says that she recognizes the potential of blockchain, but researchers have yet to properly explore its ethical issues. What happens if a patient withdraws consent for a trial that is immutably recorded on a blockchain? And unscrupulous researchers could still add fake data to a blockchain, even if the process is so open that everyone can see who adds it, says Pagliari. Once added, no-one can change that information, although it’s possible they could label it as retracted….(More)”.
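The “incorruptible data trails” and the retraction question Pagliari raises can be illustrated with a minimal hash chain: each entry commits to its predecessor, so the history can only be amended by appending (for example, a labelled retraction), never silently edited. This is a toy sketch of the general mechanism, not any platform's actual design.

```python
import hashlib
import json

class DataTrail:
    """Append-only log where each entry hashes its predecessor.

    Retroactive edits break verification; a 'retraction' is itself a new
    entry, mirroring the point that blockchain records can be labelled
    retracted but never silently changed."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        self.entries.append({"prev": prev, "payload": payload,
                             "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"prev": prev, "payload": e["payload"]}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

trail = DataTrail()
trail.append({"event": "trial-data-uploaded", "dataset": "batch-1"})
trail.append({"event": "retraction", "refers_to": 0, "reason": "consent withdrawn"})
assert trail.verify()

trail.entries[0]["payload"]["dataset"] = "batch-1-edited"  # tamper with history
assert not trail.verify()
```

Note what the sketch does and does not provide: tampering is detectable, but nothing prevents fake data from being appended honestly in the first place, which is exactly Pagliari's caveat.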

New York City moves to create accountability for algorithms


Lauren Kirchner at ArsTechnica: “The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination—the first measure of its kind in the country.

The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers’ lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation, or citizenship status. The task force’s report will also explore how to make these decision-making processes understandable to the public.

The bill’s sponsor, Council Member James Vacca, said he was inspired by ProPublica’s investigation into racially biased algorithms used to assess the criminal risk of defendants….

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for “targeting services” or “imposing penalties upon persons or policing” and to make them available for “self-testing” by the public. At a hearing at City Hall in October, representatives from the mayor’s office expressed concerns that this mandate would threaten New Yorkers’ privacy and the government’s cybersecurity.

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city’s forensic methods, including controversial tools that the chief medical examiner’s office crime lab has used for difficult-to-analyze samples of DNA.

As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.

The software, called the Forensic Statistical Tool, or FST, has never been adopted by any other lab in the country….(More)”.

Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR


Paper by Sandra Wachter: “In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data between a user’s devices with unique identities, and provide seamless and linked up services. At the same time, profiling methods based on linked records can reveal unexpected details about users’ identity and private life, which can conflict with privacy rights and lead to economic, social, and other forms of discriminatory treatment. A balance must be struck between identification and access control required for the IoT to function and user rights to privacy and identity. Striking this balance is not an easy task because of weaknesses in cybersecurity and anonymisation techniques.

The EU General Data Protection Regulation (GDPR), set to come into force in May 2018, may provide essential guidance to achieve a fair balance between the interests of IoT providers and users. Through a review of academic and policy literature, this paper maps the inherent tension between privacy and identifiability in the IoT.

It focuses on four challenges: (1) profiling, inference, and discrimination; (2) control and context-sensitive sharing of identity; (3) consent and uncertainty; and (4) honesty, trust, and transparency. The paper will then examine the extent to which several standards defined in the GDPR will provide meaningful protection for privacy and control over identity for users of IoT. The paper concludes that in order to minimise the privacy impact of the conflicts between data protection principles and identification in the IoT, GDPR standards urgently require further specification and implementation into the design and deployment of IoT technologies….(More)”.