Paper: “A humanoid robot named ‘Sophia’ has sparked controversy since it was granted citizenship and has given media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence, and going beyond recent discussions on the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call ‘political choreography’, drawing on phenomenological approaches to performance-oriented philosophy of technology. This paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than as a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material from the Sophia performances, which helps us to explore the mechanisms through which the media spectacle works hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of ‘embodied intelligence’ used in the context of social robotics and AI. In this way, we place the discussions about robot rights and citizenship in the context of AI politics and economics….(More)”
Freedom House: “The coronavirus pandemic is accelerating a dramatic decline in global internet freedom. For the 10th consecutive year, users have experienced an overall deterioration in their rights, and the phenomenon is contributing to a broader crisis for democracy worldwide.
In the COVID-19 era, connectivity is not a convenience, but a necessity. Virtually all human activities—commerce, education, health care, politics, socializing—seem to have moved online. But the digital world presents distinct challenges for human rights and democratic governance. State and nonstate actors in many countries are now exploiting opportunities created by the pandemic to shape online narratives, censor critical speech, and build new technological systems of social control.
Three notable trends punctuated an especially dismal year for internet freedom. First, political leaders used the pandemic as a pretext to limit access to information. Authorities often blocked independent news sites and arrested individuals on spurious charges of spreading false news. In many places, it was state officials and their zealous supporters who actually disseminated false and misleading information with the aim of drowning out accurate content, distracting the public from ineffective policy responses, and scapegoating certain ethnic and religious communities. Some states shut off connectivity for marginalized groups, extending and deepening existing digital divides. In short, governments around the world failed in their obligation to promote a vibrant and reliable online public sphere.
Second, authorities cited COVID-19 to justify expanded surveillance powers and the deployment of new technologies that were once seen as too intrusive. The public health crisis has created an opening for the digitization, collection, and analysis of people’s most intimate data without adequate protections against abuses. Governments and private entities are ramping up their use of artificial intelligence (AI), biometric surveillance, and big-data tools to make decisions that affect individuals’ economic, social, and political rights. Crucially, the processes involved have often lacked transparency, independent oversight, and avenues for redress. These practices raise the prospect of a dystopian future in which private companies, security agencies, and cybercriminals enjoy easy access not only to sensitive information about the places we visit and the items we purchase, but also to our medical histories, facial and voice patterns, and even our genetic codes.
The third trend has been the transformation of a slow-motion “splintering” of the internet into an all-out race toward “cyber sovereignty,” with each government imposing its own internet regulations in a manner that restricts the flow of information across national borders. For most of the period since the internet’s inception, business, civil society, and government stakeholders have participated in a consensus-driven process to harmonize technical protocols, security standards, and commercial regulation around the world. This approach allowed for the connection of billions of people to a global network of information and services, with immeasurable benefits for human development, including new ways to hold powerful actors to account….(More)”
Paper by Jennifer Allen, Baird Howland, Markus Mobius, David Rothschild and Duncan J. Watts: “Fake news,” broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online with serious consequences for democracy. Using a unique multimode dataset that comprises a nationally representative sample of mobile, desktop, and television consumption, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online sources. Third, fake news comprises only 0.15% of Americans’ daily media diet. Our results suggest that the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news, or in the avoidance of news altogether, than in overt fakery….(More)”.
UK Government: “This toolkit will help support the dissemination of reliable, truthful information that underpins our democracy. RESIST stands for: Recognise disinformation, Early warning, Situational Insight, Impact analysis, Strategic communication, Track outcomes.
This toolkit will:
- build your resilience to the threat of disinformation
- give you guidance on how to identify a range of different types of disinformation consistently and effectively
- help you prevent and tackle the spread of disinformation
- enable you to develop a response when disinformation affects your organisation’s ability to do its job or represents a threat to the general public.
The toolkit promotes a consistent approach to the threat and provides 6 steps to follow.
RESIST Disinformation: a toolkit
The purpose of this toolkit is to help you prevent the spread of disinformation. It will enable you to develop a response when disinformation affects your organisation’s ability to do its job or the people who depend on your services, or represents a threat to the general public.
What is disinformation?
Disinformation is the deliberate creation and/or sharing of false information with the intention to deceive and mislead audiences. The inadvertent sharing of false information is referred to as misinformation.
Who is this toolkit for?
Government and public sector communications professionals, as well as policy officers, senior managers and special advisers….(More)”
Report by Craig Matasick: “…innovative new set of citizen engagement practices—collectively known as deliberative democracy—offers important lessons that, when applied to media development efforts, can help improve media assistance and strengthen independent media environments around the world. At a time when disinformation runs rampant, it is more important than ever to strengthen public demand for credible information, reduce political polarization, and prevent media capture. Deliberative democracy approaches can help tackle these issues by expanding the number and diversity of voices that participate in policymaking, thereby fostering greater collective action and enhancing public support for media reform efforts.
Through a series of five illustrative case studies, the report demonstrates how deliberative democracy practices can be employed in both media development and democracy assistance efforts, particularly in the Global South. Such initiatives produce recommendations that take into account a plurality of voices while building trust between citizens and decision-makers by demonstrating to participants that their issues will be heard and addressed. Ultimately, this process can enable media development funders and practitioners to identify priorities and design locally relevant projects that have a higher likelihood for long-term impact.
– Deliberative democracy approaches, which are characterized by representative participation and moderated deliberation, provide a framework to generate demand-driven media development interventions while at the same time building greater public support for media reform efforts.
– Deliberative democracy initiatives foster collaboration across different segments of society, building trust in democratic institutions, combatting polarization, and avoiding elite capture.
– When employed by news organizations, deliberative approaches provide a better understanding of the issues their audiences care most about and uncover new problems affecting citizens that might not otherwise have come to light….(More)”.
Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.
Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.
Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide….(More)”.
Paper by Ciara Greene and Gillian Murphy: “Previous research has argued that fake news may have grave consequences for health behaviour, but surprisingly, no empirical data have been provided to support this assumption. This issue takes on new urgency in the context of the coronavirus pandemic. In this large preregistered study (N = 3746) we investigated the effect of exposure to fabricated news stories about COVID-19 on related behavioural intentions. We observed small but measurable effects on some related behavioural intentions but not others – for example, participants who read a story about problems with a forthcoming contact-tracing app reported reduced willingness to download the app. We found no effects of providing a general warning about the dangers of online misinformation on response to the fake stories, regardless of the framing of the warning in positive or negative terms. We conclude with a call for more empirical research on the real-world consequences of fake news….(More)”
Report by Paul M. Barrett: “Recently, Section 230 of the Communications Decency Act of 1996 has come under sharp attack from members of both political parties, including presidential candidates Donald Trump and Joe Biden. The foundational law of the commercial internet, Section 230 does two things: It protects platforms and websites from most lawsuits related to content posted by third parties. And it guarantees this shield from liability even if the platforms and sites actively police the content they host. This protection has encouraged internet companies to innovate and grow, even as it has raised serious questions about whether social media platforms adequately self-regulate harmful content. In addition to the assaults by Trump and Biden, members of Congress have introduced a number of bills designed to limit the reach of Section 230. Some critics have unrealistically asserted that repealing or curbing Section 230 would solve a wide range of problems relating to internet governance. These critics also have played down the potentially dire consequences that repeal would have for smaller internet companies. Academics, think tank researchers, and others outside of government have made a variety of more nuanced proposals for revising the law. We assess these ideas with an eye toward recommending and integrating the most promising ones. Our conclusion is that Section 230 ought to be preserved—but that it can be improved…(More)”
Paper by Aline Blankertz: “A small number of large digital platforms increasingly shape the space for most online interactions around the globe and they often act with hardly any constraint from competing services. The lack of competition puts those platforms in a powerful position that may allow them to exploit consumers and offer them limited choice. Privacy is increasingly considered one area in which the lack of competition may create harm. Because of these concerns, governments and other institutions are developing proposals to expand the scope for competition authorities to intervene to limit the power of the large platforms and to revive competition.
The first case that has explicitly addressed anticompetitive harm to privacy is the German Bundeskartellamt’s case against Facebook in which the authority argues that imposing bad privacy terms can amount to an abuse of dominance. Since that case started in 2016, more cases deal with the link between competition and privacy. For example, the proposed Google/Fitbit merger has raised concerns about sensitive health data being merged with existing Google profiles and Apple is under scrutiny for not sharing certain personal data while using it for its own services.
However, addressing bad privacy outcomes through competition policy is effective only if those outcomes are caused, at least partly, by a lack of competition. Six distinct mechanisms through which competition may affect privacy can be identified, as summarized in Table 1. These mechanisms constitute different hypotheses about how less competition may influence privacy outcomes, leading either to worse privacy in different ways (mechanisms 1-5) or even to better privacy (mechanism 6). The table also summarizes the available evidence on whether and to what extent the hypothesized effects are present in actual markets….(More)”.
OECD paper by Craig Matasick, Carlotta Alfonsi and Alessandro Bellantoni: “This paper provides a holistic policy approach to the challenge of disinformation by exploring a range of governance responses that rest on the open government principles of transparency, integrity, accountability and stakeholder participation. It offers an analysis of the significant changes that are affecting media and information ecosystems, chief among them the growth of digital platforms. Drawing on the implications of this changing landscape, the paper focuses on four policy areas of intervention: public communication for a better dialogue between government and citizens; direct responses to identify and combat disinformation; legal and regulatory policy; and media and civic responses that support better information ecosystems. The paper concludes with proposed steps the OECD can take to build evidence and support policy in this space…(More)”.