Stefaan Verhulst
Paper by Molly K. Land and Rebecca J. Hamilton: “The current preoccupation with ‘fake news’ has spurred a renewed emphasis in popular discourse on the potential harms of speech. In the world of international law, however, ‘fake news’ is far from new. Propaganda of various sorts is a well-worn tactic of governments, and in its most insidious form, it has played an instrumental role in inciting and enabling some of the worst atrocities of our time. Yet as familiar as propaganda might be in theory, it is raising new issues as it has migrated to the digital realm. Technological developments have largely outpaced existing legal and political tools for responding to the use of mass communications devices to instigate or perpetrate human rights violations.
This chapter evaluates the current practices of social media companies for responding to online hate, arguing that they are inevitably both overbroad and under-inclusive. Using the example of the role played by Facebook in the recent genocide against the minority Muslim Rohingya population in Myanmar, the chapter illustrates the failure of platform hate speech policies to address pervasive and coordinated online speech, often state-sponsored or state-aligned, that denigrates a particular group and is used to justify or foster impunity for violence against that group. Addressing this “conditioning speech” requires a more tailored response that includes remedies other than content removal and account suspensions. The chapter concludes by surveying a range of innovative responses to harmful online content that would give social media platforms the flexibility to intervene earlier, but with a much lighter touch….(More)”.
Article by Geoff Shullenberger on “How fears of mind control went from paranoid delusion to conventional wisdom”: “In early 2017, after the double shock of Brexit and the election of Donald Trump, the British data-mining firm Cambridge Analytica gained sudden notoriety. The previously little-known company, reporters claimed, had used behavioral influencing techniques to turn out social media users to vote in both elections. By its own account, Cambridge Analytica had worked with both campaigns to produce customized propaganda for targeting individuals on Facebook likely to be swept up in the tide of anti-immigrant populism. Its methods, some news sources suggested, might have sent enough previously disengaged voters to the polls to have tipped the scales in favor of the surprise victors. To a certain segment of the public, this story seemed to answer the question raised by both upsets: How was it possible that the seemingly solid establishment consensus had been rejected? What’s more, the explanation confirmed everything that seemed creepy about the Internet, evoking a sci-fi vision of social media users turned into an army of political zombies, mobilized through subliminal manipulation.
Cambridge Analytica’s violations of Facebook users’ privacy have made it an enduring symbol of the dark side of social media. However, the more dramatic claims about the extent of the company’s political impact collapse under closer scrutiny, mainly because its much-hyped “psychographic targeting” methods probably don’t work. As former Facebook product manager Antonio García Martínez noted in a 2018 Wired article, “the public, with no small help from the media sniffing a great story, is ready to believe in the supernatural powers of a mostly unproven targeting strategy,” but “most ad insiders express skepticism about Cambridge Analytica’s claims of having influenced the election, and stress the real-world difficulty of changing anyone’s mind about anything with mere Facebook ads, least of all deeply ingrained political views.” According to García, the entire affair merely confirms a well-established truth: “In the ads world, just because a product doesn’t work doesn’t mean you can’t sell it….(More)”.
Chapter by Vikramsinh Amarsinh Patil: “This chapter examines the theoretical underpinnings of nudge theory and makes a case for incorporating nudging into the decision-making process in corporate contexts. Nudging, and more broadly behavioural economics, have become buzzwords on account of the seminal work done by economists and the highly publicized interventions employed by governments to support national priorities. Firms are not to be left behind, however. What follows is extensive documentation of firms that have successfully employed nudging techniques. The examples are segmented by nudge recipient: managers, employees, and consumers. Firms can guide managers to become better leaders, employees to become more productive, and consumers to stay loyal. However, nudging is not without its pitfalls. It can be put to nefarious ends and can be notoriously difficult to implement and execute. Therefore, nudges should be rigorously tested via experimentation and should be ethically sound….(More)”.
Paper by Oren Perez: “… focuses on “deliberative e-rulemaking”: digital consultation processes that seek to facilitate public deliberation over policy or regulatory proposals. The main challenge of e-rulemaking platforms is to support an “intelligent” deliberative process that enables decision makers to identify a wide range of options, weigh the relevant considerations, and develop epistemically responsible solutions. This article discusses and critiques two approaches to this challenge: the Cornell Regulation Room project and the model of computationally assisted regulatory participation proposed by Livermore et al. It then explores two alternative approaches to e-rulemaking. The first is based on the implementation of collaborative, wiki-styled tools; the article discusses the findings of an experiment conducted at Bar-Ilan University that explored various aspects of a wiki-based collaborative e-rulemaking system. The second is more futuristic, focusing on the potential development of autonomous, artificial democratic agents. The article critically discusses this alternative, also in view of the recent debate regarding the idea of “augmented democracy.”…(More)”.
Open Access Book edited by Vito Bobek: “Debates about the future of urban development in many countries have been increasingly influenced by discussions of smart cities. Despite numerous examples of this “urban labelling” phenomenon, we know surprisingly little about so-called smart cities. This book provides a preliminary critical discussion of some of the more important aspects of smart cities. Its primary focus is on the experience of some designated smart cities, with a view to problematizing a range of elements that supposedly characterize this new urban form. It also questions some of the underlying assumptions and contradictions hidden within the concept….(More)”.
European Commission: “Data can solve problems from traffic jams to disaster relief, but European countries are not yet using this data to its full potential, experts say in a report released today. More secure and regular data sharing across the EU could help public administrations use private sector data for the public good.
In order to increase Business-to-Government (B2G) data sharing, the experts advise making data sharing in the EU easier by taking policy, legal and investment measures in three main areas:
- Governance of B2G data sharing across the EU: such as putting in place national governance structures, setting up a recognised function (‘data stewards’) in public and private organisations, and exploring the creation of a cross-EU regulatory framework.
- Transparency, citizen engagement and ethics: such as making B2G data sharing more citizen-centric, developing ethical guidelines, and investing in training and education.
- Operational models, structures and technical tools: such as creating incentives for companies to share data, carrying out studies on the benefits of B2G data sharing, and providing support to develop the technical infrastructure through the Horizon Europe and Digital Europe programmes.
They also revised the principles on private sector data sharing in B2G contexts and included new principles on accountability and on fair and ethical data use, which should guide B2G data sharing for the public interest. Examples of successful B2G data sharing partnerships in the EU include an open forest data system in Finland to help manage the ecosystem, mapping of EU fishing activities using ship tracking data, and genome sequencing data of breast cancer patients to identify new personalised treatments. …
The High-Level Expert Group on Business-to-Government Data Sharing was set up in autumn 2018 and includes members from a broad range of interests and sectors. The recommendations presented today in its final report feed into the European strategy for data and can be used as input for other possible future Commission initiatives on Business-to-Government data sharing….(More)”.
The Administrative Conference of the United States: “Artificial intelligence (AI) promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.
These are important issues for public debate and academic inquiry. Yet little is known about how agencies are currently using AI systems beyond a few headline-grabbing examples or surface-level descriptions. Moreover, even amidst growing public and scholarly discussion about how society might regulate government use of AI, little attention has been devoted to how agencies acquire such tools in the first place or oversee their use. In an effort to fill these gaps, the Administrative Conference of the United States (ACUS) commissioned this report from researchers at Stanford University and New York University. The research team included a diverse set of lawyers, law students, computer scientists, and social scientists with the capacity to analyze these cutting-edge issues from technical, legal, and policy angles. The resulting report offers three cuts at federal agency use of AI:
- a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies (Part I);
- a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks (Part II); and
- a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI (Part III)….(More)”
Editorial at Nature: “Everyone’s talking about reproducibility — or at least they are in the biomedical and social sciences. The past decade has seen a growing recognition that results must be independently replicated before they can be accepted as true.
A focus on reproducibility is necessary in the physical sciences, too — an issue explored in this month’s Nature Physics, in which two metrologists argue that reproducibility should be viewed through a different lens. When results in the science of measurement cannot be reproduced, argue Martin Milton and Antonio Possolo, it’s a sign of the scientific method at work — and an opportunity to promote public awareness of the research process (M. J. T. Milton and A. Possolo Nature Phys. 16, 117–119; 2020)….
However, despite numerous experiments spanning three centuries, the precise value of the gravitational constant G remains uncertain. The root of the uncertainty is not fully understood: it could be due to undiscovered errors in how the value is being measured, or it could indicate the need for new physics. One scenario being explored is that G could even vary over time, in which case scientists might have to revise their view that it has a fixed value.
If that were to happen — although physicists think it unlikely — it would be a good example of non-reproduced data being subjected to the scientific process: experimental results questioning a long-held theory, or pointing to the existence of another theory altogether.
Questions in biomedicine and in the social sciences do not reduce so cleanly to the determination of a fundamental constant of nature. Compared with metrology, experiments to reproduce results in fields such as cancer biology are likely to include many more sources of variability, which are fiendishly hard to control for.
But metrology reminds us that when researchers attempt to reproduce the results of experiments, they do so using a set of agreed — and highly precise — experimental standards, known in the measurement field as metrological traceability. It is this aspect, the authors contend, that helps to build trust and confidence in the research process….(More)”.
Book edited by Yoshiki Yamagata and Perry P.J. Yang: “…shows how to design, model and monitor smart communities using a distinctive IoT-based urban systems approach. Focusing on the essential dimensions that constitute smart communities (energy, transport, urban form, and human comfort), this helpful guide explores how IoT-based sharing platforms can achieve greater community health and well-being based on relationship building, trust, and resilience. Uncovering the achievements of the most recent research on the potential of IoT and big data, this book shows how to identify, structure, measure and monitor multi-dimensional urban sustainability standards and progress.
This thorough book demonstrates how to select a project, which technologies are most cost-effective, and how to weigh their costs and benefits. The book also illustrates the financial, institutional, policy and technological needs for a successful transition to smart cities, and concludes by discussing both the conventional and innovative regulatory instruments needed for a fast and smooth transition to smart, sustainable communities….(More)”.
Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Taiwan’s digital minister Audrey Tang views as a circular phenomenon.
When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.
The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.
Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.
For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
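The mechanism behind quadratic voting is easy to state: casting v votes on a single issue costs v² voice credits out of a fixed per-voter budget, so doubling the intensity of support quadruples its price. Below is a minimal sketch of that cost rule in Python; the budget size, issue names, and ballot figures are illustrative assumptions, not details of the platform Hansen's committee actually used.

```python
# Quadratic voting sketch: casting v votes on one issue costs v**2 credits.
# The quadratic cost makes piling all credits on a single issue expensive,
# so ballots reveal preference *intensity*, not just direction.
# Issue names and the 100-credit budget are hypothetical examples.

def quadratic_cost(votes: int) -> int:
    """Credits consumed by casting `votes` votes on a single issue."""
    return votes ** 2

def is_valid_ballot(ballot: dict, budget: int) -> bool:
    """Check that the ballot's total quadratic cost fits the credit budget."""
    return sum(quadratic_cost(v) for v in ballot.values()) <= budget

def tally(ballots: list) -> dict:
    """Sum votes per issue across all ballots."""
    totals = {}
    for ballot in ballots:
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# Each voter gets 100 credits. Nine votes on one issue already costs 81,
# leaving only enough for modest support elsewhere.
ballots = [
    {"transit": 9, "parks": 4, "housing": 1},  # 81 + 16 + 1 = 98 credits
    {"transit": 2, "parks": 7, "housing": 5},  # 4 + 49 + 25 = 78 credits
]
assert all(is_valid_ballot(b, budget=100) for b in ballots)
print(tally(ballots))  # {'transit': 11, 'parks': 11, 'housing': 6}
```

The quadratic schedule is what distinguishes this from simple point allocation: each marginal vote on the same issue gets progressively more expensive, so spreading credits across issues is usually the rational choice unless a voter cares intensely about one.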