Stefaan Verhulst
Book by Dariusz Jemielniak and Aleksandra Przegalinska: “Humans are hard-wired for collaboration, and new technologies of communication act as a super-amplifier of our natural collaborative mindset. This volume in the MIT Press Essential Knowledge series examines the emergence of a new kind of social collaboration enabled by networked technologies. This new collaborative society might be characterized as a series of services and startups that enable peer-to-peer exchanges and interactions through technology. Some believe that the economic aspects of the new collaboration have the potential to make society more equitable; others see collaborative communities based on sharing as a cover for social injustice and user exploitation.
The book covers the “sharing economy” and the hijacking of the term by corporations; different models of peer production, and motivations to participate; collaborative media production and consumption, the definitions of “amateur” and “professional,” and the power of memes; hacktivism and social movements, including Anonymous and anti-ACTA protests; collaborative knowledge creation, including citizen science; collaborative self-tracking; and internet-mediated social relations, as seen in the use of Instagram, Snapchat, and Tinder. Finally, the book considers the future of these collaborative tendencies and the disruptions caused by fake news, bots, and other challenges….(More)”.
Book by Philip N. Howard: “Artificially intelligent “bot” accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. With massive amounts of social media and public polling data, and in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these “lie machines” apart.
Lie Machines is full of riveting behind-the-scenes stories from the world’s biggest and most damagingly successful misinformation initiatives—including those used in Brexit and U.S. elections. Howard not only shows how these campaigns evolved from older propaganda operations but also exposes their new powers, gives us insight into their effectiveness, and explains how to shut them down…(More)”.
Paper by Jin Wang et al: “Data sharing plays a fundamental role in providing data resources for geographic modeling and simulation. Although there are many successful cases of data sharing through the web, current practices for sharing data mostly focus on data publication using metadata at the file level, which requires identifying, restructuring and synthesizing raw data files for further usage. In hydrology, because the same hydrological information is often stored in data files with different formats, modelers must identify the required information from multisource data sets and then customize data requirements for their applications. However, these data customization tasks are difficult to repeat, which leads to repetitive labor. This paper presents a data sharing method that provides a solution for data manipulation based on a structured data description model rather than raw data files. With the structured data description model, multisource hydrological data can be accessed and processed in a unified way and published as data services using a designed data server. This study also proposes a data configuration manager to customize data requirements through an interactive programming tool, which helps users work with the data services. In addition, a component-based data viewer is developed for the visualization of multisource data in a sharable visualization scheme. A case study that involves sharing and applying hydrological data is designed to examine the applicability and feasibility of the proposed data sharing method….(More)”.
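The idea of a structured data description model can be pictured with a minimal sketch. Everything here (`VariableDescription`, `StructuredDataset`, the field names, the two toy loaders) is an illustrative assumption, not the paper's actual API: the point is only that raw files in different formats get normalized into one shared structure that downstream services and viewers can process uniformly, instead of each modeler re-parsing the raw files.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Minimal structured description of one hydrological variable: the
# metadata needed to interpret its records without knowing the raw
# file format they came from. (Hypothetical names, for illustration.)
@dataclass
class VariableDescription:
    name: str        # e.g. "discharge"
    units: str       # e.g. "m3/s"
    station_id: str

@dataclass
class StructuredDataset:
    description: VariableDescription
    records: List[Tuple[str, float]] = field(default_factory=list)

def normalize_csv(description, rows):
    """Map raw (date, value) CSV-style rows into the unified model."""
    ds = StructuredDataset(description)
    for date, value in rows:
        ds.records.append((date, float(value)))
    return ds

def normalize_json(description, payload):
    """Map a raw JSON-style payload into the same unified model."""
    ds = StructuredDataset(description)
    for obs in payload["observations"]:
        ds.records.append((obs["t"], float(obs["q"])))
    return ds

# Two sources in different raw formats land in one shared structure.
desc = VariableDescription("discharge", "m3/s", "ST-001")
a = normalize_csv(desc, [("2020-01-01", "12.4"), ("2020-01-02", "11.9")])
b = normalize_json(desc, {"observations": [{"t": "2020-01-01", "q": 13.1}]})
print(len(a.records) + len(b.records))  # 3
```

In the paper's terms, a data server would then publish such structured datasets as services, so that consumers customize and visualize the data without ever touching the heterogeneous raw files.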
Paper by Enrique Estellés-Arolas: “Neighbours sharing information about robberies in their district through social networking platforms, citizens and volunteers posting about the irregularities of political elections on the Internet, and internauts trying to identify a suspect of a crime: in all these situations, people with varying degrees of connection to one another collaborate through the Internet and other technologies to try to help with or solve an offence. The crowd, which is sometimes seen as a threat, in these cases becomes an invaluable resource that can complement law enforcement through collective intelligence. Owing to the growing number of such initiatives, this article conducts a systematic review of the literature to identify the elements that characterize them and to find the conditions that make them work successfully….(More)”.
Paper by Holger Straßheim: “Some argue that the global rise of behavioral approaches challenges the rationalist tradition in public policy. Others fear that it could undermine deliberation and public reasoning. This paper focuses on the worldwide rise and spread of behavioral expertise and behavioral public policy. It provides general insight into the role of expertise, the science-policy nexus and the distribution of epistemic competences in public policy. Based on an extensive literature review, the emergence and consequences of behavioral-expert networks are assessed. It is suggested that it is necessary to break free from the microfocus proposed by behavioral public policy and to pay more attention to the institutional and cultural constellations of knowledge- and decision-making in democracies….(More)”.
Book edited by James L. Perry: “Two big ideas serve as the catalyst for the essays collected in this book. The first is the state of governance in the United States, which Americans variously perceive as broken, frustrating, and unresponsive. Editor James Perry observes in his Introduction that this perception is rooted in three simultaneous developments: government’s failure to perform basic tasks that once were taken for granted, an accelerating pace of change that quickly makes past standards of performance antiquated, and a dearth of intellectual capital that generates the capacity to bridge the gulf between expectations and performance. The second idea hearkens back to the Progressive era, when Americans revealed themselves to be committed to better administration of their government at all levels—federal, state, and local.
These two ideas—the diminishing capacity for effective governance and Americans’ expectations for reform—are veering in opposite directions. Contributors to Public Service and Good Governance for the Twenty-First Century explore these central ideas by addressing such questions as: what is the state of government today? Can future disruptions of governance and public service be anticipated? What forms of government will emerge from the past and what institutions and structures will be needed to meet future challenges? And lastly, and perhaps most importantly, what knowledge, skills, and abilities will need to be fostered for tomorrow’s civil servants to lead and execute effectively?
Public Service and Good Governance for the Twenty-First Century offers recommendations for bending the trajectories of governance capacity and reform expectations toward convergence, including reversing the trend of administrative disinvestment, developing talent for public leadership through higher education, creating a federal civil service to meet future needs, and rebuilding bipartisanship so that the sweeping changes needed to restore good government become possible….(More)”
Paper by Mary A. Majumder and Amy L. McGuire: “As citizen science expands, questions arise regarding the applicability of norms and policies created in the context of conventional science. This article focuses on data sharing in the conduct of health-related citizen science, asking whether citizen scientists have obligations to share data and publish findings on par with the obligations of professional scientists. We conclude that there are good reasons for supporting citizen scientists in sharing data and publishing findings, and we applaud recent efforts to facilitate data sharing. At the same time, we believe it is problematic to treat data sharing and publication as ethical requirements for citizen scientists, especially where there is the potential for burden and harm without compensating benefit…(More)”.
Blog by Amen Ra Mashariki: “Governments should protect the data and privacy rights of their communities even during emergencies. It is a false trade-off to require more data without protection. We can and should do both — collect the appropriate data and protect it. Establishing and protecting the data rights and privacy of our communities’ underserved, underrepresented, disabled, and vulnerable residents is the only way we can combat the negative impact of COVID-19 or any other crisis.
Building trust is critical. Governments can strengthen data privacy protocols, beef up transparency mechanisms, and protect the public’s data rights in the name of building trust — especially with the most vulnerable populations. Otherwise, residents will opt out of engaging with government, and without their information, leaders like first responders will be blind to their existence when making decisions and responding to emergencies, as we are seeing with COVID-19.
As Chief Analytics Officer of New York City, I often remembered the words of Defense Secretary Donald Rumsfeld, especially with regard to using data during emergencies: there are “known knowns, known unknowns, and unknown unknowns, and we will always get hurt by the unknown unknowns.” In other words, the things we didn’t know — the data we didn’t have — were always going to be what hurt us during an emergency….
There are three key steps governments can take right now to use data most effectively to respond to emergencies — both for COVID-19 and in the future.
Seek Open Data First
In times of crisis, many believe that government and private entities, either purposefully or inadvertently, are willing to trample on the data rights of the public in the name of appropriate crisis response. This should not be a trade-off. We can respond to crises while keeping data privacy and data rights at the forefront of our minds. Rather than dismissing data rights, governments can start by using data that is already openly available. This seems like a simple step, but it does two very important things. First, it forces you to understand the data that is already available in your jurisdiction. Second, it helps you fill gaps in what you know about your city by looking beyond city government. …(More)”.
Report by the Ada Lovelace Institute and DataKind UK: “As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.
This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system…
Clarifying terms and approaches
Through literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But there is not always agreement on what these terms mean among different communities: social scientists, computer scientists, policymakers and the general public have different interpretations and frames of reference.
While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches the terms describe.
Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.
We focus on algorithm audit and algorithmic impact assessment. For each term, we identify two key approaches it can refer to:
- Algorithm audit
- Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
- Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
- Algorithmic impact assessment
- Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
- Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use…(More)”.
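A bias audit in the narrow sense defined above can be as simple as comparing outcome rates across demographic groups. The sketch below is an illustrative assumption rather than a method from the report: it computes per-group selection rates and the disparate impact ratio, with the “four-fifths” threshold mentioned only as a common rule of thumb, not a legal standard.

```python
# A targeted, non-comprehensive bias check: does the system's positive
# outcome rate differ markedly between groups?

def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome in {0, 1}.
    Returns the positive-outcome rate per group."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group rate to the highest group rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy audit data: group "a" is selected 3/4 of the time, group "b" 1/4.
decisions = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
             ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
ratio = disparate_impact(decisions)
print(round(ratio, 2))  # 0.33, well below the 0.8 rule of thumb
```

A regulatory inspection, by contrast, would go far beyond a single metric like this, combining documentation review, process evidence and multiple quantitative tests.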
Book edited by Simon Deakin and Christopher Markou: “What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place?
This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers, to overlook the more fundamental – and perhaps ‘bigger picture’ – ramifications of computable law…(More)”