Stefaan Verhulst
Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:
- Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
- Protecting public values in government procurement and implementation of AI;
- Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
- Identifying major gaps in the laws and regulatory frameworks that govern AI; and
- Coordinating strategies and priorities for international AI governance.
“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.
Book by W. Kip Viscusi: “Like it or not, sometimes we need to put a monetary value on people’s lives. In the past, government agencies used the financial “cost of death” to monetize the mortality risks of regulatory policies, but this method vastly undervalued life. Pricing Lives tells the story of how the government came to adopt an altogether different approach, the value of a statistical life (VSL), and persuasively shows how its more widespread use could create a safer and more equitable society for everyone.
In the 1980s, W. Kip Viscusi used the method to demonstrate that the benefits of requiring businesses to label hazardous chemicals immensely outweighed the costs. VSL reflects the risk-reward trade-off that people make about their health, as when they weigh risky job choices. Using it, Viscusi calculated how much more money workers would demand to take on hazardous jobs, boosting the calculated benefits by an order of magnitude. His current estimate of the value of a statistical life is $10 million. In this book, Viscusi provides a comprehensive look at all aspects of economic and policy efforts to price lives, including controversial topics such as whether older people’s lives are worth less and richer people’s lives are worth more. He explains why corporations need to abandon the misguided cost-of-death approach, how the courts can profit from increased application of VSL in assessing liability and setting damages, and how other countries consistently undervalue risks to life.
Pricing Lives proposes sensible economic guideposts to foster more protective policies and greater levels of safety in the United States and throughout the world….(More)”.
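The VSL logic described above can be sketched as simple arithmetic: divide the compensating wage premium workers demand by the extra fatality risk they accept. The function name and the specific numbers below are illustrative assumptions, not figures from the book, though the result happens to match Viscusi’s $10 million estimate.

```python
# Hypothetical illustration of the compensating-wage-differential logic
# behind VSL. Numbers are assumed for the example, not taken from the book.

def value_of_statistical_life(wage_premium: float, added_fatality_risk: float) -> float:
    """Infer the implied value of a statistical life: the extra annual pay
    workers demand divided by the extra annual fatality risk they accept."""
    return wage_premium / added_fatality_risk

# Suppose workers demand $1,000 more per year to accept an extra
# 1-in-10,000 annual risk of a fatal workplace injury:
vsl = value_of_statistical_life(wage_premium=1_000, added_fatality_risk=1 / 10_000)
print(f"${vsl:,.0f}")  # $10,000,000
```

The same division underlies regulatory benefit calculations: multiplying a rule’s expected reduction in statistical deaths by the VSL yields its mortality-risk benefit.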
Book: “In recent years a global network of science has emerged as a result of thousands of individual scientists seeking to collaborate with colleagues around the world, creating a network which rises above national systems. The globalization of science is part of the underlying shift in knowledge creation generally: the collaborative era in science. Over the past decade, the growth in the amount of knowledge and the speed at which it is available has created a fundamental shift—where data, information, and knowledge were once scarce resources, they are now abundantly available.
Collaboration, openness, customer- or problem-focused research and development, altruism, and reciprocity are notable features of abundance, and they create challenges that economists have not yet studied. This book defines the collaborative era, describes how it came to be, reveals its internal dynamics, and demonstrates how real-world practitioners are changing to take advantage of it. Most importantly, the book lays out a guide for policymakers and entrepreneurs as they shift perspectives to take advantage of the collaborative era in order to create social and economic welfare….(More)”.
Paper by Basma Albanna and Richard Heeks: “Positive deviance is a growing approach in international development that identifies those within a population who are outperforming their peers in some way, e.g., children in low‐income families who are well nourished when those around them are not. Analysing and then disseminating the behaviours and other factors underpinning positive deviance are demonstrably effective in delivering development results.
However, positive deviance faces a number of challenges that are restricting its diffusion. In this paper, using a systematic literature review, we analyse the current state of positive deviance and the potential for big data to address the challenges it faces. From this, we evaluate the promise of “big data‐based positive deviance”: this would analyse typical sources of big data in developing countries—mobile phone records, social media, remote sensing data, etc—to identify both positive deviants and the factors underpinning their superior performance.
While big data cannot solve all the challenges facing positive deviance as a development tool, they could reduce time, cost, and effort; identify positive deviants in new or better ways; and enable positive deviance to break out of its current preoccupation with public health into domains such as agriculture, education, and urban planning. In turn, positive deviance could provide a new and systematic basis for extracting real‐world development impacts from big data…(More)”.
Book edited by Allan Afuah, Christopher L. Tucci, and Gianluigi Viscusi: “Examples of the value that can be created and captured through crowdsourcing go back to at least 1714 when the UK used crowdsourcing to solve the Longitude Problem, obtaining a solution that would enable the UK to become the dominant maritime force of its time. Today, Wikipedia uses crowds to provide entries for the world’s largest and free encyclopedia. Partly fueled by the value that can be created and captured through crowdsourcing, interest in researching the phenomenon has been remarkable.
Despite this – or perhaps because of it – research into crowdsourcing has been conducted in different research silos, within the fields of management (from strategy to finance to operations to information systems), biology, communications, computer science, economics, political science, among others. In these silos, crowdsourcing takes names such as broadcast search, innovation tournaments, crowdfunding, community innovation, distributed innovation, collective intelligence, open source, crowdpower, and even open innovation. This book aims to assemble chapters from many of these silos, since the ultimate potential of crowdsourcing research is likely to be attained only by bridging them. Chapters provide a systematic overview of the research on crowdsourcing from different fields based on a more encompassing definition of the concept, its implications for innovation, and its value for both the private and public sectors….(More)”.
Zeynep Engin and Philip Treleaven in the Computer Journal: “The data science technologies of artificial intelligence (AI), Internet of Things (IoT), big data and behavioral/predictive analytics, and blockchain are poised to revolutionize government and create a new generation of GovTech start-ups. The impact from the ‘smartification’ of public services and the national infrastructure will be much more significant in comparison to any other sector given government’s function and importance to every institution and individual.
Potential GovTech systems include chatbots and intelligent assistants for public engagement, robo-advisors to support civil servants, real-time management of the national infrastructure using IoT and blockchain, automated compliance/regulation, public records securely stored in blockchain distributed ledgers, online judicial and dispute resolution systems, and laws/statutes encoded as blockchain smart contracts. Government is potentially the major ‘client’ and also ‘public champion’ for these new data technologies. This review paper uses our simple taxonomy of government services to provide an overview of data science automation being deployed by governments worldwide. The goal of this review paper is to encourage the Computer Science community to engage with government to develop these new systems to transform public services and support the work of civil servants….(More)”.
Book by Micah Altman and Michael P. McDonald: “… unveil the Public Mapping Project, which developed DistrictBuilder, an open-source software redistricting application designed to give the public transparent, accessible, and easy-to-use online mapping tools. As they show, the goal is for all citizens to have access to the same information that legislators use when drawing congressional maps—and use that data to create maps of their own….(More)”.
Bill Curry at The Globe and Mail: “Canadians are increasingly shunning phone surveys, but they could still be providing Statistics Canada with valuable data each time they flush the toilet or flash their debit card.
The national statistics agency laid out an ambitious plan Thursday to overhaul the way it collects and reports on issues ranging from cannabis and opioid use to market-moving information on unemployment and economic growth.
According to four senior Statscan officials, the agency is in the midst of a major transformation as it adapts to a world of big data collected by other government agencies as well as private sector actors such as banks, cellphone companies and digital-based companies like Uber.
At its core, the shift means the agency will become less reliant on traditional phone surveys or having businesses fill out forms to report their sales data. Instead, Statscan is reaching agreements with other government departments and private companies in order to gain access to their raw data, such as point-of-sale information. According to agency officials, such arrangements reduce the reporting paperwork faced by businesses while creating the potential for Statscan to produce faster and more reliable information.
Key releases such as labour statistics or reporting on economic growth could come out sooner, reducing the lag time between the end of a quarter and reporting on results. Officials said economic data that is released quarterly could shift to monthly reporting. The greater access to raw data sources will also allow for more localized reporting at the neighbourhood level….(More)”.
Paper by Mila Gasco-Hernandez and Jose Ramon Gil-Garcia: “Previous studies have infrequently addressed the dynamic interactions among social, technical, and organizational variables in open government data initiatives. In addition, organization-level models have neglected to explain the role of management in decision-making processes about technology and data. This article contributes to addressing this gap in the literature by analyzing the complex relationships between open government data characteristics and the organizations and institutions in which they are embedded.
We systematically compare the open data inception and implementation processes, as well as their main results, in three Spanish local governments (Gava and Rubi in Catalonia and Gijon in Asturias) by using a model that combines the technology enactment framework with some specific constructs and relationships from the process model of computing change. Our resulting model is able to identify and explain the significant role of management in shaping and mediating different interactions, but also acknowledges the importance of organizational level variables and the context in which the open data initiative is taking place…(More)”.