Vincent Duclos in Medicine Anthropology Theory: “In the last few years, tracking systems that harvest web data to identify trends, calculate predictions, and warn about potential epidemic outbreaks have proliferated. These systems integrate crowdsourced data and digital traces, collecting information from a variety of online sources, and they promise to change the way governments, institutions, and individuals understand and respond to health concerns. This article examines some of the conceptual and practical challenges raised by the online algorithmic tracking of disease by focusing on the case of Google Flu Trends (GFT). Launched in 2008, GFT was Google’s flagship syndromic surveillance system, specializing in ‘real-time’ tracking of outbreaks of influenza. GFT mined massive amounts of data about online search behavior to extract patterns and anticipate the future of viral activity. But it did a poor job, and Google shut the system down in 2015. This paper focuses on GFT’s shortcomings, which were particularly severe during flu epidemics, when GFT struggled to make sense of the unexpected surges in the number of search queries. I suggest two reasons for GFT’s difficulties. First, it failed to keep track of the dynamics of contagion, at once biological and digital, as it affected what I call here the ‘googling crowds’. Search behavior during epidemics in part stems from a sort of viral anxiety not easily amenable to algorithmic anticipation, to the extent that the algorithm’s predictive capacity remains dependent on past data and patterns. Second, I suggest that GFT’s troubles were the result of how it collected data and performed what I call ‘epidemic reality’. GFT’s data became severed from the processes Google aimed to track, and the data took on a life of their own: a trackable life, in which there was little flu left. The story of GFT, I suggest, offers insight into contemporary tensions between the indomitable intensity of collective life and stubborn attempts at its algorithmic formalization….(More)”.
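The core mechanism behind a system like GFT can be illustrated with a toy model: learn a fixed mapping from the volume of flu-related search queries to officially reported flu activity using past seasons, then apply that mapping to the current week. The sketch below is a hypothetical, heavily simplified illustration of this idea (invented data, a plain log-odds regression), not Google’s actual model, but it shows why such an approach struggles when search behavior surges for reasons unrelated to actual illness.

```python
# A minimal, hypothetical sketch of a GFT-style query-based model:
# regress reported influenza-like-illness (ILI) rates on the share of
# searches matching flu-related terms, using only past data, then
# extrapolate to the current week. Data and coefficients are invented.
import numpy as np

# Historical training data (hypothetical weekly values).
query_share = np.array([0.010, 0.015, 0.022, 0.030, 0.026, 0.018])  # flu queries / all queries
ili_rate    = np.array([0.012, 0.018, 0.027, 0.036, 0.031, 0.021])  # reported ILI visit rate

def logit(p):
    return np.log(p / (1 - p))

# Fit logit(ILI) = a * logit(query share) + b on past co-movement.
a, b = np.polyfit(logit(query_share), logit(ili_rate), deg=1)

def predict_ili(current_query_share):
    """Map this week's query share to an ILI estimate via the learned fit."""
    z = a * logit(current_query_share) + b
    return 1 / (1 + np.exp(-z))

print(predict_ili(0.020))  # a 'normal' week: the learned mapping holds reasonably well
print(predict_ili(0.060))  # an anxiety- or media-driven search surge: queries spike
                           # without a matching rise in illness, and the estimate overshoots
```

Because the mapping is fit entirely to past patterns, any surge in searching driven by worry or media coverage rather than by infection (the ‘googling crowds’ of the article) is read as a surge in flu, which makes concrete the shortcomings described above.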
Innovation Partnerships: An effective but under-used tool for buying innovation
Claire Gamage at Challenging Procurement: “…in an era where demand for public sector services increases as budgets decrease, the public sector should start to consider alternative routes to procurement. …
What is the Innovation Partnership procedure?
In a nutshell, it is a procurement process combined with an R&D contract. Authorities are then able to purchase the ‘end result’ of the R&D exercise, without having to undergo a new procurement procedure. Authorities may choose to appoint a number of partners to participate in the R&D phase, but may subsequently purchase only one or some of the resulting solutions.
Why does this procedure result in more innovative solutions?
The procedure was designed to drive innovation. Indeed, it may only be used in circumstances where a solution is not already available on the open market. Therefore, participants in the Innovation Partnership will be asked to create something which does not already exist and should be tailored towards solving a particular problem or ‘challenge’ set by the authority.
This procedure may also be particularly attractive to SMEs/start-ups, which often find it easier to innovate than their larger competitors; the purchasing authority is therefore likely to obtain a more innovative product or service.
One of the key advantages of an Innovation Partnership is that the R&D phase is separate from the subsequent purchase of the solution. In other words, the authority is not (usually) under any obligation to purchase the ‘end result’ of the R&D exercise, but has the option to do so if it wishes. Therefore, it may be easier to discourage internal stakeholders from imposing selection criteria that inadvertently exclude SMEs/start-ups (e.g. minimum turnover requirements, parent company guarantees, etc.), as the authority is not committed to actually purchasing anything at the end of the procurement process that selects the innovation partner(s)….(More)”.
Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices

New report by Stefaan Verhulst, Andrew Young, Michelle Winowatan, and Andrew J. Zahuranec: “To address the challenges of our times, we need both new solutions and new ways to develop those solutions. The responsible use of data will be key toward that end. Since pioneering the concept of “data collaboratives” in 2015, The GovLab has studied and experimented with innovative ways to leverage private-sector data to tackle various societal challenges, such as urban mobility, public health, and climate change.
While we have seen an uptake in normative discussions on how data should be shared, little analysis exists of the actual practice. This paper seeks to address that gap by answering the following question: What are the variables and models that determine functional access to private sector data for public good? In Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices, we describe the emerging universe of data collaboratives and develop a typology of six practice areas. Our goal is to provide insight into current applications to accelerate the creation of new data collaboratives. The report outlines dozens of examples, as well as a set of recommendations to enable more systematic, sustainable, and responsible data collaboration….(More)”
User Data as Public Resource: Implications for Social Media Regulation
Paper by Philip Napoli: “Revelations about the misuse and insecurity of user data gathered by social media platforms have renewed discussions about how best to characterize property rights in user data. At the same time, revelations about the use of social media platforms to disseminate disinformation and hate speech have prompted debates over the need for government regulation to assure that these platforms serve the public interest. These debates often hinge on whether any of the established rationales for media regulation apply to social media. This article argues that the public resource rationale that has been utilized in traditional media regulation in the United States applies to social media.
The public resource rationale contends that, when a media outlet utilizes a public resource—such as the broadcast spectrum, or public rights of way—the outlet must abide by certain public interest obligations that may infringe upon its First Amendment rights. This article argues that aggregate user data can be conceptualized as a public resource that triggers the application of a public interest regulatory framework to social media sites and other digital platforms that derive their revenue from the gathering, sharing, and monetization of massive aggregations of user data….(More)”.
Internet of Water
About: “Water is the essence of life and vital to the well-being of every person, economy, and ecosystem on the planet. But around the globe and here in the United States, water challenges are mounting as climate change, population growth, and other drivers of water stress increase. Many of these challenges are regional in scope and larger than any one organization (or even state), such as the depletion of multi-state aquifers, basin-scale flooding, or the widespread accumulation of nutrients leading to dead zones. Much of the infrastructure built to address these problems decades ago, including our data infrastructure, is struggling to meet these challenges. Much of our water data exists in paper formats unique to the organization collecting the data. Often, these organizations existed long before the personal computer was created (1975) or the internet became mainstream (mid-1990s). As organizations adopted data infrastructure in the late 1990s, it was with the mindset of “normal infrastructure” at the time. It was built to last for decades, rather than to adapt to rapid technological change.
New water data infrastructure, built on technologies that enable data to flow seamlessly between users and generate information for real-time management, is needed to meet our growing water challenges. Decision-makers need accurate, timely data to understand current conditions, identify sustainability problems, illuminate possible solutions, track progress, and adapt along the way. Stakeholders need easy-to-understand metrics of water conditions so they can make sure managers and policymakers protect the environment and the public’s water supplies. The water community needs to continually improve how it manages this complex resource by using data and communicating information to support decision-making. In short, a sustained effort is required to accelerate the development of open data and information systems to support sustainable water resources management. The Internet of Water (IoW) is designed to be just such an effort….(More)”.
To What Extent Does the EU General Data Protection Regulation (GDPR) Apply to Citizen Scientist-led Health Research with Mobile Devices?
Article by Edward Dove and Jiahong Chen: “In this article, we consider the possible application of the European General Data Protection Regulation (GDPR) to “citizen scientist”-led health research with mobile devices. We argue that the GDPR likely does cover this activity, depending on the specific context and the territorial scope. Remaining open questions that result from our analysis lead us to call for a lex specialis that would provide greater clarity and certainty regarding the processing of health data for research purposes, including by these non-traditional researchers…(More)”.
Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling
Paper by Michele Samorani et al: “Machine learning is often employed in appointment scheduling to identify the patients with the greatest no-show risk, so as to schedule them into overbooked slots, and thereby maximize clinic performance, as measured by a weighted sum of all patients’ waiting time and the provider’s overtime and idle time. However, if the patients with the greatest no-show risk belong to the same demographic group, then that demographic group will be scheduled in overbooked slots disproportionately to the general population. This is problematic because patients scheduled in those slots tend to have a worse service experience than the other patients, as measured by the time they spend in the waiting room. Such negative experience may decrease patients’ engagement and, in turn, further increase no-shows. Motivated by the real-world case of a large specialty clinic whose black patients have a higher no-show probability than non-black patients, we demonstrate that combining machine learning with scheduling optimization causes racial disparity in terms of patient waiting time. Our solution to eliminate this disparity while maintaining the benefits derived from machine learning consists of explicitly including the objective of minimizing racial disparity. We validate our solution method both on simulated data and real-world data, and find that racial disparity can be completely eliminated with no significant increase in scheduling cost when compared to the traditional predictive overbooking framework….(More)”.
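The remedy the abstract describes, folding a fairness objective into the scheduling optimization rather than overbooking purely by predicted no-show risk, can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration (invented patient data, a toy scoring rule), not the authors’ actual formulation: it selects which patients to place in overbooked slots while penalizing the gap in overbooking rates between two demographic groups.

```python
# A toy illustration of adding a disparity penalty to an overbooking decision.
# Patient data, weights, and the scoring rule are hypothetical.
from itertools import combinations

# (patient id, predicted no-show probability, demographic group)
patients = [
    ("p1", 0.62, "A"), ("p2", 0.55, "A"), ("p3", 0.50, "A"),
    ("p4", 0.30, "B"), ("p5", 0.45, "B"), ("p6", 0.25, "B"),
]
OVERBOOKED_SLOTS = 2      # how many patients are double-booked
DISPARITY_WEIGHT = 2.0    # set to 0.0 to recover the purely risk-driven policy

def overbooking_rate(selected, group):
    members = [p for p in patients if p[2] == group]
    chosen = [p for p in selected if p[2] == group]
    return len(chosen) / len(members)

def score(selected):
    # Benefit: overbook the likeliest no-shows (proxy for less provider idle time).
    benefit = sum(p[1] for p in selected)
    # Penalty: gap between groups in how often their members are overbooked.
    disparity = abs(overbooking_rate(selected, "A") - overbooking_rate(selected, "B"))
    return benefit - DISPARITY_WEIGHT * disparity

best = max(combinations(patients, OVERBOOKED_SLOTS), key=score)
print([p[0] for p in best])  # with the penalty: one patient from each group;
                             # with DISPARITY_WEIGHT = 0.0: the two group-A patients
```

With the weight at zero, the two highest-risk patients (both from the same group) are overbooked; with a modest penalty, the selection balances across groups at a small cost in expected no-show coverage, mirroring the paper’s finding that disparity can be removed without a significant increase in scheduling cost.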
Communal Intelligence
A Talk By Seth Lloyd at The Edge: “We haven’t talked about the socialization of intelligence very much. We talked a lot about intelligence as being individual human things, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and communicate in ways that other animals don’t appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I’d be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence….(More)”.
Handbook of Research on Politics in the Computer Age
Book edited by Ashu M. G. Solo: “Technology and particularly the Internet have caused many changes in the realm of politics. Aspects of engineering, computer science, mathematics, or natural science can be applied to politics. Politicians and candidates use their own websites and social network profiles to get their message out. Revolutions in many countries in the Middle East and North Africa have started in large part due to social networking websites such as Facebook and Twitter. Social networking has also played a role in protests and riots in numerous countries. The mainstream media no longer has a monopoly on political commentary as anybody can set up a blog or post a video online. Now, political activists can network together online.
The Handbook of Research on Politics in the Computer Age is a pivotal reference source that serves to increase the understanding of methods for politics in the computer age, the effectiveness of these methods, and tools for analyzing these methods. The book includes research chapters on different aspects of politics involving information technology, engineering, computer science, or mathematics, from 27 researchers at 20 universities and research organizations in Belgium, Brazil, Cape Verde, Egypt, Finland, France, Hungary, Italy, Mexico, Nigeria, Norway, Portugal, and the United States of America. Highlighting topics such as online campaigning and fake news, the book is intended for an audience that includes, but is not limited to, researchers, political and public policy analysts, political scientists, engineers, computer scientists, political campaign managers and staff, politicians and their staff, political operatives, professors, students, and individuals working in the fields of politics, e-politics, e-government, new media and communication studies, and Internet marketing….(More)”.
Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact of Artificial Intelligence on Public Administration
Paper by Matthew M Young, Justin B Bullock, and Jesse D Lecy in Perspectives on Public Management and Governance: “Public administration research has documented a shift in the locus of discretion away from street-level bureaucrats to “systems-level bureaucracies” as a result of new information communication technologies that automate bureaucratic processes, and thus shape access to resources and decisions around enforcement and punishment. Advances in artificial intelligence (AI) are accelerating these trends, potentially altering discretion in public management in exciting and challenging ways. We introduce the concept of “artificial discretion” as a theoretical framework to help public managers consider the impact of AI as they face decisions about whether and how to implement it. We operationalize discretion as the execution of tasks that require nontrivial decisions. Using Salamon’s tools of governance framework, we compare artificial discretion to human discretion as task specificity and environmental complexity vary. We evaluate artificial discretion with the criteria of effectiveness, efficiency, equity, manageability, and political feasibility. Our analysis suggests three principal ways that artificial discretion can improve administrative discretion at the task level: (1) increasing scalability, (2) decreasing cost, and (3) improving quality. At the same time, artificial discretion raises serious concerns with respect to equity, manageability, and political feasibility….(More)”.