User Data as Public Resource: Implications for Social Media Regulation


Paper by Philip Napoli: “Revelations about the misuse and insecurity of user data gathered by social media platforms have renewed discussions about how best to characterize property rights in user data. At the same time, revelations about the use of social media platforms to disseminate disinformation and hate speech have prompted debates over the need for government regulation to assure that these platforms serve the public interest. These debates often hinge on whether any of the established rationales for media regulation apply to social media. This article argues that the public resource rationale that has been utilized in traditional media regulation in the United States applies to social media.

The public resource rationale contends that, when a media outlet utilizes a public resource—such as the broadcast spectrum, or public rights of way—the outlet must abide by certain public interest obligations that may infringe upon its First Amendment rights. This article argues that aggregate user data can be conceptualized as a public resource that triggers the application of a public interest regulatory framework to social media sites and other digital platforms that derive their revenue from the gathering, sharing, and monetization of massive aggregations of user data….(More)”.

Internet of Water


About: “Water is the essence of life and vital to the well-being of every person, economy, and ecosystem on the planet. But around the globe and here in the United States, water challenges are mounting as climate change, population growth, and other drivers of water stress increase. Many of these challenges are regional in scope and larger than any one organization or even state, such as the depletion of multi-state aquifers, basin-scale flooding, or the widespread accumulation of nutrients leading to dead zones. Much of the infrastructure built to address these problems decades ago, including our data infrastructure, is struggling to meet these challenges. Much of our water data exists in paper formats unique to the organization collecting the data. Often, these organizations existed long before the personal computer was created (1975) or the internet became mainstream (mid-1990s). As organizations adopted data infrastructure in the late 1990s, it was with the mindset of “normal infrastructure” at the time: built to last for decades, rather than to adapt to rapid technological change.

New water data infrastructure, with new technologies that enable data to flow seamlessly between users and generate information for real-time management, is needed to meet our growing water challenges. Decision-makers need accurate, timely data to understand current conditions, identify sustainability problems, illuminate possible solutions, track progress, and adapt along the way. Stakeholders need easy-to-understand metrics of water conditions so they can make sure managers and policymakers protect the environment and the public’s water supplies. The water community needs to continually improve how it manages this complex resource by using data and communicating information to support decision-making. In short, a sustained effort is required to accelerate the development of open data and information systems to support sustainable water resources management. The Internet of Water (IoW) is designed to be just such an effort….(More)”.

Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling


Paper by Michele Samorani et al: “Machine learning is often employed in appointment scheduling to identify the patients with the greatest no-show risk and schedule them into overbooked slots, thereby maximizing clinic performance as measured by a weighted sum of all patients’ waiting time and the provider’s overtime and idle time. However, if the patients with the greatest no-show risk belong to the same demographic group, then that demographic group will be scheduled into overbooked slots disproportionately to the general population. This is problematic because patients scheduled in those slots tend to have a worse service experience than other patients, as measured by the time they spend in the waiting room. Such negative experiences may decrease patients’ engagement and, in turn, further increase no-shows. Motivated by the real-world case of a large specialty clinic whose black patients have a higher no-show probability than non-black patients, we demonstrate that combining machine learning with scheduling optimization causes racial disparity in patient waiting time. Our solution, which eliminates this disparity while maintaining the benefits derived from machine learning, consists of explicitly including the minimization of racial disparity as an objective. We validate our solution method on both simulated and real-world data, and find that racial disparity can be completely eliminated with no significant increase in scheduling cost compared to the traditional predictive overbooking framework….(More)”.
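The fix the authors describe amounts to adding a disparity term to the usual overbooking objective. A minimal sketch in Python illustrates the idea; the weights, waiting times, and penalty form below are invented for illustration and are not the paper's formulation:

```python
# Illustrative sketch: scoring candidate schedules with a fairness penalty.
# All numbers and weights are hypothetical, not the paper's model.

def schedule_cost(waits, overtime, idle, w_wait=1.0, w_ot=1.5, w_idle=1.0):
    """Traditional overbooking objective: weighted waiting + overtime + idle time."""
    return w_wait * sum(waits) + w_ot * overtime + w_idle * idle

def racial_disparity(waits, groups):
    """Absolute gap in mean waiting time between the two groups."""
    g1 = [w for w, g in zip(waits, groups) if g == "black"]
    g2 = [w for w, g in zip(waits, groups) if g == "non-black"]
    return abs(sum(g1) / len(g1) - sum(g2) / len(g2))

def fair_cost(waits, groups, overtime, idle, lam=10.0):
    """Augmented objective: traditional cost plus a weighted disparity penalty."""
    return schedule_cost(waits, overtime, idle) + lam * racial_disparity(waits, groups)

# Two candidate schedules for four patients (minutes waited per patient):
groups = ["black", "black", "non-black", "non-black"]
biased = [30, 40, 5, 5]      # overbooked slots all fall on one group
balanced = [20, 15, 20, 26]  # slightly costlier, but evenly distributed

# The plain objective slightly prefers the biased schedule...
assert schedule_cost(biased, 10, 0) < schedule_cost(balanced, 10, 0)
# ...while the disparity-penalized objective prefers the balanced one.
assert fair_cost(balanced, groups, 10, 0) < fair_cost(biased, groups, 10, 0)
```

The penalty weight plays the role of a policy knob: at zero it recovers the traditional predictive overbooking objective, and at larger values it trades a small amount of scheduling cost for equalized waiting times.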

Handbook of Research on Politics in the Computer Age


Book edited by Ashu M. G. Solo: “Technology and particularly the Internet have caused many changes in the realm of politics. Aspects of engineering, computer science, mathematics, or natural science can be applied to politics. Politicians and candidates use their own websites and social network profiles to get their message out. Revolutions in many countries in the Middle East and North Africa have started in large part due to social networking websites such as Facebook and Twitter. Social networking has also played a role in protests and riots in numerous countries. The mainstream media no longer has a monopoly on political commentary as anybody can set up a blog or post a video online. Now, political activists can network together online.

The Handbook of Research on Politics in the Computer Age is a pivotal reference source that serves to increase the understanding of methods for politics in the computer age, the effectiveness of these methods, and tools for analyzing these methods. The book includes research chapters on aspects of politics that intersect with information technology, engineering, computer science, or mathematics, from 27 researchers at 20 universities and research organizations in Belgium, Brazil, Cape Verde, Egypt, Finland, France, Hungary, Italy, Mexico, Nigeria, Norway, Portugal, and the United States of America. Highlighting topics such as online campaigning and fake news, the prospective audience includes, but is not limited to, researchers, political and public policy analysts, political scientists, engineers, computer scientists, political campaign managers and staff, politicians and their staff, political operatives, professors, students, and individuals working in the fields of politics, e-politics, e-government, new media and communication studies, and Internet marketing….(More)”.

Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact of Artificial Intelligence on Public Administration


Paper by Matthew M Young, Justin B Bullock, and Jesse D Lecy in Perspectives on Public Management and Governance: “Public administration research has documented a shift in the locus of discretion away from street-level bureaucrats to “systems-level bureaucracies” as a result of new information communication technologies that automate bureaucratic processes, and thus shape access to resources and decisions around enforcement and punishment. Advances in artificial intelligence (AI) are accelerating these trends, potentially altering discretion in public management in both exciting and challenging ways. We introduce the concept of “artificial discretion” as a theoretical framework to help public managers consider the impact of AI as they face decisions about whether and how to implement it. We operationalize discretion as the execution of tasks that require nontrivial decisions. Using Salamon’s tools of governance framework, we compare artificial discretion to human discretion as task specificity and environmental complexity vary. We evaluate artificial discretion with the criteria of effectiveness, efficiency, equity, manageability, and political feasibility. Our analysis suggests three principal ways that artificial discretion can improve administrative discretion at the task level: (1) increasing scalability, (2) decreasing cost, and (3) improving quality. At the same time, artificial discretion raises serious concerns with respect to equity, manageability, and political feasibility….(More)”.

Benefits of Open Data in Public Health


Paper by P. Huston, VL. Edge and E. Bernier: “Open Data is part of a broad global movement that is not only advancing science and scientific communication but also transforming modern society and how decisions are made. What began with a call for Open Science and the rise of online journals has extended to Open Data, based on the premise that if reports on data are open, then the generated or supporting data should be open as well. There have been a number of advances in Open Data over the last decade, spearheaded largely by governments. A real benefit of Open Data is not simply that single databases can be used more widely; it is that these data can also be leveraged, shared and combined with other data. Open Data facilitates scientific collaboration, enriches research and advances analytical capacity to inform decisions. In the human and environmental health realms, for example, the ability to access and combine diverse data can advance early signal detection, improve analysis and evaluation, inform program and policy development, increase capacity for public participation, enable transparency and improve accountability. However, challenges remain. Enormous resources are needed to make the technological shift to open and interoperable databases accessible with common protocols and terminology. Amongst data generators and users, this shift also involves a cultural change: from regarding databases as restricted intellectual property, to considering data as a common good. There is a need to address legal and ethical considerations in making this shift. Finally, along with efforts to modify infrastructure and address the cultural, legal and ethical issues, it is important to share the information equitably and effectively. While the open, timely, equitable and straightforward sharing of data holds great potential, fully realizing the myriad benefits of Open Data will depend on how effectively these challenges are addressed….(More)”.

Dissecting racial bias in an algorithm used to manage the health of populations


Paper by Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan in Science: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts….(More)”.
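The mechanism at work here, sometimes called label-choice bias, can be reproduced with a toy example: if one group generates less spending per unit of illness because of access barriers, then even a perfect predictor of cost will under-rank that group at equal sickness. All numbers below are invented for illustration and are not the paper's data or model:

```python
# Toy illustration of label-choice bias: predicting cost instead of illness.
# All figures are hypothetical; this is not the study's data or algorithm.

# Unequal access to care: each unit of illness generates less spending
# for Black patients than for White patients (assumed rates, for illustration).
COST_PER_UNIT = {"black": 80, "white": 120}

def risk_score(group, illness):
    """A 'perfect' predictor of the proxy label (cost) -- not of illness."""
    return COST_PER_UNIT[group] * illness

def illness_at_score(group, score):
    """Invert the score: how sick is a patient from this group at a given score?"""
    return score / COST_PER_UNIT[group]

# At the same risk score, the Black patient is sicker than the White patient:
# a score of 360 corresponds to illness 4.5 for a Black patient but 3.0 for
# a White patient, so equally scored patients are not equally sick.
assert illness_at_score("black", 360) > illness_at_score("white", 360)
```

Nothing in the sketch is a modeling error in the usual sense; the score predicts cost exactly. The disparity comes entirely from choosing cost as the proxy for health, which is the paper's central point.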

Waze launches data-sharing integration for cities with Google Cloud


Ryan Johnston at StateScoop: “Thousands of cities across the world that rely on externally sourced traffic data from Waze, the route-finding mobile app, will now have access to the data through the Google Cloud suite of analytics tools instead of a raw feed, making it easier for city transportation and planning officials to reach data-driven decisions.

Waze said Tuesday that the anonymized data is now available through Google Cloud, with the goal of making curbside management, roadway maintenance and transit investment easier for small to midsize cities that don’t have the resources to invest in enterprise data-analytics platforms of their own. Since 2014, Waze — which became a Google subsidiary in 2013 — has submitted traffic data to its partner cities through its “Waze for Cities” program, but those data sets arrived in raw feeds without any built-in analysis or insights.

While some cities have built their own analysis tools to understand the free data from the company, others have struggled to stay afloat in the sea of data, said Dani Simons, Waze’s head of public sector partnerships.

“[What] we’ve realized is providing the data itself isn’t enough for our city partners or for a lot of our city and state partners,” Simons said. “We have been asked over time for better ways to analyze and turn that raw data into something more actionable for our public partners, and that’s why we’re doing this.”

The data will now arrive automatically integrated with Google’s free data analysis tool, BigQuery, and a visualization tool, Data Studio. Cities can use the tools to analyze up to a terabyte of data and store up to 10 gigabytes a month for free, but they can also choose to continue to use in-house analysis tools, Simons said. 
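As a sketch of what such analysis might look like once a city's feed lands in BigQuery, the snippet below counts traffic-jam alerts per street. The table name, field names, and alert types are assumptions for illustration; the actual Waze for Cities schema is not documented here. A pure-Python stand-in performs the same aggregation locally:

```python
# Hypothetical sketch of a city analyzing its Waze alert feed.
# Table, column, and alert-type names are invented; the real schema may differ.
from collections import Counter

# On the BigQuery side, the query might look roughly like this:
JAM_QUERY = """
SELECT street, COUNT(*) AS jam_alerts
FROM `city_dataset.waze_alerts`
WHERE type = 'JAM'
GROUP BY street
ORDER BY jam_alerts DESC
"""

def jams_per_street(alerts):
    """Local equivalent: count 'JAM' alerts by street, most congested first."""
    counts = Counter(a["street"] for a in alerts if a["type"] == "JAM")
    return counts.most_common()

alerts = [
    {"type": "JAM", "street": "Main St"},
    {"type": "JAM", "street": "Main St"},
    {"type": "ACCIDENT", "street": "Main St"},
    {"type": "JAM", "street": "2nd Ave"},
]
assert jams_per_street(alerts) == [("Main St", 2), ("2nd Ave", 1)]
```

The point of the integration is that a planning office gets this kind of answer from a SQL query and a dashboard rather than by building its own ingestion pipeline around the raw feed.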

The integration was also designed with input from Waze’s top partner cities, including Los Angeles; Seattle; and San Jose, California. One of Waze’s private sector partners, Genesis Pulse, which designs software for emergency responders, reported that Waze users identified 40 percent of roadside accidents an average of 4.5 minutes before those incidents were reported to 911 or public safety.

The integration is Waze’s attempt at solving two of the biggest data problems cities have today, Simons told StateScoop. For some cities in the U.S., Waze is one of several private companies sharing transit data with them. Other cities are drowning in data from traffic sensors, city-owned fleets or private mobility companies….(More)”.

From Transactions Data to Economic Statistics: Constructing Real-Time, High-Frequency, Geographic Measures of Consumer Spending


Paper by Aditya Aladangady et al: “Access to timely information on consumer spending is important to economic policymakers. The Census Bureau’s monthly retail trade survey is a primary source for monitoring consumer spending nationally, but it is not well suited to study localized or short-lived economic shocks. Moreover, lags in the publication of the Census estimates and subsequent, sometimes large, revisions diminish its usefulness for real-time analysis. Expanding the Census survey to include higher frequencies and subnational detail would be costly and would add substantially to respondent burden. We take an alternative approach to fill these information gaps. Using anonymized transactions data from a large electronic payments technology company, we create daily estimates of retail spending at detailed geographies. Our daily estimates are available only a few days after the transactions occur, and the historical time series are available from 2010 to the present. When aggregated to the national level, the pattern of monthly growth rates is similar to the official Census statistics. We discuss two applications of these new data for economic analysis: First, we describe how our monthly spending estimates are useful for real-time monitoring of aggregate spending, especially during the government shutdown in 2019, when Census data were delayed and concerns about the economy spiked. Second, we show how the geographic detail allowed us to quantify in real time the spending effects of Hurricanes Harvey and Irma in 2017….(More)”.
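The core construction, turning transaction records into daily, geographic spending estimates, reduces to a group-by-and-sum over date and location. A minimal sketch, with invented records rather than real payments data (the paper's actual aggregation and adjustment steps are more involved):

```python
# Minimal sketch of building daily, county-level spending estimates from
# anonymized transaction records. The records below are invented.
from collections import defaultdict

def daily_spending(transactions):
    """Aggregate transaction amounts by (date, county)."""
    totals = defaultdict(float)
    for t in transactions:
        totals[(t["date"], t["county"])] += t["amount"]
    return dict(totals)

transactions = [
    {"date": "2017-08-25", "county": "Harris, TX", "amount": 40.0},
    {"date": "2017-08-25", "county": "Harris, TX", "amount": 12.5},
    {"date": "2017-08-25", "county": "Travis, TX", "amount": 30.0},
]

# A Harvey-era style example: the Harris County estimate for the day
# combines both of that county's transactions.
assert daily_spending(transactions)[("2017-08-25", "Harris, TX")] == 52.5
```

Because the aggregation runs over the full transaction stream, the same code yields estimates at any geography where enough transactions exist, which is what makes the localized, real-time applications the authors describe possible.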

Toolkit to Help Community Leaders Drive Sustainable, Inclusive Growth


The Mastercard Center for Inclusive Growth: “… is unveiling a groundbreaking suite of tools that will provide local leaders with timely data-driven insights on the current state of and potential for inclusive growth in their communities. The announcement comes as private and public sector leaders gather in Washington for the inaugural Global Inclusive Growth Summit.

For the first time, the new Inclusive Growth Toolkit brings together a clear, simple view of social and economic growth in underserved communities across the U.S., at the census-tract level. It was created in response to growing demand from community leaders for more evidence-based insights to help them steer impact investment dollars to locally led economic development initiatives, unlock the potential of neighborhoods, and improve quality of life for all.

The initial design of the toolkit is focused on driving sustainable growth for the 37+ million people living in the 8,700+ qualified opportunity zones (QOZs) throughout the United States. This comprehensive picture reveals that neighborhoods can look very different and may require different types of interventions to achieve successful and sustainable growth.

The Inclusive Growth Toolkit includes:

  • The Inclusive Growth Score – an interactive online map where users can view measures of inclusion and growth and then download a PDF Scorecard for any of the QOZs at census tract level.

  • A deep-dive analytics consultancy service that provides community leaders with customized insights to inform policy decisions, prospectus development, and impact investor discussions….(More)”.