How Data Can Help in the Fight Against the Opioid Epidemic in the United States


Report by Joshua New: “The United States is in the midst of an opioid epidemic 20 years in the making….

One of the most pernicious obstacles in the fight against the opioid epidemic is that, until relatively recently, it was difficult to measure the epidemic in any comprehensive capacity beyond high-level statistics. A lack of granular data and authorities’ inability to use data to inform response efforts allowed the epidemic to grow to devastating proportions. The maxim “you can’t manage what you can’t measure” has never been so relevant, and this failure to effectively leverage data has undoubtedly cost many lives and caused severe social and economic damage to communities ravaged by opioid addiction, with authorities limited in their ability to fight back.

Many factors contributed to the opioid epidemic, including healthcare providers not fully understanding the potential ramifications of prescribing opioids, socioeconomic conditions that make addiction more likely, and drug distributors turning a blind eye to likely criminal behavior, such as pharmacy workers illegally selling opioids on the black market. Data will not be able to solve these problems, but it can make public health officials and other stakeholders more effective at responding to them. Fortunately, recent efforts to better leverage data in the fight against the opioid epidemic have demonstrated the potential for data to be an invaluable and effective tool to inform decision-making and guide response efforts. Policymakers should aggressively pursue more data-driven strategies to combat the opioid epidemic while learning from past mistakes that helped contribute to the epidemic to prevent similar situations in the future.

The scope of this paper is limited to opportunities to better leverage data to help address problems primarily related to the abuse of prescription opioids, rather than the abuse of illicitly manufactured opioids such as heroin and fentanyl. While these issues may overlap, such as when a person develops an opioid use disorder from prescribed opioids and then seeks heroin when they are unable to obtain more from their doctor, the opportunities to address the abuse of prescription opioids are more clear-cut….(More)”.

Finland’s model in utilising forest data


Report by Matti Valonen et al: “The aim of this study is to depict the Finnish Forest Centre’s Metsään.fi-website’s background, objectives and implementation and to assess its needs for development and future prospects. The Metsään.fi-service included in the Metsään.fi-website is a free e-service for forest owners and corporate actors (companies, associations and service providers) in the forest sector, whose aim is to support active decision-making among forest owners by offering forest resource data and maps on forest properties, by making contacts with the authorities easier through online services and by acting as a platform for offering forest services, among other things.

In addition to the Metsään.fi-service, the website includes open forest data services that offer the users national forest resource data that is not linked with personal information.

Private forests are in a key position as raw material sources for traditional and new forest-based bioeconomy. In addition to wood material, the forests produce non-timber forest products (for example berries and mushrooms), opportunities for recreation and other ecosystem services.

Private forests cover roughly 60 percent of forest land but supply about 80 percent of the domestic wood used by the forest industry. In 2017 the value of forest industry production was 21 billion euros, a fifth of the entire industrial production value in Finland. Forest industry exports in 2017 were worth about 12 billion euros, a fifth of the entire export of goods. The forest sector is therefore important for Finland’s national economy…(More)”.

Citizen Engagement in Energy Efficiency Retrofit of Public Housing Buildings: A Lisbon Case Study


Paper by Catarina Rolim and Ricardo Gomes: “In Portugal, there are about 120 thousand social housing dwellings, and a large share of them are in need of some kind of rehabilitation. Alongside the technical challenge associated with implementing the retrofit measures, there is the challenge of involving the citizens in adopting more energy-conscious behaviors. Within the Sharing Cities project and, specifically, in the case of social housing retrofit, engagement activities with the tenants are being promoted, along with participation from city representatives, decision makers, stakeholders, and others. This paper will present a methodology outlined to evaluate the impact of retrofit measures considering the citizen as a crucial retrofit stakeholder. The approach ranges from technical analysis and data monitoring to activities such as educational and training sessions, interviews, surveys, workshops, public events, and focus groups. These will be conducted during the different stages of project implementation: the definition process, during deployment, and beyond deployment of solutions….(More)”.

Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices


New report by Stefaan Verhulst, Andrew Young, Michelle Winowatan, and Andrew J. Zahuranec: “To address the challenges of our times, we need both new solutions and new ways to develop those solutions. The responsible use of data will be key toward that end. Since pioneering the concept of “data collaboratives” in 2015, The GovLab has studied and experimented with innovative ways to leverage private-sector data to tackle various societal challenges, such as urban mobility, public health, and climate change.

While we have seen an uptick in normative discussions on how data should be shared, little analysis exists of the actual practice. This paper seeks to address that gap by answering the following question: What are the variables and models that determine functional access to private-sector data for public good? In Leveraging Private Data for Public Good: A Descriptive Analysis and Typology of Existing Practices, we describe the emerging universe of data collaboratives and develop a typology of six practice areas. Our goal is to provide insight into current applications to accelerate the creation of new data collaboratives. The report outlines dozens of examples, as well as a set of recommendations to enable more systematic, sustainable, and responsible data collaboration….(More)”

The Colombian Anti-Corruption Referendum: Why It Failed?


Paper by Michael Haman: “The objective of this article is to analyze the results of the anti-corruption referendum in Colombia in 2018. Colombia is a country with a significant corruption problem. More than 99% of the voters who came to the polls voted in favor of the proposals. However, the anti-corruption referendum nonetheless failed because not enough citizens were mobilized to participate. The article addresses the reasons why turnout was very low…

Conclusions: I find that the more transparent a municipality, the higher the percentage of the municipal electorate that voted for proposals in the anti-corruption referendum. Moreover, I find that in municipalities where support for Sergio Fajardo in the presidential election was higher and support for Iván Duque was lower, support for the referendum proposals was higher. Also, turnout was lower in municipalities with higher poverty rates and higher homicide rates…(More)”.

Identifying Citizens’ Needs by Combining Artificial Intelligence (AI) and Collective Intelligence (CI)


Report by Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst: “Around the world, public leaders are seeking new ways to better understand the needs of their citizens and, subsequently, to improve governance and how we solve public problems. The approaches proposed toward changing public engagement tend to focus on leveraging two innovations. The first involves artificial intelligence (AI), which offers unprecedented abilities to quickly process vast quantities of data to deepen insights into public needs. The second is collective intelligence (CI), which provides means for tapping into the “wisdom of the crowd.” Both have strengths and weaknesses, but little is known about how combining the two could address their weaknesses while radically transforming how we meet public demands for more responsive governance.

Today, The GovLab is releasing a new report, Identifying Citizens’ Needs By Combining AI and CI, which seeks to identify and assess how institutions might responsibly experiment in how they engage with citizens by leveraging AI and CI together.

The report, authored by Stefaan G. Verhulst, Andrew J. Zahuranec, and Andrew Young, builds upon an initial examination of the intersection of AI and CI conducted in the context of the MacArthur Foundation Research Network on Opening Governance. …

The report features five in-depth case studies and an overview of eight additional examples from around the world on how AI and CI together can help to: 

  • Anticipate citizens’ needs and expectations through cognitive insights and process automation and pre-empt problems through improved forecasting and anticipation;
  • Analyze large volumes of citizen data and feedback, such as identifying patterns in complaints;
  • Allow public officials to create highly personalized campaigns and services; or
  • Empower government service representatives to deliver relevant actions….(More)”.
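One of the applications listed above, finding patterns in large volumes of citizen complaints, can be illustrated with a minimal sketch. The complaint snippets, stopword list, and frequency-counting approach below are illustrative assumptions, not an example drawn from the report, which does not specify an implementation.

```python
from collections import Counter
import re

# Hypothetical complaint snippets; a real system would ingest 311-style feedback at scale.
complaints = [
    "Streetlight out on Elm Street",
    "Pothole on Elm Street near the school",
    "Trash not collected on Oak Avenue",
    "Another pothole on Oak Avenue",
]

# Crude pattern detection: count recurring terms after dropping common stopwords.
stopwords = {"on", "the", "not", "near", "another", "out"}
tokens = [
    word
    for text in complaints
    for word in re.findall(r"[a-z]+", text.lower())
    if word not in stopwords
]

# The most frequent terms point to recurring issues and locations.
top = Counter(tokens).most_common(3)
print(top)
```

Production systems would replace the keyword counts with clustering or topic modeling, but the shape of the task, aggregating many free-text reports into a handful of recurring patterns, is the same.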

Restricting data’s use: A spectrum of concerns in need of flexible approaches


Dharma Akmon and Susan Jekielek at IASSIST Quarterly: “As researchers consider making their data available to others, they are concerned with the responsible use of data. As a result, they often seek to place restrictions on secondary use. The Research Connections archive at ICPSR makes available the datasets of dozens of studies related to childcare and early education. Of the 103 studies archived to date, 20 have some restrictions on access. While ICPSR’s data access systems were designed primarily to accommodate public use data (i.e. data without disclosure concerns) and potentially disclosive data, our interactions with depositors reveal a more nuanced range of needs for restricting use. Some data present a relatively low risk of threatening participants’ confidentiality, yet the data producers still want to monitor who is accessing the data and how they plan to use them. Other studies contain data with such a high risk of disclosure that their use must be restricted to a virtual data enclave. Still other studies rest on agreements with participants that require continuing oversight of secondary use by data producers, funders, and participants. This paper describes data producers’ range of needs to restrict data access and discusses how systems can better accommodate these needs….(More)”.

Urban Systems Design: From “science for design” to “design in science”


Introduction to Special Issue of Urban Analytics and City Science by Perry PJ Yang and Yoshiki Yamagata: “The direct design of cities is often regarded as impossible, owing to the fluidity, complexity, and uncertainty entailed in urban systems. And yet, we do design our cities, however imperfectly. Cities are objects of our own creation, they are intended landscapes, manageable, experienced and susceptible to analysis (Lynch, 1984). Urban design as a discipline has always focused on “design” in its professional practices. Urban designers tend to ask normative questions about how good city forms are designed, or how a city and its urban spaces ought to be made, thereby problematizing urban form-making and the values entailed. These design questions are analytically distinct from “science”-related research that tends to ask positive questions such as how cities function, or what properties emerge from interactive processes of urban systems. The latter questions require data, analytic techniques, and research methods to generate insight.

This theme issue “Urban Systems Design” is an attempt to outline a research agenda by connecting urban design and systems science, which is grounded in both normative and positive questions. It aims to contribute to the emerging field of urban analytics and city science that is central to this journal. Recent discussions of smart cities inspire urban design, planning and architectural professionals to address questions of how smart cities are shaped and what should be made. What are the impacts of information and communication technologies (ICT) on the questions of how built environments are designed and developed? How would the internet of things (IoT), big data analytics and urban automation influence how humans perceive, experience, use and interact with the urban environment? In short, what are the emerging new urban forms driven by the rapid move to ‘smart cities’?…(More)”.

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter


Paper by Sarah Oates and John Gray: “Reports of Russian interference in U.S. elections have raised grave concerns about the spread of foreign disinformation on social media sites, but there is little detailed analysis that links traditional political communication theory to social media analytics. As a result, it is difficult for researchers and analysts to gauge the nature or level of the threat that is disseminated via social media. This paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018.

This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians and the relative success (or failure) of particular hashtags in spreading those messages effectively. This research uses Mentionmapp, a system co-developed by one of the authors (Gray) that employs network analytics and machine intelligence to identify the behavior of Twitter users as well as generate profiles of users via posting history and connections. This study demonstrates how political communication theory can be used to frame the study of social media; how to relate knowledge of Russian strategic priorities to labels on social media such as Twitter hashtags; and how to test this approach by examining a set of Russian propaganda narratives as they are represented by hashtags. Our research finds that some Twitter users are consistently active across multiple Kremlin-linked hashtags, suggesting that knowledge of these hashtags is an important way to identify Russian propaganda online influencers. More broadly, we suggest that Twitter dichotomies such as bot/human or troll/citizen should be used with caution and analysis should instead address the nuances in Twitter use that reflect varying levels of engagement or even awareness in spreading foreign disinformation online….(More)”.
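The core finding above, that some users are consistently active across multiple Kremlin-linked hashtags, rests on a simple aggregation that can be sketched in a few lines. The post data and usernames below are hypothetical placeholders, not the authors' dataset or Mentionmapp's implementation.

```python
from collections import defaultdict

# Hypothetical (user, hashtag) observations; real data would come from the Twitter API.
posts = [
    ("user_a", "#skripal"), ("user_a", "#mh17"), ("user_a", "#russophobia"),
    ("user_b", "#mh17"),
    ("user_c", "#skripal"), ("user_c", "#Donetsk"),
]

# Collect the distinct tracked hashtags each user posted under.
hashtags_by_user = defaultdict(set)
for user, tag in posts:
    hashtags_by_user[user].add(tag)

# Flag users active across two or more of the tracked hashtags.
cross_active = sorted(
    user for user, tags in hashtags_by_user.items() if len(tags) >= 2
)
print(cross_active)  # ['user_a', 'user_c']
```

Cross-hashtag activity alone is of course not proof of influence operations; in the paper it is one signal combined with posting history and network connections.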

The personification of big data


Paper by Phillip Douglas Stevenson and Christopher Andrew Mattson: “Organizations all over the world, both national and international, gather demographic data so that the progress of nations and peoples can be tracked. This data is often made available to the public in the form of aggregated national-level data or individual responses (microdata). Product designers likewise conduct surveys to better understand their customers and create personas. Personas are archetypes of the individuals who will use, maintain, sell or otherwise be affected by the products created by designers. Personas help designers better understand the person the product is designed for. Unfortunately, the process of collecting customer information and creating personas is often a slow and expensive process.

In this paper, we introduce a new method of creating personas, leveraging publicly available databanks of both aggregated national-level data and information on individuals in the population. A computational persona generator is introduced that creates a population of personas that mirrors a real population in terms of size and statistics. Realistic individual personas are filtered from this population for use in product development…(More)”.
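The idea of a generator whose synthetic population mirrors a real population's statistics can be sketched by sampling persona attributes from aggregate distributions. The attribute categories, proportions, and function names below are illustrative assumptions; the paper's generator uses real demographic databanks and is not specified here.

```python
import random

# Hypothetical aggregate statistics (marginal distributions), standing in for
# public demographic databanks; categories and proportions are illustrative.
age_dist = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
income_dist = {"low": 0.25, "middle": 0.50, "high": 0.25}

def sample_persona(rng):
    """Draw one synthetic persona whose attributes follow the aggregate marginals."""
    age = rng.choices(list(age_dist), weights=list(age_dist.values()))[0]
    income = rng.choices(list(income_dist), weights=list(income_dist.values()))[0]
    return {"age": age, "income": income}

rng = random.Random(0)
population = [sample_persona(rng) for _ in range(10_000)]

# With enough draws, the generated population's statistics approximate the marginals.
share_18_34 = sum(p["age"] == "18-34" for p in population) / len(population)
print(round(share_18_34, 2))
```

A faithful generator would also respect correlations between attributes (e.g. age and income), typically by fitting to microdata rather than sampling each marginal independently as this sketch does.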