Stefaan Verhulst
Paper by Morgan E. Currie and Joan M. Donovan: “The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists.
The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces their own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain.”
Book by Yanni Alexander Loukissas: “In our data-driven society, it is too easy to assume the transparency of data. Instead, Yanni Loukissas argues in All Data Are Local, we should approach data sets with an awareness that data are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them. All data are local. The term data set implies something discrete, complete, and portable, but it is none of those things. Examining a series of data sources important for understanding the state of public life in the United States—Harvard’s Arnold Arboretum, the Digital Public Library of America, UCLA’s Television News Archive, and the real estate marketplace Zillow—Loukissas shows us how to analyze data settings rather than data sets.
Loukissas sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. He then provides a set of practical guidelines to follow. To make his argument, Loukissas employs a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the “myth of digital universalism,” Loukissas reminds us of the meaning-making power of the local….(More)”.
Mechanistic evidence /ˌmɛkəˈnɪstɪk ˈɛvədəns/
Evidence about either the existence or the nature of a causal mechanism connecting a policy variable, X, with a policy outcome, Y; in other words, about the entities and activities mediating the XY relationship (Marchionni and Reijula, 2018).
There has been mounting pressure on policymakers to adopt and expand the concept of evidence-based policy making (EBP).
In 2017, the U.S. Commission on Evidence-Based Policymaking issued a report calling for a future in which “rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy.” The report asserts that modern technology and statistical methods, “combined with transparency and a strong legal framework, create the opportunity to use data for evidence building in ways that were not possible in the past.”
Similarly, the European Commission’s 2015 report on Strengthening Evidence Based Policy Making through Scientific Advice states that policymaking “requires robust evidence, impact assessment and adequate monitoring and evaluation,” emphasizing the notion that “sound scientific evidence is a key element of the policy-making process, and therefore science advice should be embedded at all levels of the European policymaking process.” That same year, the Commission’s Data4Policy program launched a call for contributions to support its research:
“If policy-making is ‘whatever government chooses to do or not to do’ (Th. Dye), then how do governments actually decide? Evidence-based policy-making is not a new answer to this question, but it is constantly challenging both policy-makers and scientists to sharpen their thinking, their tools and their responsiveness.”
Yet, while the importance and value of EBP are well established, the question of how to establish evidence is often answered by referring to randomized controlled trials (RCTs), cohort studies, or case reports. According to Caterina Marchionni and Samuli Reijula, these answers overlook the important concept of mechanistic evidence.
Their paper takes a deeper dive into the differences between statistical and mechanistic evidence:
“It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so.”
The paper further argues that in order to make effective decisions, policymakers must take both statistical and mechanistic evidence into account:
“… whereas statistical studies provide evidence that the policy variable, X, makes a difference to the policy outcome, Y, mechanistic evidence gives information about either the existence or the nature of a causal mechanism connecting the two; in other words, about the entities and activities mediating the XY relationship. Both types of evidence, it is argued, are required to establish causal claims, to design and interpret statistical trials, and to extrapolate experimental findings.”
Ultimately, Marchionni and Reijula take a closer look at why introducing research methods that go beyond RCTs is crucial for evidence-based policymaking:
“The evidence-based policy (EBP) movement urges policymakers to select policies on the basis of the best available evidence that they work. EBP utilizes evidence-ranking schemes to evaluate the quality of evidence in support of a given policy, which typically prioritize meta-analyses and randomized controlled trials (henceforth RCTs) over other evidence-generating methods.”
They go on to explain that mechanistic evidence has been placed “at the bottom of the evidence hierarchies,” while RCTs have been considered the “gold standard.”

However, the paper argues, mechanistic evidence is in fact as important as statistical evidence:
“… evidence-based policy nearly always involves predictions about the effectiveness of an intervention in populations other than those in which it has been tested. Such extrapolative inferences, it is argued, cannot be based exclusively on the statistical evidence produced by methods higher up in the hierarchies.”
Sources and Further Readings:
- Clarke, Brendan, Donald Gillies, Phyllis Illari, Federica Russo, and Jon Williamson. “Mechanisms and the Evidence Hierarchy.” Topoi 33 (2014): 339–360.
- “Evidence-Based Policymaking: What is it? How does it work? What relevance for developing countries?” Overseas Development Institute, 2015.
- Grüne-Yanoff, Till. “Why Behavioural Policy Needs Mechanistic Evidence.” Economics and Philosophy 32, no. 3 (2016).
National Academies: “Scientific research that involves nonscientists contributing to research processes – also known as ‘citizen science’ – supports participants’ learning, engages the public in science, contributes to community scientific literacy, and can serve as a valuable tool to facilitate larger scale research, says a new report from the National Academies of Sciences, Engineering, and Medicine. If one of the goals of a citizen science project is to advance learning, designers should plan for it by defining intended learning outcomes and using evidence-based strategies to reach those outcomes.
“This report affirms that citizen science projects can help participants learn scientific practices and content, but most likely only if the projects are designed to support learning,” says Rajul Pandya, chair of the committee that wrote the report and director, Thriving Earth Exchange, AGU.
The term “citizen science” can be applied to a wide variety of projects that invite nonscientists to engage in doing science with the intended goal of advancing scientific knowledge or application. For example, a citizen science project might engage community members in collecting data to monitor the health of a local stream. As another example, among the oldest continuous organized datasets in the United States are records kept by farmers and agricultural organizations that document the timing of important events, such as sowing, harvests, and pest outbreaks.
Citizen science can support science learning in several ways, the report says. It offers people the opportunity to participate in authentic scientific endeavors, encourages learning through projects conducted in real-world contexts, supports rich social interaction that deepens learning, and engages participants with real data. Citizen science also includes projects that grow out of a community’s desire to address an inequity or advance a priority. For example, the West-Oakland Indicators Project, a community group in Oakland, Calif., self-organizes to collect and analyze air quality data and uses that data to address trucking in and around schools to reduce local children’s exposure to air pollution. When communities can work alongside scientists to advance their priorities, enhanced community science literacy is one possible outcome.
In order to maximize learning outcomes, the report recommends that designers and practitioners of citizen science projects should intentionally build them for learning. This involves knowing the audience; intentionally designing for diversity; engaging stakeholders in the design; supporting multiple kinds of participant engagement; encouraging social interaction; building learning supports into the project; and iteratively improving projects through evaluation and refinement. Engaging stakeholders and participants in design and implementation results in more learning for all participants, which can support other project goals.
The report also lays out a research agenda that can help to build the field of citizen science by filling gaps in the current understanding of how citizen science can support science learning and enhance science education. Researchers should consider three important factors: citizen science extends beyond academia and therefore, evidence for practices that advance learning can be found outside of peer-reviewed literature; research should include attention to practice and link theory to application; and attention must be given to diversity in all research, including ensuring broad participation in the design and implementation of the research. Pursuing new lines of inquiry can help add value to the existing research, make future research more productive, and provide support for effective project implementation….(More)”.
Article by John McKenna: “We’ve all heard about donating blood, but how about donating data?
Chronic non-communicable diseases (NCDs) like diabetes, heart disease and epilepsy are predicted by the World Health Organization to account for 57% of all disease by 2020.

This has led some experts to call NCDs the “greatest challenge to global health”.
Could data provide the answer?
Today over 600,000 patients from around the world share data on more than 2,800 chronic diseases to improve research and treatment of their conditions.
People who join the PatientsLikeMe online community share information on everything from their medication and treatment plans to their emotional struggles.
Many of the participants say that it is hugely beneficial just to know there is someone else out there going through similar experiences.
But through its use of data, the platform also has the potential for far more wide-ranging benefits to help improve the quality of life for patients with chronic conditions.
Give data, get data
PatientsLikeMe is one of a swathe of emerging data platforms in the healthcare sector helping provide a range of tech solutions to health problems, including speeding up the process of clinical trials using Real Time Data Analysis or using blockchain to enable the secure sharing of patient data.
Its philosophy is “give data, get data”.
European Commission: “Today the European Commission published a new study, the eGovernment benchmark report 2018, which demonstrates that the availability and quality of online public services have improved in the EU. Overall there has been significant progress with respect to the efficient use of public information and services online, transparency of government authorities’ operations and users’ control of personal data, cross-border mobility and key enablers, such as the availability of electronic identity cards and other documents.

10 EU countries (Malta, Austria, Sweden, Finland, the Netherlands, Estonia, Lithuania, Latvia, Portugal, Denmark) and Norway are delivering high-quality digital services, with a score above 75% on important events of daily life such as moving, finding a job, starting a business or studying.
Further efforts are notably needed in cross-border mobility and digital identification. So far only 6 EU countries have notified their eID means, which enables their cross-border recognition….(More) (Report)”
Liam Tung at ZDNet: “An AI-led, road-safety pilot program between analytics firm Waycare and Nevada’s transportation agencies has shown it can help cut crashes on one of Las Vegas’s busiest highways.
Waycare struck a deal with Google-owned Waze earlier this year to “enable cities to communicate back with drivers and warn of dangerous roads, hazards, and incidents ahead”. Waze’s crowdsourced data also feeds into Waycare’s traffic management system, offering more data for cities to manage traffic.
Waycare has now wrapped up a year-long pilot with the Regional Transportation Commission of Southern Nevada (RTC), Nevada Highway Patrol (NHP), and the Nevada Department of Transportation (NDOT).
RTC reports that Waycare helped the city reduce the number of primary crashes by 17 percent along Interstate 15 in Las Vegas.
Bloomberg News: “China’s plan to judge each of its 1.3 billion people based on their social behavior is moving a step closer to reality, with Beijing set to adopt a lifelong points program by 2021 that assigns personalized ratings for each resident.
The capital city will pool data from several departments to reward and punish some 22 million citizens based on their actions and reputations by the end of 2020, according to a plan posted on the Beijing municipal government’s website on Monday. Those with better so-called social credit will get “green channel” benefits while those who violate laws will find life more difficult.
The Beijing project will improve blacklist systems so that those deemed untrustworthy will be “unable to move even a single step,” according to the government’s plan. Xinhua reported on the proposal Tuesday, while the report posted on the municipal government’s website is dated July 18.
China has long experimented with systems that grade its citizens, rewarding good behavior with streamlined services while punishing bad actions with restrictions and penalties. Critics say such moves are fraught with risks and could lead to systems that reduce humans to little more than a report card.
Ambitious Plan
Beijing’s efforts represent the most ambitious yet among more than a dozen cities that are moving ahead with similar programs.
Hangzhou rolled out its personal credit system earlier this year, rewarding “pro-social behaviors” such as volunteer work and blood donations while punishing those who violate traffic laws and charge under-the-table fees. By the end of May, people with bad credit in China have been blocked from booking more than 11 million flights and 4 million high-speed train trips, according to the National Development and Reform Commission.
According to the Beijing government’s plan, different agencies will link databases to get a more detailed picture of every resident’s interactions across a swathe of services.
Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review: “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.
But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?
AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
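The notion of teaching systems to ignore characteristics irrelevant to a decision can be illustrated with a minimal sketch (a hypothetical illustration with made-up field names, not the authors’ implementation; note too that simply dropping sensitive fields, sometimes called “fairness through unawareness,” is not sufficient on its own, since other features can act as proxies for them):

```python
# Sketch: strip protected attributes from records before they are
# used as model features. Field names are hypothetical, and this
# "fairness through unawareness" step is only a starting point --
# remaining features (e.g., zip code) can still proxy for the
# attributes that were removed.

PROTECTED_ATTRIBUTES = {"race", "gender", "sexual_orientation", "age"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

applicant = {
    "years_experience": 7,
    "certifications": 3,
    "gender": "F",
    "race": "Black",
}

features = strip_protected(applicant)
print(features)  # {'years_experience': 7, 'certifications': 3}
```

In practice this filtering would sit in front of whatever model scores candidates, and would need to be paired with audits of the remaining features for proxy effects.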
Design for Inclusion
Software development remains the province of males — only about one-quarter of computer scientists in the United States are women — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.
Paper by Marianne Boenink, Lieke van der Scheer, Elisa Garcia and Simone van der Burg in NanoEthics: “Biomedical research policy in recent years has often tried to make such research more ‘translational’, aiming to facilitate the transfer of insights from research and development (R&D) to health care for the benefit of future users. Involving patients in deliberations about and design of biomedical research may increase the quality of R&D and of resulting innovations and thus contribute to translation. However, patient involvement in biomedical research is not an easy feat. This paper discusses the development of a method for involving patients in (translational) biomedical research aiming to address its main challenges.
After reviewing the potential challenges of patient involvement, we formulate three requirements for any method to meaningfully involve patients in (translational) biomedical research. It should enable patients (1) to put forward their experiential knowledge, (2) to develop a rich view of what an envisioned innovation might look like and do, and (3) to connect their experiential knowledge with the envisioned innovation. We then describe how we developed the card-based discussion method ‘Voice of patients’, and discuss to what extent the method, when used in four focus groups, satisfied these requirements. We conclude that the method is quite successful in mobilising patients’ experiential knowledge, in stimulating their imaginaries of the innovation under discussion and to some extent also in connecting these two. More work is needed to translate patients’ considerations into recommendations relevant to researchers’ activities. It also seems wise to broaden the audience for patients’ considerations to other actors working on a specific innovation….(More)”