How public should science be?


Discussion Report by Edel, A., and Kübler: “Since the outbreak of the COVID-19 pandemic, the question of what role science should play in political discourse has moved into the focus of public interest with unprecedented vehemence. In addition to governments directly consulting individual virologists or (epidemiological) research institutes, major scientific institutions such as the German National Academy of Sciences Leopoldina and the presidents of four non-university research organisations have actively participated in the discussion by providing recommendations. More than ever before, scientific problem descriptions, data and evaluations are influencing political measures. It seems as if the relationship between science, politics and the public is currently being reassessed.

The current crisis has not created a new phenomenon but has only reinforced a trend of mutual reliance between science, politics and the public that has been observed for some time. Decision-makers in politics and business were already looking for ways to better substantiate and legitimise their decisions through external scientific expertise when faced with major societal challenges, for example when dealing with increasing immigration or climate protection, when preparing far-reaching reforms (e.g. of the labour market or the pension system), or during economic crises. Research is also held in high esteem within society. A special edition of the ‘Science Barometer’ survey demonstrated increased trust in science during the current COVID-19 pandemic. Conversely, scientists have always been and continue to be active in the public sphere. For some time now, research experts have frequently been guests on talk shows. Authors from the field of science often write opinion pieces and guest contributions in daily newspapers and magazines. However, this role of research is by no means uncontroversial…(More)”.

Scholarly publishing needs regulation


Essay by Jean-Claude Burgelman: “The world of scientific communication has changed significantly over the past 12 months. Understandably, the amazing mobilisation of research and scholarly publishing in an effort to mitigate the effects of Covid-19 and find a vaccine has overshadowed everything else. But two other less-noticed events could also have profound implications for the industry and the researchers who rely on it.

On 10 January 2020, Taylor and Francis announced its acquisition of one of the most innovative small open-access publishers, F1000 Research. A year later, on 5 January 2021, another of the big commercial scholarly publishers, Wiley, paid nearly $300 million for Hindawi, a significant open-access publisher in London.

These acquisitions come alongside rapid change in publishers’ functions and business models. Scientific publishing is no longer only about publishing articles. It’s a knowledge industry—and it’s increasingly clear it needs to be regulated like one.

The two giant incumbents, Springer Nature and Elsevier, are already a long way down the road to open access, and have built up impressive in-house capacity. But Wiley, and Taylor and Francis, had not. That’s why they decided to buy young open-access publishers. Buying up a smaller, innovative competitor is a well-established way for an incumbent in any industry to expand its reach, gain the ability to do new things and reinvent its business model—it’s why Facebook bought WhatsApp and Instagram, for example.

New regulatory approach

To understand why this dynamic demands a new regulatory approach in scientific publishing, we need to set such acquisitions alongside a broader perspective of the business’s transformation into a knowledge industry. 

Monopolies, cartels and oligopolies in any industry are a cause for concern. By reducing competition, they stifle innovation and push up prices. But for science, the implications of such a course are particularly worrying. 

Science is a common good. Its products—and especially its spillovers, the insights and applications that cannot be monopolised—are vital to our knowledge societies. This means that having four companies control the worldwide production of car tyres, as they do, has very different implications to an oligopoly in the distribution of scientific outputs. The latter situation would give the incumbents a tight grip on the supply of knowledge.

Scientific publishing is not yet a monopoly, but Europe at least is witnessing the emergence of an oligopoly, in the shape of Elsevier, Springer Nature, Wiley, and Taylor and Francis. The past year’s acquisitions have left only two significant independent players in open-access publishing—Frontiers and MDPI, both based in Switzerland….(More)”.

An Open Data Team Experiments with a New Way to Tell City Stories


Article by Sean Finnan: “Can you see me?” says Mark Linnane, over Zoom, as he walks around a plastic structure on the floor of an office at Maynooth University. “That gives you some sense of the size of it. It’s 3.5 metres by 2.”

Linnane trails his laptop’s webcam over the surface of the off-white 3D model, giving a bird’s-eye view of tens of thousands of tiny buildings, the trails of roads and the clear pathway of the Liffey.

This replica of the heart of the city from Phoenix Park to Dublin Port was created to scale by the university’s Building City Dashboards team, using data from Ordnance Survey Ireland.

In the five years since they started to grapple with the question of how to present data about the city in an engaging and accessible way, the team has experimented with virtual reality and augmented reality – and most recently, with this new form of mapping, which blends the Lego-like miniature of Dublin’s centre with changeable data projected onto it.

This could really come into its own as a public exhibit if they start to tell meaningful data-driven and empirical stories, says Linnane, a digital exhibition developer at Maynooth University.

Stories that are “relevant in terms of the everyday daily lives of people who will be coming to see it”, he says.

Layers of Meaning

Getting the projector that throws the visualisations onto the model to work right was Linnane’s job, he says.

He had to mesh the Ordnance Survey data with other datasets that showed building heights, for example. “Every single building down to the sheds in someone’s garden have a unique identifier,” says Linnane.
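
A minimal sketch of that kind of merge, assuming hypothetical CSV exports keyed by each building’s unique identifier (the file names and field names below are invented for illustration, not the project’s actual pipeline):

```python
# A minimal sketch, not the project's actual code: join 2D building
# footprints with a separate building-heights dataset via each building's
# unique identifier, yielding the records needed to extrude a 3D model.
# File names and field names are illustrative assumptions.
import csv

def load_heights(path):
    """Map each building's unique ID to its height in metres."""
    with open(path, newline="") as f:
        return {row["building_id"]: float(row["height_m"])
                for row in csv.DictReader(f)}

def merge_footprints_with_heights(footprints_path, heights_path):
    heights = load_heights(heights_path)
    merged = []
    with open(footprints_path, newline="") as f:
        for row in csv.DictReader(f):
            height = heights.get(row["building_id"])
            if height is not None:  # skip footprints with no height record
                merged.append({"id": row["building_id"],
                               "footprint_wkt": row["geometry"],
                               "height_m": height})
    return merged

# e.g. merge_footprints_with_heights("footprints.csv", "heights.csv")
```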

Projectors are built to project onto flat surfaces, not 3D models, so that had to be finessed, too, he says. “Every step on the way was a new development. There wasn’t really a process there before.”

The printed 3D model shows 7km by 4km of Dublin and 122,355 structures, says Linnane. That includes bigger buildings but also small outbuildings, railway platforms, public toilets and glasshouses – all mocked up and serving as a canvas for a kaleidoscope of data.

“We’re just projecting data on to it and seeing what’s going on with that,” says Rob Kitchin, principal investigator at Maynooth University’s Programmable City project….(More)”

Image of model courtesy of Mark Linnane.

When FOIA Goes to Court: 20 Years of Freedom of Information Act Litigation by News Organizations and Reporters


Report by The FOIA Project: “The news media are powerful players in the world of government transparency and public accountability. One important tool for ensuring public accountability is invoking the transparency mandates provided by the Freedom of Information Act (FOIA). In 2020, news organizations and individual reporters filed 122 different FOIA suits to compel disclosure of federal government records—more than any year on record, according to federal court data back to 2001 analyzed by the FOIA Project.

In fact, the media alone filed a total of 386 FOIA cases during the four years of the Trump Administration, from 2017 through 2020. This is greater than the total of 311 FOIA media cases filed during the sixteen years of the Bush and Obama Administrations combined. Moreover, many of these FOIA cases were the first ever filed by the members of the news media bringing them. Almost as many new FOIA litigators filed their first case in court in the past four years—178 from 2017 to 2020—as in the years 2001 to 2016, when 196 FOIA litigators filed their first case. Reporters made up the majority of these first-time litigators: during the past four years, more than four out of five were individual reporters. The ranks of FOIA litigators thus expanded considerably during the Trump Administration, with more reporters, either alone or with their news organizations, challenging agencies in court for failing to provide the records they seek.

Using the FOIA Project’s unique dataset of FOIA cases filed in federal court, this report provides unprecedented insight into the rapid growth of media lawsuits designed to make the government more transparent and accountable to the public. The complete, updated list of news media cases, along with the names of the organizations and reporters who filed these suits, is available on the News Media List at FOIAProject.org. Figure 1 shows the total number of FOIA cases filed by the news media each year. Counts are available in Appendix Table 1 at the end of this report….(More)”.

Figure 1. Freedom of Information Act (FOIA) Cases Filed by News Organizations and Reporters in Federal Court, 2001–2020.

Can open data increase younger generations’ trust in democratic institutions? A study in the European Union


Paper by Nicolás Gonzálvez-Gallego and Laura Nieto-Torrejón: “Scholars and policy makers are paying increasing attention to how young people engage in politics and how much confidence they place in the current democratic system. In the context of a global trust crisis in the European Union, this paper examines whether open government data, a promising governance strategy, may help to boost Millennials’ and Generation Z’s trust in public institutions and satisfaction with public outcomes. First, results from our preliminary analysis challenge some popular beliefs by revealing that younger generations tend to trust their institutions notably more than other European citizens do. In addition, our findings show that open government data is a trust-enabler for Millennials and Generation Z, not only through a direct link between the two, but also through the mediating role of citizens’ satisfaction. Accordingly, public officers are encouraged to spread the implementation of open data strategies as a way to improve younger generations’ attachment to democratic institutions….(More)”.
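
The mediation logic behind that finding can be illustrated generically. The toy sketch below uses simulated data (not the paper’s dataset, model, or code) to show how an effect of open data (X) on trust (Y) decomposes into a direct path plus an indirect path through satisfaction (M), estimated with the standard product-of-coefficients approach:

```python
# Generic mediation illustration on simulated data; all coefficients are
# invented and unrelated to the paper's survey results.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                      # open-data exposure (simulated)
m = 0.5 * x + rng.normal(size=n)            # satisfaction, partly driven by x
y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # trust: direct + mediated paths

def slopes(target, *predictors):
    """OLS slope coefficients of target on predictors (intercept included)."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

a = slopes(m, x)[0]           # path: x -> satisfaction
direct, b = slopes(y, x, m)   # direct x -> trust, and satisfaction -> trust
print(f"direct effect ~ {direct:.2f}, indirect effect (a*b) ~ {a * b:.2f}")
```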

Lawmakers are trying to create a database with free access to court records. Judges are fighting against it.


Ann Marimow in the Washington Post: “Leaders of the federal judiciary are working to block bipartisan legislation designed to create a national database of court records that would provide free access to case documents.

Backers of the bill, who are pressing for a House vote in the coming days, envision a streamlined, user-friendly system that would allow citizens to search for court documents and dockets without having to pay. Under the current system, users pay 10 cents per page to view the public records through the service known as PACER, an acronym for Public Access to Court Electronic Records.

“Everyone wants to have a system that is technologically first class and free,” said Rep. Hank Johnson (D-Ga.), a sponsor of the legislation with Rep. Douglas A. Collins (R-Ga.).

A modern system, he said, “is more efficient and brings more transparency into the equation and is easier on the pocketbooks of regular people.”…(More)”.

Open Data Inventory 2020


Report by Open Data Watch: “The 2020/21 Open Data Inventory (ODIN) is the fifth edition of the index compiled by Open Data Watch. ODIN 2020/21 assesses the coverage and openness of official statistics in 187 countries, an increase of 9 countries over ODIN 2018/19. The year 2020 was a challenging one for the world as countries grappled with the COVID-19 pandemic. Nonetheless, and despite the pandemic’s negative impact on the capacity of statistics producers, 2020 saw great progress in open data.

However, the news on data this year isn’t all good. Countries in every region still struggle to publish gender data and many of the same countries are unable to provide sex-disaggregated data on the COVID-19 pandemic. In addition, low-income countries continue to need more support with capacity building and financial resources to overcome the barriers to publishing open data.

ODIN is an evaluation of the coverage and openness of data provided on the websites maintained by national statistical offices (NSOs) and on any official government website accessible from the NSO site. The overall ODIN score indicates how complete and open an NSO’s data offerings are. It comprises a coverage subscore and an openness subscore, with openness measured against standards set by the Open Definition and the Open Data Charter. ODIN 2020/21 includes 22 data categories, grouped under social, economic and financial, and environmental statistics. ODIN scores range from 0 to 100, with 100 representing the best performance on open data… The full report will be released in February 2021….(More)”.
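
As a rough illustration of how such a composite index can be assembled (the per-category averaging and the equal weighting of the two subscores below are assumptions made purely for illustration; ODIN’s published methodology defines its own elements and weights):

```python
# Illustrative-only composite scoring in the style of a coverage/openness
# index; NOT the actual ODIN methodology or weights.
def composite_scores(categories):
    """categories: list of dicts with 'coverage' and 'openness' in [0, 1]."""
    n = len(categories)
    coverage = 100 * sum(c["coverage"] for c in categories) / n
    openness = 100 * sum(c["openness"] for c in categories) / n
    overall = (coverage + openness) / 2  # assumed equal weighting
    return {"coverage": coverage, "openness": openness, "overall": overall}

# Example with 22 data categories, as in ODIN 2020/21:
example = [{"coverage": 0.8, "openness": 0.6}] * 22
print(composite_scores(example))  # -> coverage ≈ 80, openness ≈ 60, overall ≈ 70
```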

Open government data, uncertainty and coronavirus: An infodemiological case study


Paper by Nikolaos Yiannakoulias, Catherine E. Slavik, Shelby L. Sturrock and J. Connor Darlington: “Governments around the world have made data on COVID-19 testing, case numbers, hospitalizations and deaths openly available, and a breadth of researchers, media sources and data scientists have curated and used these data to inform the public about the state of the coronavirus pandemic. However, it is unclear whether all the data being released convey anything useful beyond the reputational benefits to governments wishing to appear open and transparent. In this analysis we use Ontario, Canada as a case study to assess the value of publicly available SARS-CoV-2 positive case numbers. Using a combination of real data and simulations, we find that daily publicly available test results probably contain considerable error about individual risk (measured as the proportion of tests that are positive, population-based incidence and prevalence of active cases) and that short-term variations are very unlikely to provide useful information for any plausible decision making on the part of individual citizens. Open government data can increase the transparency and accountability of government; however, it is essential that all publication, use and re-use of these data highlight their weaknesses to ensure that the public is properly informed about the uncertainty associated with SARS-CoV-2 information….(More)”
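
A toy simulation in the spirit of that argument (parameter values are illustrative assumptions, not the paper’s code or Ontario’s data): even with the underlying positive rate held perfectly constant, daily test positivity moves from sampling noise alone, so short-term swings carry little decision-relevant information for individuals.

```python
# Simulate a week of test results with a constant true positive rate to
# show how much observed daily positivity fluctuates by chance alone.
import random

random.seed(42)
TRUE_POSITIVE_RATE = 0.05  # assumed constant underlying rate
DAILY_TESTS = 2000         # assumed number of tests performed per day

for day in range(1, 8):
    positives = sum(random.random() < TRUE_POSITIVE_RATE
                    for _ in range(DAILY_TESTS))
    print(f"day {day}: {positives:3d} positives, "
          f"positivity {positives / DAILY_TESTS:.3f}")
# Positivity varies noticeably from day to day even though the true rate
# never changed.
```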

Responsible Data Re-Use for COVID19


“The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, with support from the Henry Luce Foundation, today released guidance to inform decision-making in the responsible re-use of data — re-purposing data for a use other than that for which it was originally intended — to address COVID-19. The findings, recommendations, and a new Responsible Data Re-Use framework stem from The Data Assembly initiative in New York City. An effort to solicit diverse, actionable public input on data re-use for crisis response in the United States, the Data Assembly brought together New York City-based stakeholders from government, the private sector, civic rights and advocacy organizations, and the general public to deliberate on innovative, though potentially risky, uses of data to inform crisis response in New York City. The findings and guidance from the initiative will inform policymaking and practice regarding data re-use in New York City, as well as free data literacy training offerings.

The Data Assembly’s Responsible Data Re-Use Framework provides clarity on a major element of the ongoing crisis. Though leaders throughout the world have relied on data to reduce uncertainty and make better decisions, expectations around the use and sharing of siloed data assets have remained unclear. This summer, along with the New York Public Library and Brooklyn Public Library, The GovLab co-hosted four months of remote deliberations with New York-based civil rights organizations, key data holders, and policymakers. Today’s release is a product of these discussions, showing how New Yorkers and their leaders think about the opportunities and risks involved in the data-driven response to COVID-19….(More)”

See: The Data Assembly Synthesis Report by Andrew Young, Stefaan G. Verhulst, Nadiya Safonova, and Andrew J. Zahuranec.

Leveraging Open Data with a National Open Computing Strategy


Policy Brief by Lara Mangravite and John Wilbanks: “Open data mandates and investments in public data resources, such as the Human Genome Project or the U.S. National Oceanic and Atmospheric Administration Data Discovery Portal, have provided essential data sets at a scale not possible without government support. By responsibly sharing data for wide reuse, federal policy can spur innovation inside the academy and in citizen science communities. These approaches are enabled by private-sector advances in cloud computing services and the government has benefited from innovation in this domain. However, the use of commercial products to manage the storage of and access to public data resources poses several challenges.

First, too many cloud computing systems fail to properly secure data against breaches, improperly share copies of data with other vendors, or use data to add to their own secretive and proprietary models. As a result, the public does not trust technology companies to responsibly manage public data—particularly private data of individual citizens. These fears are exacerbated by the market power of the major cloud computing providers, which may limit the ability of individuals or institutions to negotiate appropriate terms. This impacts the willingness of U.S. citizens to have their personal information included within these databases.

Second, open data solutions are springing up across multiple sectors without coordination. The federal government is funding a series of independent programs that are working to solve the same problem, leading to a costly duplication of effort across programs.

Third and most importantly, the high costs of data storage, transfer, and analysis preclude many academics, scientists, and researchers from taking advantage of governmental open data resources. Cloud computing has radically lowered the costs of high-performance computing, but it is still not free. The cost of building the wrong model at the wrong time can quickly run into tens of thousands of dollars.

Scarce resources mean that many academic data scientists are unable or unwilling to spend their limited funds to reuse data in exploratory analyses outside their narrow projects. And citizen scientists must use personal funds, which are especially scarce in communities traditionally underrepresented in research. The vast majority of public data made available through existing open science policy is therefore left unused, either as reference material or as “foreground” for new hypotheses and discoveries…. The Solution: Public Cloud Computing…(More)”.