Stefaan Verhulst
Alice Meadows at the Scholarly Kitchen: “In this interview, Joris van Rossum (Director of Special Projects, Digital Science), author of Blockchain for Research, and Martijn Roelandse (Head of Publishing Innovation, Springer Nature) discuss blockchain in scholarly communications, including the recently launched Peer Review Blockchain initiative….
How would you describe blockchain in one sentence?
Joris: Blockchain is a technology for decentralized, self-regulating data which can be managed and organized in a revolutionary new way: open, permanent, verified and shared, without the need for a central authority.
How does it work (in layman’s language!)?
Joris: In a regular database you need a gatekeeper to ensure that whatever is stored in it (financial transactions, but this could be anything) is valid. However, with blockchain, trust is not created by means of a curator, but through consensus mechanisms and cryptographic techniques. Consensus mechanisms clearly define what new information is allowed to be added to the datastore. With the help of a technology called hashing, it is not possible to change any existing data without this being detected by others. And through cryptography, the database can be shared without real identities being revealed. So blockchain technology removes the need for a middleman.
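The tamper-evidence Joris attributes to hashing can be illustrated with a toy hash chain, a sketch for intuition only, not how any production blockchain is implemented:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny chain of three records.
records = [{"tx": "A pays B"}, {"tx": "B pays C"}, {"tx": "C pays A"}]
chain = []
prev = "0" * 64  # genesis value
for rec in records:
    h = block_hash(rec, prev)
    chain.append({"record": rec, "prev_hash": prev, "hash": h})
    prev = h

def verify(chain) -> bool:
    """Recompute every hash; any tampering breaks the links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))  # True on the untampered chain
chain[1]["record"]["tx"] = "B pays Mallory"
print(verify(chain))  # False: the change is detected
```

Because each block's hash incorporates the previous one, altering any record invalidates every hash after it, which is what lets participants detect changes without a gatekeeper.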
How is this relevant to scholarly communication?
Joris: It’s very relevant. We’ve explored the possibilities and initiatives in a report published by Digital Science. The blockchain could be applied on several levels, which is reflected in a number of initiatives announced recently. For example, a cryptocurrency for science could be developed. This ‘bitcoin for science’ could introduce a monetary reward scheme to researchers, such as for peer review. Another relevant area, specifically for publishers, is digital rights management. The potential for this was picked up by this blog at a very early stage. Blockchain also allows publishers to easily integrate micropayments, thereby creating a potentially interesting business model alongside open access and subscriptions.
Moreover, blockchain as a datastore with no central owner where information can be stored pseudonymously could support the creation of a shared and authoritative database of scientific events. Here traditional activities such as publications and citations could be stored, along with currently opaque and unrecognized activities, such as peer review. A data store incorporating all scientific events would make science more transparent and reproducible, and allow for more comprehensive and reliable metrics….
How do you see developments in the industry regarding blockchain?
Joris: In the last couple of months we’ve seen the launch of many interesting initiatives, for example scienceroot.com, Pluto.network, and orvium.io. These are all ambitious projects incorporating many of the potential applications of blockchain in the industry, and to an extent aim to disrupt the current ecosystem. Recently artifacts.ai was announced, an interesting initiative that aims to allow researchers to permanently document every stage of the research process. However, we believe that traditional players, and not least publishers, should also look at how services to researchers can be improved using blockchain technology. There are challenges (e.g. around reproducibility and peer review) but that does not necessarily mean the entire ecosystem needs to be overhauled. In fact, in academic publishing we have a good track record of incorporating new technologies and using them to improve our role in scholarly communication. In other words, we should fix the system, not break it!
What is the Peer Review Blockchain initiative, and why did you join?
Martijn: The problems of research reproducibility, recognition of reviewers, and the rising burden of the review process as research volumes increase each year have created a challenging landscape for scholarly communications. There is an urgent need for change to tackle these problems, which is why we joined this initiative: to take a step towards a fairer and more transparent ecosystem for peer review. The initiative aims to look at practical solutions that leverage the distributed registry and smart contract elements of blockchain technologies. Each of the parties can deposit peer review activity in the blockchain — depending on peer review type, either partially or fully encrypted — and subsequent activity is also deposited in the reviewer’s ORCID profile. These business transactions — depositing peer review activity against person x — will be verifiable and auditable, thereby increasing transparency and reducing the risk of manipulation. Trust will increase through the shared processes and recordkeeping we will set up with other publishers.
A separate trend we see is the broadening scope of research evaluation, which has prompted researchers to seek (more) recognition for their peer review work beyond citations and altmetrics. At a later stage, new applications could be built on top of the peer review blockchain….(More)”.
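As a rough sketch of what “depositing peer review activity against person x” could look like as data, the snippet below appends review events to a shared ledger, storing only a hash of the review when the review type calls for full encryption. All field names, and the use of a plain list for the ledger, are illustrative assumptions, not the initiative’s actual schema:

```python
import hashlib
import time

def deposit_review(ledger: list, orcid: str, manuscript_id: str,
                   review_text: str, fully_encrypted: bool) -> dict:
    """Append a peer-review event to a shared ledger.

    Only a hash of the review is stored when the review type calls for
    full encryption; otherwise the text travels alongside the hash.
    (Hypothetical schema -- for illustration only.)
    """
    entry = {
        "orcid": orcid,  # links the event to the reviewer's profile
        "manuscript": manuscript_id,
        "review_hash": hashlib.sha256(review_text.encode()).hexdigest(),
        "review_text": None if fully_encrypted else review_text,
        "timestamp": time.time(),
    }
    ledger.append(entry)
    return entry

ledger = []
deposit_review(ledger, "0000-0002-1825-0097", "MS-1234",
               "Sound methods; revise the discussion.", fully_encrypted=True)
```

Because the hash is always deposited, a reviewer can later prove they wrote a given review (by revealing the text that hashes to it) without the review being public in the meantime.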
Springwise: “We have already seen how technology can be harnessed to help facilitate charitable and environmental efforts. For example, the recycling organization that helps businesses rehome unwanted goods, donating money to charity while also helping those businesses be more economical. Another example in which technology has been used to raise awareness is the charity chatbot, which teaches users about women’s daily journey to find water in Ethiopia.
JoodLife is a start-up that aims to harness technology to support voluntary efforts in Jordan.
The application works as a social platform to connect volunteers and donors in order to facilitate charity work. Donors can register their donations via the app, and then all the available grants are displayed. The grants can be searched for on the app, and users can specify the area they wish to search. The donor and the volunteer can then agree on a mechanism by which to transfer the grant, at which point it will no longer be shown in the app’s search results. The app aims to serve as a link between donors and volunteers to save both parties time and effort. This makes it much easier to make monetary and material donations. The social aspect of the app also increases solidarity between charity workers and makes it much simpler to distribute roles in the most efficient way….(More)”.
DrexelNow: “…More than 40 percent of Philly nonprofit organizations operate on margins of zero or less, and fewer can be considered financially strong. With more than half of Philly’s nonprofits operating on a slim-to-none budget with limited support staff – one Drexel University researcher sought to help streamline their fundraising process by giving them easy access to data from the Internal Revenue Service and the U.S. Census. His goal: Create a tool that makes information about nonprofit organizations, and the communities they’re striving to help, more accessible to likeminded charities and the philanthropic organizations that seek to fund them.
When the IRS recently released millions of records on the finances and operations of nonprofit organizations in a format that can be downloaded and analyzed, it was expected that this would usher in a new era of transparency and innovation for the nonprofit sector. Instead, many technical issues made the data virtually unusable by nonprofit organizations.
Neville Vakharia, an assistant professor and research director in Drexel’s graduate Arts Administration program in the Westphal College of Media Arts & Design, tackled this issue by creating ImpactView Philadelphia, an online tool and resource that uses the publicly available data on nonprofit organizations to present an easy-to-access snapshot of Philadelphia’s nonprofit ecosystem.
Vakharia combined the publicly available data from the IRS with the most recent American Community Survey data released by the U.S. Census Bureau. These data were combined with a map of Philadelphia to create a visual database easily searchable by organization, address or zip code. Once an organization is selected, the analysis tools allow the user to see data on the map, alongside measures of households and individuals surrounding the organization — important information for nonprofits to have when they are applying for grants or looking for partners.
“Through the location intelligence visualizer, users can immediately find areas of need and potential collaborators. The data are automatically visualized and mapped on-screen, identifying, for example, pockets of high poverty with large populations of children as well as the nonprofit service providers in these areas,” said Vakharia. “Making this data accessible for nonprofits will cut down on time spent seeking information and improve the ability to make data-informed decisions, while also helping with case making and grant applications.”…(More)”.
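The join Vakharia describes, IRS filings matched to Census measures by zip code, can be sketched as follows; the organization records and neighborhood figures below are made up for illustration:

```python
# Toy illustration of the kind of join behind a tool like ImpactView:
# IRS filing records matched to Census ACS measures by zip code.
# All values are invented; the real tool uses IRS e-file and ACS releases.
irs_orgs = [
    {"ein": "12-3456789", "name": "Arts Alliance", "zip": "19104", "revenue": 250000},
    {"ein": "98-7654321", "name": "Youth Tutoring", "zip": "19139", "revenue": 80000},
]
acs_by_zip = {
    "19104": {"median_income": 31000, "pct_under_18": 0.14},
    "19139": {"median_income": 27000, "pct_under_18": 0.26},
}

def snapshot(org: dict) -> dict:
    """Attach neighborhood measures to an organization's record."""
    return {**org, **acs_by_zip.get(org["zip"], {})}

for org in irs_orgs:
    print(snapshot(org))
```

Once the two datasets share a zip-code key, searching by organization, address, or zip code reduces to filtering these merged records, which is what makes the map-based lookup cheap to build.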
Barbara Romzek and Aram Sinnreich at The Conversation: “…For years, watchdogs have been warning about sharing information with data-collecting companies, firms engaged in the relatively new line of business some academics have called “surveillance capitalism.” Most casual internet users are only now realizing how easy – and common – it is for unaccountable and unknown organizations to assemble detailed digital profiles of them. They do this by combining the discrete bits of information consumers have given up to e-tailers, health sites, quiz apps and countless other digital services.
As scholars of public accountability and digital media systems, we know that the business of social media is based on extracting user data and offering it for sale. There’s no simple way for them to protect data as many users might expect. Like the social pollution of fake news, bullying and spam that Facebook’s platform spreads, the company’s privacy crisis also stems from a power imbalance: Facebook knows nearly everything about its users, who know little to nothing about it.
It’s not enough for people to delete their Facebook accounts. Nor is it likely that anyone will successfully replace it with a nonprofit alternative centering on privacy, transparency and accountability. Furthermore, this problem is not specific just to Facebook. Other companies, including Google and Amazon, also gather and exploit extensive personal data, and are locked in a digital arms race that we believe threatens to destroy privacy altogether….
Governments need to be better guardians of public welfare – including privacy. Many companies using various aspects of technology in new ways have so far avoided regulation by stoking fears that rules might stifle innovation. Facebook and others have often claimed that they’re better at regulating themselves in an ever-changing environment than a slow-moving legislative process could be….
To encourage companies to serve democratic principles and focus on improving people’s lives, we believe the chief business model of the internet needs to shift to building trust and verifying information. While it won’t be an immediate change, social media companies pride themselves on their adaptability and should be able to take on this challenge.
The alternative, of course, could be far more severe. In the 1980s, when federal regulators decided that AT&T was using its power in the telephone market to hurt competition and consumers, they forced the massive conglomerate to break up. A similar but less dramatic change happened in the early 2000s when cellphone companies were forced to let people keep their phone numbers even if they switched carriers.
Data, and particularly individuals’ personal data, are the precious metals of the internet age. Protecting individual data while expanding access to the internet and its many social benefits is a fundamental challenge for free societies. Creating, using and protecting data properly will be crucial to preserving and improving human rights and civil liberties in this still young century. To meet this challenge will require both vigilance and vision, from businesses and their customers, as well as governments and their citizens….(More)”.
James Somers in The Atlantic: “The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they have contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.
Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.
What would you get if you designed the scientific paper from scratch today?…(More)”.
Handbook by the Government of New Zealand: “…helps you take a structured approach to using evidence at every stage of the policy and programme development cycle. Whether you work for central or local government, or the community and voluntary sector, you’ll find advice to help you:
- understand different types and sources of evidence
- know what you can learn from evidence
- appraise evidence and rate its quality
- decide how to select and use evidence to the best effect
- take into account different cultural values and knowledge systems
- be transparent about how you’ve considered evidence in your policy development work…(More)”
(See also the Summary. This handbook is a companion to Making sense of evaluation: A handbook for everyone.)
Report by the AI Now Institute: “Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Researchers, advocates, and policymakers are debating when and where automated decision systems are appropriate, including whether they are appropriate at all in particularly sensitive domains.
Questions are being raised about how to fully assess the short and long term impacts of these systems, whose interests they serve, and if they are sufficiently sophisticated to contend with complex social and historical contexts. These questions are essential, and developing strong answers has been hampered in part by a lack of information and access to the systems under deliberation. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. This is concerning, since an informed policy debate is impossible without the ability to understand which existing systems are being used, how they are employed, and whether these systems cause unintended consequences. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable….
KEY ELEMENTS OF A PUBLIC AGENCY ALGORITHMIC IMPACT ASSESSMENT
1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;
2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;
3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;
4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and
5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct….(More)”.
White Paper by the World Economic Forum: “For individuals, legal entities and devices alike, a verifiable and trusted identity is necessary to interact and transact with others.
The concept of identity isn’t new – for much of human history, we have used evolving credentials, from beads and wax seals to passports, ID cards and birth certificates, to prove who we are. The issues associated with identity proofing – fraud, stolen credentials and social exclusion – have challenged individuals throughout history. But, as the spheres in which we live and transact have grown, first geographically and now into the digital economy, the ways in which humans, devices and other entities interact are quickly evolving – and how we manage identity will have to change accordingly.
As we move into the Fourth Industrial Revolution and more transactions are conducted digitally, a digital representation of one’s identity has become increasingly important; this applies to humans, devices, legal entities and beyond. For humans, this proof of identity is a fundamental prerequisite to access critical services and participate in modern economic, social and political systems. For devices, their digital identity is critical in conducting transactions, especially as the devices will be able to transact relatively independent of humans in the near future. For legal entities, the current state of identity management consists of inefficient manual processes that could benefit from new technologies and architecture to support digital growth.
As the number of digital services, transactions and entities grows, it will be increasingly important to ensure the transactions take place in a secure and trusted network where each entity can be identified and authenticated. Identity is the first step of every transaction between two or more parties.
Over the ages, the majority of transactions between two identities have been viewed in relation to the validation of a credential (“Is this genuine information?”), verification (“Does the information match the identity?”) and authentication of an identity (“Does this human/thing match the identity? Are you really who you claim to be?”). These questions have not changed over time; only the methods have changed. This paper explores the challenges with current identity systems and the trends that will have significant impact on identity in the future….(More)”.
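The three questions in the paper distinguish three separate checks, which can be made concrete in a few lines; the names and data below are purely illustrative, not any real identity system:

```python
# Validation, verification, and authentication as three distinct checks
# on a presented credential. Illustrative data only.
ISSUED_CREDENTIALS = {"cred-001": {"holder": "alice", "claim": "over 18"}}
SECRETS = {"alice": "correct horse battery staple"}

def validate(cred_id: str) -> bool:
    """Is this genuine information? (the credential was really issued)"""
    return cred_id in ISSUED_CREDENTIALS

def verify(cred_id: str, holder: str) -> bool:
    """Does the information match the identity?"""
    return validate(cred_id) and ISSUED_CREDENTIALS[cred_id]["holder"] == holder

def authenticate(holder: str, secret: str) -> bool:
    """Does this human/thing match the identity? (proof of possession)"""
    return SECRETS.get(holder) == secret

print(validate("cred-001"))                 # the credential exists
print(verify("cred-001", "bob"))            # but it was not issued to bob
print(authenticate("alice", "wrong guess")) # and secrets gate the identity itself
```

Keeping the three checks separate matters because they can fail independently: a genuine credential can be presented by the wrong party, and a correctly matched credential still says nothing until the presenter proves they are that identity.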
Working Paper by Gary King and Nathaniel Persily: “The mission of the academic social sciences is to understand and ameliorate society’s greatest challenges. The data held by private companies holds vast potential to further this mission. Yet, because of its interaction with highly politicized issues, customer privacy, proprietary content, and differing goals of firms and academics, these data are often inaccessible to university researchers.
We propose here a new model for industry-academic partnerships that addresses these problems via a novel organizational structure: Respected scholars form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.
We also report on a partnership we helped forge under this model to make data available about the extremely visible and highly politicized issues surrounding the impact of social media on elections and democracy. In our partnership, Facebook will provide privacy-preserving data and access; seven major politically and substantively diverse nonprofit foundations will fund the research; and the Social Science Research Council will oversee the peer review process for funding and data access….(More)”.
This book brings together the theory and practice of managing public trust. It examines the current state of public trust, including a comprehensive global overview of both the research and practical applications of managing public trust by presenting research from seven countries (Brazil, Finland, Poland, Hungary, Portugal, Taiwan, Turkey) from three continents. The book is divided into five parts, covering the meaning of trust, types, dimension and the role of trust in management; the organizational challenges in relation to public trust; the impact of social media on the development of public trust; the dynamics of public trust in business; and public trust in different cultural contexts….(More)”.