Preprints: The What, The Why, The How.


Center for Open Science: “The use of preprint servers by scholarly communities is definitely on the rise. Many developments in the past year indicate that preprints will be a huge part of the research landscape. Developments with DOIs, changes in funder expectations, and the launch of many new services indicate that preprints will become much more pervasive and reach beyond the communities where they started.

From funding agencies that want to realize impact from their efforts sooner to researchers who want to disseminate their work more quickly, the growth of these servers, and of the number of works being shared, has been substantial. At COS, we already host twenty different organizations’ services via the OSF Preprints platform.

So what’s a preprint and what is it good for? A preprint is a manuscript submitted to a dedicated repository (like OSF Preprints, PeerJ, bioRxiv or arXiv) prior to peer review and formal publication. Some of those repositories may also accept other types of research outputs, like working papers, posters or conference proceedings. Getting a preprint out there has a variety of benefits for authors and other stakeholders in the research:

  • They increase the visibility of research, and do so sooner. While traditional papers can languish in the peer review process for months, even years, a preprint is live the minute it is submitted and moderated (if the service moderates). This means your work gets indexed by Google Scholar and Altmetric, and discovered by more relevant readers than ever before.
  • You can get feedback on your work and make improvements prior to journal submission. Many authors have publicly commented about the recommendations for improvements they’ve received on their preprint that strengthened their work and even led to finding new collaborators.
  • Papers with an accompanying preprint get cited 30% more often than papers without. This research from PeerJ sums it up, and that’s a big benefit for scholars looking to get more visibility and impact from their efforts.
  • Preprints get a permanent DOI, which makes them part of the freely accessible scientific record forever. This means others can rely on that permanence when citing your work in their research (the short sketch after this list shows one way to pull a formatted citation straight from a DOI). It also means that your idea, developed by you, has a “stake in the ground” where potential scooping and intellectual theft are concerned.
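Because every preprint DOI resolves through doi.org, which supports content negotiation, a formatted citation can be fetched programmatically. Here is a minimal Python sketch, assuming the third-party `requests` library is installed; the DOI shown is a placeholder, not a real record:

```python
import requests

def fetch_citation(doi: str, style: str = "apa") -> str:
    """Resolve a DOI to a formatted citation via doi.org content negotiation."""
    # Crossref and DataCite honor content negotiation on doi.org, returning a
    # formatted bibliography entry instead of redirecting to the landing page.
    headers = {"Accept": f"text/x-bibliography; style={style}"}
    response = requests.get(f"https://doi.org/{doi}", headers=headers, timeout=10)
    response.raise_for_status()
    return response.text.strip()

if __name__ == "__main__":
    # Placeholder DOI -- substitute the DOI your preprint service minted.
    print(fetch_citation("10.1234/example"))
```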

So, preprints can really help lubricate scientific progress. But there are some things to keep in mind before you post. Usually, you can’t post a preprint of an article that’s already been submitted to a journal for peer review. Policies among journals vary widely, so it’s important to check with the journal you’re interested in sending your paper to BEFORE you submit a preprint that might later be published. A good resource for doing this is JISC’s SHERPA/RoMEO database. It’s also a good idea to understand the licensing choices available. At OSF Preprints, we recommend the CC-BY license suite, but you can check choosealicense.com or https://osf.io/6uupa/ for good overviews of how best to license your submissions….(More)”.

Research Shows Political Acumen, Not Just Analytical Skills, Is Key to Evidence-Informed Policymaking


Press Release: “Results for Development (R4D) has released a new study unpacking how evidence translators play a key and somewhat surprising role in ensuring policymakers have the evidence they need to make informed decisions. Translators — who can be evidence producers, policymakers, or intermediaries such as journalists, advocates and expert advisors — identify, filter, interpret, adapt, contextualize and communicate data and evidence for the purposes of policymaking.

The study, Translators’ Role in Evidence-Informed Policymaking, provides a better understanding of who translators are and how different factors influence translators’ ability to promote the use of evidence in policymaking. This research shows translation is an essential function and that, absent individuals or organizations taking up the translator role, evidence translation and evidence-informed policymaking often do not take place.

“We began this research assuming that translators’ technical skills and analytical prowess would prove to be among the most important factors in predicting when and how evidence made its way into public sector decision making,” Nathaniel Heller, executive vice president for integrated strategies at Results for Development, said. “Surprisingly, that turned out not to be the case, and other ‘soft’ skills play a far larger role in translators’ efficacy than we had imagined.”

Key findings include:

  • Translator credibility and reputation are crucial to the ability to gain access to policymakers and to promote the uptake of evidence.
  • Political savvy and stakeholder engagement are among the most critical skills for effective translators.
  • Conversely, analytical skills and the ability to adapt, transform and communicate evidence were identified as being less important stand-alone translator skills.
  • Evidence translation is most effective when initiated by those in power or when translators place those in power at the center of their efforts.

The study includes a definitional and theoretical framework as well as a set of research questions about key enabling and constraining factors that might affect evidence translators’ influence. It also focuses on two cases in Ghana and Argentina to validate and debunk some of the intellectual frameworks around policy translators that R4D and others in the field have already developed. The first case focuses on Ghana’s blue-ribbon commission formed by the country’s president in 2015, which was tasked with reviewing Ghana’s national health insurance scheme. The second case looks at Buenos Aires’ 2016 government-led review of the city’s right-to-information regime….(More)”.

Ontario is trying a wild experiment: Opening access to its residents’ health data


Dave Gershgorn at Quartz: “The world’s most powerful technology companies have a vision for the future of healthcare. You’ll still go to your doctor’s office, sit in a waiting room, and explain your problem to someone in a white coat. But instead of relying solely on their own experience and knowledge, your doctor will consult an algorithm that’s been trained on the symptoms, diagnoses, and outcomes of millions of other patients. Instead of a radiologist reading your x-ray, a computer will be able to detect minute differences and instantly identify a tumor or lesion. Or at least that’s the goal.

AI systems like these, currently under development by companies including Google and IBM, can’t read textbooks and journals, attend lectures, and do rounds—they need millions of real life examples to understand all the different variations between one patient and another. In general, AI is only as good as the data it’s trained on, but medical data is exceedingly private—most developed countries have strict health data protection laws, such as HIPAA in the United States….

These approaches, which favor companies with considerable resources, are pretty much the only way to get large troves of health data in the US because the American health system is so fragmented. Healthcare providers keep personal files on each of their patients, and can only transmit them to other accredited healthcare workers at the patient’s request. There’s no single place where all health data exists. It’s more secure, but less efficient for analysis and research.

Ontario, Canada, might have a solution, thanks to its single-payer healthcare system. All of Ontario’s health data exists in a few enormous caches under government control. (After all, the government needs to keep track of all the bills it’s paying.) Similar structures exist elsewhere in Canada, such as in Quebec, but Toronto, which has become a major hub for AI research, wants to lead the charge in providing this data to businesses.

Until now, the only people allowed to study this data were government organizations or researchers who partnered with the government to study disease. But Ontario has now entrusted the MaRS Discovery District—a cross between a tech incubator and WeWork—to build a platform for approved companies and researchers to access this data, dubbed Project Spark. The project, initiated by MaRS and Canada’s University Health Network, began exploring how to share this data after both organizations expressed interest to the government in giving broader health data access to researchers and companies looking to build healthcare-related tools.

Project Spark’s goal is to create an API, or a way for developers to request information from the government’s data cache. This could be used to create an app for doctors to access the full medical history of a new patient. Ontarians could access their health records at any time through similar software, and catalog health issues as they occur. Or researchers, like the ones trying to build AI to assist doctors, could request a different level of access that provides anonymized data on Ontarians who meet certain criteria. If you wanted to study every Ontarian who had Alzheimer’s disease over the last 40 years, that data would only be authorization and a few lines of code away.
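To make “a few lines of code” concrete, here is a purely illustrative Python sketch (using the `requests` library) of what such an anonymized-cohort query might look like. Project Spark has not published an API specification, so the endpoint, parameter names, and authorization scheme below are all invented for illustration:

```python
import requests

# Hypothetical illustration only: the host, path, parameter names, and token
# scheme are invented; Project Spark's real API may look nothing like this.
BASE_URL = "https://api.projectspark.example/v1"  # placeholder host

def fetch_anonymized_cohort(condition: str, years_back: int, token: str) -> list:
    """Request anonymized records for Ontarians matching clinical criteria."""
    response = requests.get(
        f"{BASE_URL}/cohorts",
        params={"condition": condition, "years_back": years_back},
        headers={"Authorization": f"Bearer {token}"},  # access is approval-gated
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["records"]

# e.g., every anonymized Alzheimer's record from the last 40 years:
# records = fetch_anonymized_cohort("alzheimers", 40, token="<granted-on-approval>")
```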

There are currently 100 companies lined up to get access to the data, which comprises health records from Ontario’s 14 million residents. (MaRS won’t say who the companies are.) …(More)”

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Big Data and AI – A transformational shift for government: So, what next for research?


Irina Pencheva, Marc Esteve and Slava Jankin Mikhaylov in Public Policy and Administration: “Big Data and artificial intelligence will have a profound transformational impact on governments around the world. Thus, it is important for scholars to provide a useful analysis on the topic to public managers and policymakers. This study offers an in-depth review of the Policy and Administration literature on the role of Big Data and advanced analytics in the public sector. It provides an overview of the key themes in the research field, namely the application and benefits of Big Data throughout the policy process, and challenges to its adoption and the resulting implications for the public sector. It is argued that research on the subject is still nascent and more should be done to ensure that the theory adds real value to practitioners. A critical assessment of the strengths and limitations of the existing literature is developed, and a future research agenda to address these gaps and enrich our understanding of the topic is proposed…(More)”.

Our Infant Information Revolution


Joseph Nye at Project Syndicate: “…When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.

During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut America’s credibility and reduce its soft power.

The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is interesting to note that polls and the Portland index of the Soft Power 30 show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.

Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated….(More)”.

Data Protection and e-Privacy: From Spam and Cookies to Big Data, Machine Learning and Profiling


Chapter by Lilian Edwards in L Edwards (ed), Law, Policy and the Internet (Hart, 2018): “In this chapter, I examine in detail how data subjects are tracked, profiled and targeted by their activities online and, increasingly, in the “offline” world as well. Tracking is part of both commercial and state surveillance, but in this chapter I concentrate on the former. The European law relating to spam, cookies, online behavioural advertising (OBA), machine learning (ML) and the Internet of Things (IoT) is examined in detail, using both the GDPR and the forthcoming draft ePrivacy Regulation. The chapter concludes by examining both code and law solutions which might find a way forward to protect user privacy and still enable innovation, by looking to paradigms not based around consent, and less likely to rely on a “transparency fallacy”. Particular attention is drawn to the new work around Personal Data Containers (PDCs) and distributed ML analytics….(More)”.

On Preferring A to B, While Also Preferring B to A


Paper by Cass R. Sunstein: “In important contexts, people prefer option A to option B when they evaluate the two separately, but prefer option B to option A when they evaluate the two jointly. In consumer behavior, politics, and law, such preference reversals present serious puzzles about rationality and behavioral biases.

They are often a product of the pervasive problem of “evaluability.” Some important characteristics of options are difficult or impossible to assess in separate evaluation, and hence choosers disregard or downplay them; those characteristics are much easier to assess in joint evaluation, where they might be decisive. But in joint evaluation, certain characteristics of options may receive excessive weight, because they do not much affect people’s actual experience or because the particular contrast between joint options distorts people’s judgments. In joint as well as separate evaluation, people are subject to manipulation, though for different reasons.

It follows that neither mode of evaluation is reliable. The appropriate approach will vary depending on the goal of the task – increasing consumer welfare, preventing discrimination, achieving optimal deterrence, or something else. Under appropriate circumstances, global evaluation would be much better, but it is often not feasible. These conclusions bear on preference reversals in law and policy, where joint evaluation is often better, but where separate evaluation might ensure that certain characteristics or features of situations do not receive excessive weight…(More)”.

Why Do We Care So Much About Privacy?


Louis Menand in The New Yorker: “…Possibly the discussion is using the wrong vocabulary. “Privacy” is an odd name for the good that is being threatened by commercial exploitation and state surveillance. Privacy implies “It’s nobody’s business,” and that is not really what Roe v. Wade is about, or what the E.U. regulations are about, or even what Katz and Carpenter are about. The real issue is the one that Pollak and Martin, in their suit against the District of Columbia in the Muzak case, said it was: liberty. This means the freedom to choose what to do with your body, or who can see your personal information, or who can monitor your movements and record your calls—who gets to surveil your life and on what grounds.

As we are learning, the danger of data collection by online companies is not that they will use it to try to sell you stuff. The danger is that that information can so easily fall into the hands of parties whose motives are much less benign. A government, for example. A typical reaction to worries about the police listening to your phone conversations is the one Gary Hart had when it was suggested that reporters might tail him to see if he was having affairs: “You’d be bored.” They were not, as it turned out. We all may underestimate our susceptibility to persecution. “We were just talking about hardwood floors!” we say. But authorities who feel emboldened by the promise of a Presidential pardon or by a Justice Department that looks the other way may feel less inhibited about invading the spaces of people who belong to groups that the government has singled out as unpatriotic or undesirable. And we now have a government that does that….(More)”.

Civic Tech: Making Technology Work for People


Book by Andrew Schrock: “The term “Civic Tech” has gained international recognition as a way to unite communities and government through technology design. But what does it mean for our shared future? In this book, Andrew Schrock cuts through the hype by telling stories of the people and ideas driving the movement. He argues that Civic Tech emerged in response to inequality and persistent social problems. The collaborative approaches and early successes of “techies” may not be easy solutions, but they exemplify a powerful political alternative. Civic Tech draws our attention to the challenges of public ownership and democratizing technology design—vital goals for the years ahead….(More)”.