How Artificial Intelligence Could Increase the Risk of Nuclear War


RAND Corporation: “The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”…(More)”.

Using Data to Inform the Science of Broadening Participation


Donna K. Ginther at the American Behavioral Scientist: “In this article, I describe how data and econometric methods can be used to study the science of broadening participation. I start by showing that theory can be used to structure the approach to using data to investigate gender and race/ethnicity differences in career outcomes. I also illustrate this process by examining whether women of color who apply for National Institutes of Health research funding are confronted with a double bind where race and gender compound their disadvantage relative to Whites. Although high-quality data are needed for understanding the barriers to broadening participation in science careers, they cannot fully explain why women and underrepresented minorities are less likely to become scientists or why they have less productive science careers. As researchers, we must use all forms of data — quantitative, experimental, and qualitative — to deepen our understanding of the barriers to broadening participation….(More)”.
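To make Ginther’s approach concrete: a “double bind” is typically tested with an interaction term — does being both female and a member of an underrepresented minority carry a penalty beyond the sum of the two main effects? Below is a minimal sketch in Python using entirely synthetic data; the variable names and effect sizes are invented for illustration, and the actual analysis relies on NIH administrative data and many more controls.

```python
# Hedged sketch: testing a "double bind" with an interaction term.
# All data here are simulated; this is not Ginther's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "minority": rng.integers(0, 2, n),
})
# Simulate award outcomes with a built-in interaction penalty,
# purely so the example has something to detect.
logit = -0.2 - 0.1 * df["female"] - 0.3 * df["minority"] - 0.2 * df["female"] * df["minority"]
df["awarded"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A significantly negative female:minority coefficient would indicate that
# race and gender compound each other rather than acting additively.
model = smf.logit("awarded ~ female + minority + female:minority", data=df).fit()
print(model.summary())
```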

Use our personal data for the common good


Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost….But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).

Obfuscating with transparency


“These approaches…limit the impact of valuable information in developing policies…”

Under the new policy, studies that do not fully meet transparency criteria would be excluded from use in EPA policy development. This proposal follows unsuccessful attempts to enact the Honest and Open New EPA Science Treatment (HONEST) Act and its predecessor, the Secret Science Reform Act. These approaches undervalue many scientific publications and limit the impact of valuable information in developing policies in the areas that the EPA regulates…. In developing effective policies, earnest evaluations of facts and fair-minded assessments of the associated uncertainties are foundational. Policy discussions require an assessment of the likelihood that a particular observation is true, and examinations of the short- and long-term consequences of potential actions or inactions, including a wide range of different sorts of costs. Those trained in making these judgments, with access to as much relevant information as possible, are crucial to this process. Of course, policy development requires considerations other than those related to science, but such discussions should follow a clear assessment of all the available evidence. The scientific enterprise should stand up against efforts that distort initiatives aimed at improving scientific practice in order to pursue other agendas…(More)”.

Literature review on collective intelligence: a crowd science perspective


Chao Yu in the International Journal of Crowd Science: “A group can have more power and wisdom than the sum of its individuals. Scholars have long observed this phenomenon and called it collective intelligence. It emerges from communication, collaboration, competition, brainstorming, and the like, and it appears in many fields, such as public decision making, voting, social networks and crowdsourcing.

Crowd science focuses on the basic principles and laws that govern the intelligent activities of groups under new models of interconnection. It explores how to give full play to intelligent agents and groups, and how to tap their potential to solve problems that are too difficult for any single agent.

In this paper, we present a literature review on collective intelligence from a crowd science perspective. We focus on related work, especially on the circumstances under which groups show wisdom, how to measure it, how to optimize it, and its current and future applications in the digital world. These are exactly the questions that crowd science pays close attention to….(More)”.
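As an editorial aside: the canonical demonstration of when a group shows wisdom is statistical aggregation of independent estimates. The sketch below (our illustration, not from the paper) shows that the average of many noisy, independent guesses lands far closer to the truth than a typical individual does — an advantage that collapses when errors are correlated, which is one reason the “under which circumstances” question matters.

```python
# Hedged sketch of the classic "wisdom of crowds" effect: averaging many
# independent, noisy estimates beats a typical individual estimator.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0                                      # the quantity being estimated
estimates = true_value + rng.normal(0, 20, size=1000)   # independent individual errors

individual_error = np.mean(np.abs(estimates - true_value))
crowd_error = abs(estimates.mean() - true_value)
print(f"typical individual error: {individual_error:.1f}")   # roughly 16
print(f"crowd-average error:      {crowd_error:.1f}")         # well under 1
```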

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The NPS1 model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.
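To illustrate the bottom-up principle — though not the NPS1 model itself, whose internals the article does not describe — here is a toy agent-based simulation. Each agent follows a simple local rule (drive during the outage unless the roads look too congested for its taste), and the aggregate traffic level emerges from thousands of individual reconsiderations rather than from any top-down equation. The rule and parameters are hypothetical.

```python
# Toy agent-based model: congestion emerges from local decisions.
import random

random.seed(1)

N = 10_000
threshold = [random.random() for _ in range(N)]  # per-agent congestion tolerance
driving = [False] * N
on_road = 0                                      # running count of agents on the road

for sweep in range(8):
    for i in random.sample(range(N), N):         # agents reconsider one at a time
        wants_to_drive = (on_road / N) < threshold[i]
        if wants_to_drive != driving[i]:
            on_road += 1 if wants_to_drive else -1
            driving[i] = wants_to_drive
    print(f"sweep {sweep}: share of agents on the road = {on_road / N:.2f}")
```

No line of the program sets the equilibrium traffic level, yet the population settles near the point where tolerance and congestion balance — the kind of emergent, interaction-driven outcome Barrett describes.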

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.

The citation graph is one of humankind’s most important intellectual achievements


Dario Taraborelli at BoingBoing: “When researchers write, we don’t just describe new findings — we place them in context by citing the work of others. Citations trace the lineage of ideas, connecting disparate lines of scholarship into a cohesive body of knowledge, and forming the basis of how we know what we know.

Today, citations are also a primary source of data. Funders and evaluation bodies use them to appraise scientific impact and decide which ideas are worth funding to support scientific progress. Because of this, data that forms the citation graph should belong to the public. The Initiative for Open Citations was created to achieve this goal.

Back in the 1950s, reference works like Shepard’s Citations provided lawyers with tools to reconstruct which relevant cases to cite in the context of a court trial. No such tool existed at the time for identifying citations in scientific publications. Eugene Garfield — the pioneer of modern citation analysis and citation indexing — described the idea of extending this approach to science and engineering as his Eureka moment. Garfield’s first experimental Genetics Citation Index, compiled by the newly formed Institute for Scientific Information (ISI) in 1961, offered a glimpse into what a full citation index could mean for science at large. It was distributed, for free, to 1,000 libraries and scientists in the United States.

Fast forward to the end of the 20th century: the Web of Science citation index — maintained by Thomson Reuters, which acquired ISI in 1992 — has become the canonical source for scientists, librarians, and funders to search scholarly citations, and for the field of scientometrics to study the structure and evolution of scientific knowledge. ISI could have turned into a publicly funded initiative, but it started instead as a for-profit effort. In 2016, Thomson Reuters sold its Intellectual Property & Science business to a private-equity fund for $3.55 billion. Its citation index is now owned by Clarivate Analytics.

Given that raw citation data are not copyrightable, it is ironic that the vision of building a comprehensive index of scientific literature has turned into a billion-dollar business, with academic institutions paying cripplingly expensive annual subscriptions for access and the public locked out.

Enter the Initiative for Open Citations.

In 2016, a small group founded the Initiative for Open Citations (I4OC) as a voluntary effort to work with scholarly publishers — who routinely publish this data — to persuade them to release it in the open and promote its unrestricted availability. Before the launch of the I4OC, only 1% of indexed scholarly publications with references were making citation data available in the public domain. When the I4OC was officially announced in 2017, we were able to report that this number had shifted from 1% to 40%. In the main, this was thanks to the swift action of a small number of large academic publishers.

In April 2018, we are celebrating the first anniversary of the initiative. Since the launch, the fraction of indexed scientific articles with open citation data (as measured by Crossref) has surpassed 50% and the number of participating publishers has risen to 490. Over half a billion references are now openly available to the public without any copyright restriction. Of the 20 biggest publishers with citation data, all but 5 — Elsevier, IEEE, Wolters Kluwer Health, IOP Publishing, ACS — now make this data open via Crossref and its APIs. Over 50 organisations — including science funders, platforms and technology organizations, libraries, research and advocacy institutions — have joined us in this journey to help advocate and promote the reuse of open citations….(More)”.
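For readers who want to touch the open data themselves: Crossref’s public REST API is the route through which much of this citation data is released. A hedged sketch follows — the endpoint and field names reflect the API as commonly documented, but treat them as assumptions to verify, and substitute any DOI you like.

```python
# Hedged sketch: reading open citation data via Crossref's public REST API.
import requests

# A well-known physics paper, used purely as an example DOI.
doi = "10.1103/PhysRevLett.116.061102"
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
msg = resp.json()["message"]

print("title:", (msg.get("title") or ["?"])[0])
print("cited by:", msg.get("is-referenced-by-count"), "works")
# Outbound references appear when the publisher has opened them.
for ref in msg.get("reference", [])[:5]:
    print("  cites:", ref.get("DOI") or ref.get("unstructured", "")[:60])
```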

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
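The core metric is simple arithmetic: outcomes gained per dollar spent, computed on the same footing for nudges and for traditional instruments. The numbers below are invented purely to show the calculation; the real estimates are in the study described above.

```python
# Illustrative cost-effectiveness arithmetic. All figures are made up.
def impact_per_dollar(extra_outcomes: float, program_cost: float) -> float:
    """Outcomes gained (e.g., additional vaccinations) per dollar spent."""
    return extra_outcomes / program_cost

nudge = impact_per_dollar(extra_outcomes=100, program_cost=1_000)       # e.g., mailed reminder
incentive = impact_per_dollar(extra_outcomes=500, program_cost=50_000)  # e.g., financial incentive
print(f"nudge:     {nudge:.3f} outcomes per dollar")      # 0.100
print(f"incentive: {incentive:.3f} outcomes per dollar")  # 0.010
```

On these made-up numbers the nudge wins by a factor of ten — the flavor of result the researchers report across their four policy domains.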

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Everything* You Always Wanted To Know About Blockchain (But Were Afraid To Ask)


Alice Meadows at the Scholarly Kitchen: “In this interview, Joris van Rossum (Director of Special Projects, Digital Science), author of Blockchain for Research, and Martijn Roelandse (Head of Publishing Innovation, Springer Nature), discuss blockchain in scholarly communications, including the recently launched Peer Review Blockchain initiative….

How would you describe blockchain in one sentence?

Joris: Blockchain is a technology for decentralized, self-regulating data which can be managed and organized in a revolutionary new way: open, permanent, verified and shared, without the need of a central authority.

How does it work (in layman’s language!)?

Joris: In a regular database you need a gatekeeper to ensure that whatever is stored in it (financial transactions, but this could be anything) is valid. With blockchain, however, trust is created not by means of a curator but through consensus mechanisms and cryptographic techniques. Consensus mechanisms clearly define what new information is allowed to be added to the datastore. With the help of a technique called hashing, it is not possible to change any existing data without this being detected by others. And through cryptography, the database can be shared without real identities being revealed. So blockchain technology removes the need for a middleman.
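To make the hashing point concrete, here is a minimal hash-chain sketch (our illustration, not a description of any production system). Each block commits to its predecessor’s hash, so silently editing history breaks the chain and is detectable; consensus and networking — the other half of Joris’s description — are omitted entirely.

```python
# Minimal hash chain: tampering with history is detectable.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
for data in ["tx: alice pays bob 5", "tx: bob pays carol 2"]:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

# Tamper with the first block: every later link now fails verification.
chain[0]["data"] = "tx: alice pays bob 500"
valid = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid?", valid)  # False — the edit broke the hash links
```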

How is this relevant to scholarly communication?

Joris: It’s very relevant. We’ve explored the possibilities and initiatives in a report published by Digital Science. Blockchain could be applied at several levels, which is reflected in a number of initiatives announced recently. For example, a cryptocurrency for science could be developed. This ‘bitcoin for science’ could introduce a monetary reward scheme for researchers, such as for peer review. Another relevant area, specifically for publishers, is digital rights management. The potential for this was picked up by this blog at a very early stage. Blockchain also allows publishers to easily integrate micropayments, thereby creating a potentially interesting business model alongside open access and subscriptions.

Moreover, blockchain as a datastore with no central owner where information can be stored pseudonymously could support the creation of a shared and authoritative database of scientific events. Here traditional activities such as publications and citations could be stored, along with currently opaque and unrecognized activities, such as peer review. A data store incorporating all scientific events would make science more transparent and reproducible, and allow for more comprehensive and reliable metrics….

How do you see developments in the industry regarding blockchain?

Joris: In the last couple of months we’ve seen the launch of many interesting initiatives. For example, scienceroot.com, Pluto.network, and orvium.io. These are all ambitious projects incorporating many of the potential applications of blockchain in the industry, and to an extent they aim to disrupt the current ecosystem. Recently, artifacts.ai was announced, an interesting initiative that aims to allow researchers to permanently document every stage of the research process. However, we believe that traditional players, and not least publishers, should also look at how services to researchers can be improved using blockchain technology. There are challenges (e.g., around reproducibility and peer review), but that does not necessarily mean the entire ecosystem needs to be overhauled. In fact, in academic publishing we have a good track record of incorporating new technologies and using them to improve our role in scholarly communication. In other words, we should fix the system, not break it!

What is the Peer Review Blockchain initiative, and why did you join?

Martijn: The problems of research reproducibility, recognition of reviewers, and the rising burden of the review process as research volumes increase each year have led to a challenging landscape for scholarly communications. There is an urgent need for change to tackle these problems, which is why we joined this initiative: to take a step forward towards a fairer and more transparent ecosystem for peer review. The initiative aims to look at practical solutions that leverage the distributed-registry and smart-contract elements of blockchain technologies. Each of the parties can deposit peer review activity in the blockchain — depending on the peer review type, either partially or fully encrypted — and subsequent activity is also deposited in the reviewer’s ORCID profile. These business transactions — depositing peer review activity against person x — will be verifiable and auditable, thereby increasing transparency and reducing the risk of manipulation. Through the shared processes and recordkeeping we will set up with other publishers, trust will increase.
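As a toy sketch of what depositing a “partially encrypted” peer-review event might look like (the record fields below are our invention, not the initiative’s actual schema): hashing the reviewer’s ORCID iD keeps the public record pseudonymous while letting the reviewer later prove the activity is theirs — though in practice a salted hash or proper encryption would be needed, since the ORCID space is small enough to enumerate.

```python
# Hypothetical peer-review deposit on a simple hash chain. Field names
# and schema are illustrative assumptions only.
import hashlib
import json
import time

def deposit_review(chain: list, orcid: str, manuscript_doi: str) -> None:
    """Append a pseudonymous peer-review record to a toy hash chain."""
    record = {
        "reviewer": hashlib.sha256(orcid.encode()).hexdigest(),  # unsalted: toy only
        "manuscript": manuscript_doi,
        "activity": "review-completed",
        "time": int(time.time()),
    }
    prev = (hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
            if chain else "0" * 64)
    chain.append({"record": record, "prev": prev})

ledger: list = []
# ORCID's public example iD and a made-up DOI, for illustration only.
deposit_review(ledger, "0000-0002-1825-0097", "10.1234/example.doi")
print(json.dumps(ledger[0], indent=2))
```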

A separate trend we see is the broadening scope of research evaluation, which has prompted researchers to seek (more) recognition for their peer review work, beyond citations and altmetrics. At a later stage, new applications could be built on top of the peer review blockchain….(More)”.

The Scientific Paper Is Obsolete


James Somers in The Atlantic: “The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they’ve contributed to a replication crisis — or, put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today?…(More)”.