Open-access reformers launch next bold publishing plan


Article by Layal Liverpool: “The group behind the radical open-access initiative Plan S has announced its next big plan to shake up research publishing — and this one could be bolder than the first. It wants all versions of an article and its associated peer-review reports to be published openly from the outset, without authors paying any fees, and for authors, rather than publishers, to decide when and where to first publish their work.

The group of influential funding agencies, called cOAlition S, has over the past five years already caused upheaval in the scholarly publishing world by pressuring more journals to allow immediate open-access publishing. Its new proposal, prepared by a working group of publishing specialists and released on 31 October, puts forward an even broader transformation in the dissemination of research.

It outlines a future “community-based” and “scholar-led” open-research communication system (see go.nature.com/45zyjh) in which publishers are no longer gatekeepers that reject submitted work or determine first publication dates. Instead, authors would decide when and where to publish the initial accounts of their findings, both before and after peer review. Publishers would become service providers, paid to conduct processes such as copy-editing, typesetting and handling manuscript submissions…(More)”.

Choosing AI’s Impact on the Future of Work 


Article by Daron Acemoglu & Simon Johnson: “…Too many commentators see the path of technology as inevitable. But the historical record is clear: technologies develop according to the vision and choices of those in positions of power. As we document in Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity, when these choices are left entirely in the hands of a small elite, you should expect that group to receive most of the benefits, while everyone else bears the costs—potentially for a long time.

Rapid advances in AI threaten to eliminate many jobs, and not just those of writers and actors. Jobs with routine elements, such as in regulatory compliance or clerical work, and those that involve simple data collection, data summary, and writing tasks are likely to disappear.

But there are still two distinct paths that this AI revolution could take. One is the path of automation, based on the idea that AI’s role is to perform tasks as well as or better than people. Currently, this vision dominates in the US tech sector, where Microsoft and Google (and their ecosystems) are cranking hard to create new AI applications that can take over as many human tasks as possible.

The negative impact on people along the “just automate” path is easy to predict from prior waves of digital technologies and robotics. It was these earlier forms of automation that contributed to the decline of American manufacturing employment and the huge increase in inequality over the last four decades. If AI intensifies automation, we are very likely to get more of the same—a gap between capital and labor, more inequality between the professional class and the rest of the workers, and fewer good jobs in the economy….(More)”

Can Indigenous knowledge and Western science work together? New center bets yes


Article by Jeffrey Mervis: “For millennia, the Passamaquoddy people used their intimate understanding of the coastal waters along the Gulf of Maine to sustainably harvest the ocean’s bounty. Anthropologist Darren Ranco of the University of Maine hoped to blend their knowledge of tides, water temperatures, salinity, and more with a Western approach in a project to study the impact of coastal pollution on fish, shellfish, and beaches.

But the Passamaquoddy were never really given a seat at the table, says Ranco, a member of the Penobscot Nation, which along with the Passamaquoddy are part of the Wabanaki Confederacy of tribes in Maine and eastern Canada. The Passamaquoddy thought water quality and environmental protection should be top priority; the state emphasized forecasting models and monitoring. “There was a disconnect over who were the decision-makers, what knowledge would be used in making decisions, and what participation should look like,” Ranco says about the 3-year project, begun in 2015 and funded by the National Science Foundation (NSF).

Last month, NSF aimed to bridge such disconnects with a 5-year, $30 million grant designed to weave together traditional ecological knowledge (TEK) and Western science. Based at the University of Massachusetts (UMass) Amherst, the Center for Braiding Indigenous Knowledges and Science (CBIKS) aims to fundamentally change the way scholars from both traditions select and carry out joint research projects and manage data…(More)”.

Urban Development and the State of Open Data


Chapter by Stefaan G. Verhulst and Sampriti Saxena: “Nearly 4.4 billion people, or about 55% of the world’s population, lived in cities in 2018. By 2045, this number is anticipated to grow to 6 billion. Such a level of growth requires innovative and targeted urban solutions. By more effectively leveraging open data, cities can meet the needs of an ever-growing population in an effective and sustainable manner. This paper updates the previous contribution by Jean-Noé Landry, titled “Open Data and Urban Development” in the 2019 edition of The State of Open Data. It also aims to contribute to a further deepening of the Third Wave of Open Data, which highlights the significance of open data at the subnational level as a more direct and immediate response to the on-the-ground needs of citizens. It considers recent developments in how the use of, and approach to, open data has evolved within an urban development context. It seeks to discuss emerging applications of open data in cities, recent developments in open data infrastructure, governance and policies related to open data, and the future outlook of the role of open data in urbanization…(More)”.

The Future of AI Is GOMA


Article by Matteo Wong: “A slate of four AI companies might soon rule Silicon Valley…Chatbots and their ilk are still in their early stages, but everything in the world of AI is already converging around just four companies. You could refer to them by the acronym GOMA: Google, OpenAI, Microsoft, and Anthropic. Shortly after OpenAI released ChatGPT last year, Microsoft poured $10 billion into the start-up and shoved OpenAI-based chatbots into its search engine, Bing. Not to be outdone, Google announced that more AI features were coming to Search, Maps, Docs, and more, and introduced Bard, its own rival chatbot. Microsoft and Google are now in a race to integrate generative AI into just about everything. Meanwhile, Anthropic, a start-up launched by former OpenAI employees, has raised billions of dollars in its own right, including from Google. Companies such as Slack, Expedia, Khan Academy, Salesforce, and Bain are integrating ChatGPT into their products; many others are using Anthropic’s chatbot, Claude. Executives from GOMA have also met with leaders and officials around the world to shape the future of AI’s deployment and regulation. The four have overlapping but separate proposals for AI safety and regulation, but they have joined together to create the Frontier Model Forum, a consortium whose stated mission is to protect against the supposed world-ending dangers posed by terrifyingly capable models that do not yet exist but, it warns, are right around the corner. That existential language—about bioweapons and nuclear robots—has since migrated into all sorts of government proposals and language. If AI is truly reshaping the world, these companies are the sculptors…(More)”.

Europe wants to get better at planning for the worst


Article by Sarah Anne Aarup: “The European Union is beset by doom and gloom — from wars on its doorstep to inflation and the climate crisis — not to mention political instability in the U.S. and rivalry with China.

All too often, the EU has been overtaken by events, which makes the task of getting better at planning for the worst all the more pressing. 

As European leaders fought political fires at their informal summit last week in Granada, unaware that Palestinian militants would launch their devastating raid on Israel a day later, they quietly started a debate on strategic foresight.

At this stage still very much a thought experiment, the concept of “open strategic autonomy” is being championed by host Spain, the current president of the Council of the EU. The idea reflects a shift in priorities to navigate an increasingly uncertain world, and a departure from the green and digital transitions that have dominated the agenda in recent years.

To the uninitiated, the concept of open strategic autonomy sounds like an oxymoron — that’s because it is.

After the hyperglobalized early 2000s, trust in liberalism started to erode. Then the Trump-era trade wars, COVID-19 pandemic and Russia’s invasion of Ukraine exposed Europe’s economic reliance on powerful nations that are either latent — or overt — strategic rivals.

“The United States and China are becoming more self-reliant, and some voices were saying that this is what we have to do,” an official with the Spanish presidency told POLITICO. “But that’s not a good idea for Europe.”

Instead, open strategic autonomy is about shielding the EU just enough to protect its economic security while remaining an international player. In other words, it means “cooperating multilaterally wherever we can, acting autonomously wherever we must.”

It’s a grudging acceptance that great power politics now dominate economics…

The open strategic autonomy push is about countering an inward turn that was all about cutting dependencies, such as the EU’s reliance on Russian energy, after President Vladimir Putin ordered the invasion of Ukraine.

“[We’re] missing a more balanced and forward-looking strategy” following the Versailles Declaration, the Spanish official said, referring to a first response by EU leaders to the Russian attack of February 24, 2022.

Spain delivered its contribution to the debate in the form of a thick paper drafted by its foresight office, in coordination with over 80 ministries across the EU…(More)”.

Transparent. A phony-baloney ideal.


Essay by Wilfred M. McClay: ““I’m looking through you,” sang Paul McCartney, “where did you go?”

Ah, yes. People of a certain age will recognize these lyrics from a bittersweet song of the sixties about the optics of fading love. (Poor Jane Asher, where did she go?) But more than that, the song also gives us a neat summation of what might be called, with apologies to Kant, the antinomies of pure transparency.

Let me explain. I am sure you have noticed that the adjective transparent has undergone an overhaul in recent years. For one thing, it is suddenly everywhere. It used to be employed narrowly, mainly to describe the neutral quality we expect to find in a window: the capacity to allow the unhindered passage of light. Or as the Oxford English Dictionary puts it, “the property of transmitting light, so as to render bodies lying beyond completely visible.” The point was not the window, but the thing the window enabled us to see.

The word has also enjoyed figurative usages, as in the beauty of the “transparent Helena” of A Midsummer Night’s Dream, or in George Orwell’s admonition that “good prose should be transparent, like a window pane.” Or in the ecstatic visions of Ralph Waldo Emerson, who experienced unmediated nature as if he were “a transparent eye-ball,” able to “see all” and feel “the currents of the Universal Being circulate through me.” Or less grandly, the word is often used as a negative intensifier, as in the term “transparent liar,” which is used so frequently that it has a Twitter hashtag. In every instance, the general sense of being “completely visible” is paramount.

In recent years, by contrast, transparent has become one of the staples of our commercial discourse, a form of bureaucratic-corporate-therapeutic-speak that, like all such language, is designed to conceal more than it reveals and defeat its challengers by the abstract elusiveness of its meaning. Its promiscuous use is an unfortunate development. In practice, it generally means the opposite of what it promises; transparency would mean irreproachable openness, guilelessness, simplicity, “nothing to hide.” But when today’s T-shirt–clad executives and open-collar politicians assure us, at the beginning of their remarks, that “we want to be completely transparent,” it is time to watch out. They are making a statement about themselves, about what good and generous and open and kind folks they are, and why you should therefore trust them. They are signaling their personal virtue. They are not talking about the general accessibility of their account books and board minutes and confidential personnel records…(More)”.

AI-tocracy


Article by Peter Dizikes: “It’s often believed that authoritarian governments resist technical innovation in a way that ultimately weakens them both politically and economically. But a more complicated story emerges from a new study on how China has embraced AI-driven facial recognition as a tool of repression. 

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI,” says coauthor Martin Beraja, an MIT economist. Not only has use of the technology apparently worked to suppress dissent, but it has spurred software development. The scholars call this mutually reinforcing situation an “AI-tocracy.” 

In fact, they found, firms that were granted a government contract for facial-recognition technologies produce about 49% more software products in the two years after gaining the contract than before. “We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Adding it all up, the case of China indicates how autocratic governments can potentially find their political power enhanced, rather than upended, when they harness technological advances—and even generate more economic growth than they would have otherwise…(More)”.

Citizens’ Assemblies Are Upgrading Democracy: Fair Algorithms Are Part of the Program


Article by Ariel Procaccia: “…Taken together, these assemblies have demonstrated an impressive capacity to uncover the will of the people and build consensus.

The effectiveness of citizens’ assemblies isn’t surprising. Have you ever noticed how politicians grow a spine the moment they decide not to run for reelection? Well, a citizens’ assembly is a bit like a legislature whose members make a pact barring them from seeking another term in office. The randomly selected members are not beholden to party machinations or outside interests; they are free to speak their mind and vote their conscience.

What’s more, unlike elected bodies, these assemblies are chosen to mirror the population, a property that political theorists refer to as descriptive representation. For example, a typical citizens’ assembly has a roughly equal number of men and women (some also ensure nonbinary participation), whereas the average proportion of seats held by women in national parliaments worldwide was 26 percent in 2021—a marked increase from 12 percent in 1997 but still far from gender balance. Descriptive representation, in turn, lends legitimacy to the assembly: citizens seem to find decisions more acceptable when they are made by people like themselves.

As attractive as descriptive representation is, there are practical obstacles to realizing it while adhering to the principle of random selection. Overcoming these hurdles has been a passion of mine for the past few years. Using tools from mathematics and computer science, my collaborators and I developed an algorithm for the selection of citizens’ assemblies that many practitioners around the world are using. Its story provides a glimpse into the future of democracy—and it begins a long time ago…(More)”.
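
To make the selection problem concrete, here is a minimal illustrative sketch, in Python, of quota-based random selection: it repeatedly draws uniformly random panels from a volunteer pool and keeps the first one that satisfies simple demographic quotas. The pool, feature names, and quota values are hypothetical, and this naive rejection-sampling approach is not the algorithm described in the article.

```python
import random

# Hypothetical volunteer pool: each record lists the demographic features
# used for quotas. All feature names and values here are made up.
pool = [
    {
        "id": i,
        "gender": random.choice(["woman", "man"]),
        "age": random.choice(["18-34", "35-59", "60+"]),
    }
    for i in range(500)
]

# Hypothetical quotas: (feature, value) -> (min, max) seats on a 20-person panel.
quotas = {
    ("gender", "woman"): (9, 11),
    ("gender", "man"): (9, 11),
    ("age", "18-34"): (5, 9),
    ("age", "35-59"): (5, 9),
    ("age", "60+"): (5, 9),
}

def satisfies_quotas(panel):
    """Return True if every (feature, value) quota holds for the drawn panel."""
    for (feature, value), (lo, hi) in quotas.items():
        count = sum(1 for person in panel if person[feature] == value)
        if not lo <= count <= hi:
            return False
    return True

def draw_panel(pool, size=20, max_tries=100_000):
    """Rejection sampling: draw uniform random panels until one meets the quotas."""
    for _ in range(max_tries):
        panel = random.sample(pool, size)
        if satisfies_quotas(panel):
            return panel
    raise RuntimeError("No quota-satisfying panel found; quotas may be infeasible.")

if __name__ == "__main__":
    panel = draw_panel(pool)
    print(sorted(person["id"] for person in panel))
```

One limitation of this sketch is that volunteers from scarcer demographic groups end up with very different chances of being chosen; the work described in the article aims to satisfy quotas while also keeping individual selection probabilities as equal as possible.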

How to share data — not just equally, but equitably


Editorial in Nature: “Two decades ago, scientists asked more than 150,000 people living in Mexico City to provide medical data for research. Each participant gave time, blood and details of their medical history. For the researchers, who were based at the National Autonomous University of Mexico in Mexico City and the University of Oxford, UK, this was an opportunity to study a Latin American population for clues about factors contributing to disease and health. For the participants, it was a chance to contribute to science so that future generations might one day benefit from access to improved health care. Ultimately, the Mexico City Prospective Study was an exercise in trust — scientists were trusted with some of people’s most private information because they promised to use it responsibly.

Over the years, the researchers have repaid the communities through studies investigating the effects of tobacco and other risk factors on participants’ health. They have used the data to learn about the impact of diabetes on mortality rates, and they have found that rare forms of a gene called GPR75 lower the risk of obesity. And on 11 October, researchers added to the body of knowledge on the population’s ancestry.

But this project also has broader relevance — it can be seen as a model of trust and of how the power structures of science can be changed to benefit the communities closest to it.

Mexico’s population is genetically wealthy. With a complex history of migration and mixing of several populations, the country’s diverse genetic resources are valuable to the study of the genetic roots of diseases. Most genetic databases are stocked with data from people with European ancestry. If genomics is to genuinely benefit the global community — and especially under-represented groups — appropriately diverse data sets are needed. These will improve the accuracy of genetic tests, such as those for disease risk, and will make it easier to unearth potential drug targets by finding new genetic links to medical conditions…(More)”.