Why these scientists devote time to editing and updating Wikipedia


Article by Christine Ro: “…A 2018 survey of more than 4,000 Wikipedians (as the site’s editors are called) found that 12% had a doctorate. Scientists made up one-third of the Wikimedia Foundation’s 16 trustees, according to Doronina.

Although Wikipedia is the best-known project under the Wikimedia umbrella, there are other ways for scientists to contribute besides editing Wikipedia pages. For example, an entomologist could upload photos of little-known insect species to Wikimedia Commons, a collection of images and other media. A computer scientist could add a self-published book to the digital textbook site Wikibooks. Or a linguist could explain etymology on the collaborative dictionary Wiktionary. All of these are open access, a key part of Wikimedia’s mission.

Although Wikipedia’s structure might seem daunting for new editors, there are parallels with academic documents.

For instance, Jess Wade, a physicist at Imperial College London who focuses on creating and improving biographies of female scientists and scientists from low- and middle-income countries, says that the talk page (the behind-the-scenes portion of a Wikipedia page where editors discuss how to improve it) is almost like the peer-review file of an academic paper…However, scientists have their own biases about aspects such as how to classify certain topics. This matters, Harrison says, because “Wikipedia is intended to be a general-purpose encyclopaedia instead of a scientific encyclopaedia.”

One example is a long-standing battle over Wikipedia pages on cryptids and folklore creatures such as Bigfoot. Labels such as ‘pseudoscience’ have angered cryptid enthusiasts and raised questions about different types of knowledge. One suggestion is for the pages to feature a disclaimer that says that a topic is not accepted by mainstream science.

Wade raises a point about resourcing, saying it’s especially difficult for the platform to retain academics who might be enthusiastic about editing Wikipedia initially, but then drop off. One reason is time. For full-time researchers, Wikipedia editing could be an activity best left to evenings, weekends and holidays…(More)”.

Social Informatics


Book edited by Noriko Hara and Pnina Fichman: “Social informatics examines how society is influenced by digital technologies and how digital technologies are shaped by political, economic, and socio-cultural forces. The chapters in this edited volume use social informatics approaches to analyze recent issues in our increasingly data-intensive society.

Taking a social informatics perspective, this edited volume investigates the interaction between society and digital technologies and includes research that examines individuals, groups, organizations, and nations, as well as their complex relationships with pervasive mobile and wearable devices, social media platforms, artificial intelligence, and big data. This volume’s contributors range from seasoned and renowned researchers to upcoming researchers in social informatics. The readers of the book will understand theoretical frameworks of social informatics; gain insights into recent empirical studies of social informatics in specific areas such as big data and its effects on privacy, ethical issues related to digital technologies, and the implications of digital technologies for daily practices; and learn how the social informatics perspective informs research and practice…(More)”.

Randomize NIH grant giving


Article by Vinay Prasad: “A pause in NIH study sections has been met with fear and anxiety from researchers. At many universities, including mine, professors live on soft money. No grants? If you are an assistant professor, you can be asked to pack your desk. If you are a full professor, the university slowly cuts your pay until you see yourself out. Everyone talks about you afterwards, calling you a failed researcher. They laugh, a little too long, and then blink back tears as they wonder if they are next. Of course, your salary doubles in the new job and you are happier, but you are still bitter and gossiped about.

In order to apply for NIH grants, you have to write a lot of bullshit. You write specific aims and methods, collect bios from faculty and more. There is a section where you talk about how great your department and team is— this is the pinnacle of the proverbial expression, ‘to polish a turd.’ You invite people to work on your grant if they have a lot of papers or grants or both, and they agree to be on your grant even though they don’t want to talk to you ever again.

You submit your grant and they hire someone to handle your section. They find three people to review it. Ideally, they pick people who have no idea what you are doing or why it is important, and are not as successful as you, so they can hate read your proposal. If, despite that, they give you a good score, you might be discussed at study section.

The study section assembles scientists to discuss your grant. As kids who were picked last in kindergarten basketball, they focus on the minutiae. They love to nitpick small things. If someone on study section doesn’t like you, they can tank you. In contrast, if someone loves you, they can’t really single-handedly fund you.

You might wonder if study section leaders are the best scientists. Rest assured. They aren’t. They are typically mid-career, mediocre scientists. (This is not just a joke; data support this claim: see www.drvinayprasad.com.) They have rarely written extremely influential papers.

Finally, your proposal gets a percentile score. Here is the chance of funding by percentile. You might get a chance to revise your grant if you just fall short… Given that the current system is onerous and likely flawed, you would imagine that NIH leadership has repeatedly tested whether the current method is superior to, say, a modified lottery, aka having an initial screen and then randomly giving out the money.

Of course not. Self important people giving out someone else’s money rarely study their own processes. If study sections are no better than lottery, that would mean a lot of NIH study section officers would no longer need to work hard from home half the day, freeing up money for one more grant.

Let’s say we take $200 million and randomize it. Half of it is allocated to being given out in the traditional method, and the other half is allocated to a modified lottery. If an application is from a US University and passes a minimum screen, it is enrolled in the lottery.

Then we follow these two arms into the future. We measure publications, citations, h-index, the average impact factor of journals in which the papers are published, and more. We even take a subset of the projects and ask blinded reviewers to score the output. Can they tell which came from study section?…(More)”.
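A minimal sketch of what analyzing such a two-arm experiment could look like, purely as an illustration of the design described in the excerpt; the sample sizes, outcome model, and effect size are hypothetical assumptions, not NIH data:

```python
# Illustrative simulation of the proposed experiment: applications that pass a
# minimum screen are randomized to traditional study-section review or a
# modified lottery, and a downstream outcome (here, citations per funded
# project) is compared between arms. All numbers are made-up assumptions.
import random
import statistics

random.seed(42)

N_PER_ARM = 250          # hypothetical number of funded projects per arm
BASE_CITATIONS = 30.0    # assumed mean citations for a lottery-funded project
REVIEW_BONUS = 0.0       # assumed added value of study-section selection (unknown)

def simulate_outcomes(n: int, mean: float) -> list[float]:
    """Draw per-project outcomes from a simple noisy model (a placeholder for
    real follow-up data on publications, citations, h-index, etc.)."""
    return [max(0.0, random.gauss(mean, 15.0)) for _ in range(n)]

study_section = simulate_outcomes(N_PER_ARM, BASE_CITATIONS + REVIEW_BONUS)
lottery = simulate_outcomes(N_PER_ARM, BASE_CITATIONS)
observed_diff = statistics.mean(study_section) - statistics.mean(lottery)

# Permutation test: how often does random relabeling of projects produce a
# difference at least as large as the one observed?
pooled = study_section + lottery
extreme = 0
N_PERMUTATIONS = 10_000
for _ in range(N_PERMUTATIONS):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:N_PER_ARM]) - statistics.mean(pooled[N_PER_ARM:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"Observed difference in mean citations: {observed_diff:.2f}")
print(f"Permutation p-value: {extreme / N_PERMUTATIONS:.3f}")
```

If study-section review adds no measurable value over the lottery (REVIEW_BONUS near zero), the two arms would be statistically indistinguishable, which is exactly the question the proposed experiment would settle with real outcome data.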

The Impact of Artificial Intelligence on Societies


Book edited by Christian Montag and Raian Ali: “This book presents a recent framework proposed to understand how attitudes towards artificial intelligence are formed. It describes how the interplay between different variables, such as the modality of AI interaction, the user's personality and culture, the type of AI application (e.g. in the realm of education, medicine, transportation, among others), and the transparency and explainability of AI systems, contributes to understanding how users' acceptance of or negative attitudes towards AI develop. Gathering chapters from leading researchers with different backgrounds, this book offers a timely snapshot of the factors that will be influencing the impact of artificial intelligence on societies…(More)”.

Developing a public-interest training commons of books


Article by Authors Alliance: “…is pleased to announce a new project, supported by the Mellon Foundation, to develop an actionable plan for a public-interest book training commons for artificial intelligence. Northeastern University Library will be supporting this project and helping to coordinate its progress.

Access to books will play an essential role in how artificial intelligence develops. AI’s Large Language Models (LLMs) have a voracious appetite for text, and there are good reasons to think that these data sets should include books and lots of them. Over the last 500 years, human authors have written over 129 million books. These volumes, preserved for future generations in some of our most treasured research libraries, are perhaps the best and most sophisticated reflection of all human thinking. Their high editorial quality, breadth, and diversity of content, as well as the unique way they employ long-form narratives to communicate sophisticated and nuanced arguments and ideas, make them ideal training data sources for AI.

These collections and the text embedded in them should be made available under ethical and fair rules as the raw material that will enable the computationally intense analysis needed to inform new AI models, algorithms, and applications imagined by a wide range of organizations and individuals for the benefit of humanity…(More)”

Un-Plateauing Corruption Research? Perhaps less necessary, but more exciting than one might think


Article by Dieter Zinnbauer: “There is a sense in the anti-corruption research community that we may have reached some plateau (or less politely, hit a wall). This article argues – at least partly – against this claim.

We may have reached a plateau with regard to some recurring (staid?) scholarly and policy debates that resurface with eerie regularity, tend to suck all oxygen out of the room, yet remain essentially unsettled and irresolvable. Questions aimed at arriving at closure on what constitutes corruption, passing authoritative judgements on what works and what does not, and rather grand pronouncements on whether progress has or has not been made all fall into this category.

At the same time, there is exciting work, often in unexpected places outside the inner ward of the anti-corruption castle, contributing new approaches and fresh-ish insights, and there are promising leads for exciting research on the horizon. Such areas include the underappreciated idiosyncrasies of corruption in the form of inaction rather than action, the use of satellites and remote-sensing techniques to better understand and measure corruption, the overlooked role of short-sellers in tackling complex forms of corporate corruption, and the growing phenomenon of integrity capture, in which the anti-corruption apparatus is co-opted for sinister, corrupt purposes.

These are just four examples of the colourful opportunity tapestry for (anti)corruption research moving forward, not in the form of a great unified project and overarching new idea, but as little stabs of potentiality here and there and somewhere else surprisingly unbeknownst…(More)”

Wikenigma – an Encyclopedia of Unknowns


About: “Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [ 1141 so far ]

That’s to say, a compendium of so-called ‘Known Unknowns’.

The idea is to inspire and promote interest in scientific and academic research by highlighting opportunities to investigate problems which no-one has yet been able to solve.

You can start browsing the content via the main menu on the left (or in the ‘Main Menu’ section if you’re using a small-screen device). Alternatively, the search box (above right) will find any articles with details that match your search terms…(More)”.

How and When to Involve Crowds in Scientific Research


Book by Marion K. Poetz and Henry Sauermann: “This book explores how millions of people can significantly contribute to scientific research with their effort and experience, even if they are not working at scientific institutions and may not have formal scientific training. 

Drawing on a strong foundation of scholarship on crowd involvement, this book helps researchers recognize and understand the benefits and challenges of crowd involvement across key stages of the scientific process. Designed as a practical toolkit, it enables scientists to critically assess the potential of crowd participation, determine when it can be most effective, and implement it to achieve meaningful scientific and societal outcomes.

The book also discusses how recent developments in artificial intelligence (AI) shape the role of crowds in scientific research and can enhance the effectiveness of crowd science projects…(More)”

Boosting: Empowering Citizens with Behavioral Science


Paper by Stefan M. Herzog and Ralph Hertwig: “…Behavioral public policy came to the fore with the introduction of nudging, which aims to steer behavior while maintaining freedom of choice. Responding to critiques of nudging (e.g., that it does not promote agency and relies on benevolent choice architects), other behavioral policy approaches focus on empowering citizens. Here we review boosting, a behavioral policy approach that aims to foster people’s agency, self-control, and ability to make informed decisions. It is grounded in evidence from behavioral science showing that human decision making is not as notoriously flawed as the nudging approach assumes. We argue that addressing the challenges of our time—such as climate change, pandemics, and the threats to liberal democracies and human autonomy posed by digital technologies and choice architectures—calls for fostering capable and engaged citizens as a first line of response to complement slower, systemic approaches…(More)”.

AI for Social Good


Essay by Iqbal Dhaliwal: “Artificial intelligence (AI) has the potential to transform our lives. Like the internet, it’s a general-purpose technology that spans sectors, is widely accessible, has a low marginal cost of adding users, and is constantly improving. Tech companies are rapidly deploying more capable AI models that are seeping into our personal lives and work.

AI is also swiftly penetrating the social sector. Governments, social enterprises, and NGOs are infusing AI into programs, while public treasuries and donors are working hard to understand where to invest. For example, AI is being deployed to improve health diagnostics, map flood-prone areas for better relief targeting, grade students’ essays to free up teachers’ time for student interaction, assist governments in detecting tax fraud, and enable agricultural extension workers to customize advice.

But the social sector is also rife with examples over the past two decades of technologies touted as silver bullets that fell short of expectations, including One Laptop Per Child, SMS reminders to take medication, and smokeless stoves to reduce indoor air pollution. To avoid a similar fate, AI-infused programs must incorporate insights from years of evidence generated by rigorous impact evaluations and be scaled in an informed way through concurrent evaluations.

Specifically, implementers of such programs must pay attention to three elements. First, they must use research insights on where AI is likely to have the greatest social impact. Decades of research using randomized controlled trials and other exacting empirical work provide us with insights across sectors on where and how AI can play the most effective role in social programs.

Second, they must incorporate research lessons on how to effectively infuse AI into existing social programs. We have decades of research on when and why technologies succeed or fail in the social sector that can help guide AI adopters (governments, social enterprises, NGOs), tech companies, and donors to avoid pitfalls and design effective programs that work in the field.

Third, we must promote the rigorous evaluation of AI in the social sector so that we disseminate trustworthy information about what works and what does not. We must motivate adopters, tech companies, and donors to conduct independent, rigorous, concurrent impact evaluations of promising AI applications across social sectors (including impact on workers themselves); draw insights emerging across multiple studies; and disseminate those insights widely so that the benefits of AI can be maximized and its harms understood and minimized. Taking these steps can also help build trust in AI among social sector players and program participants more broadly…(More)”.