Literature review on collective intelligence: a crowd science perspective


Chao Yu in the International Journal of Crowd Science: “A group can have more power and greater wisdom than the sum of its individuals. Foreign scholars noticed this long ago and called it collective intelligence. It emerges from communication, collaboration, competition, brainstorming, and so on. Collective intelligence appears in many fields, such as public decision-making, voting activities, social networks and crowdsourcing.

Crowd science mainly focuses on the basic principles and laws of the intelligent activities of groups under the new interconnection model. It explores how to give full play to the intelligence of agents and groups, and how to tap their potential to solve problems that are difficult for a single agent.

In this paper, we present a literature review on collective intelligence from a crowd science perspective. We focus on researchers’ related work, especially the circumstances under which groups can show their wisdom, how to measure it, how to optimize it, and its present and future applications in the digital world. That is exactly what crowd science pays close attention to….(More)”.

Use of data & technology for promoting waste sector accountability in Nepal


Saroj Bista at YoungInnovations: “All Nepalese people are saddened to see waste abandoned in the capital, Kathmandu. Many of them, including Kathmandu City itself, are keen to find solutions to this problem. A 2015 report stated that Kathmandu Metropolitan City (KMC) alone receives 525 tonnes of waste in a day while it manages to collect 516 tonnes of it, meaning that 8 tonnes of waste are left abandoned….

Although many stakeholders, including the government sector, non-governmental organizations, and the private sector, have been working to address the problems associated with solid waste mapping in the urban sector, the problem has persisted.

YoungInnovations and Clean Up Nepal came together to discuss whether we could tackle this problem. We discussed whether keeping track of everybody’s efforts, as well as noticing every piece of waste in the city, would raise the accountability of stakeholders and add value. YoungInnovations has over a decade of experience in developing data- and evidence-based tech solutions to problems. Clean Up Nepal is a civil society organization working to provide an enabling environment to improve solid waste management and water, sanitation and hygiene in Nepal by working closely with local communities and relevant stakeholders. The two organizations agreed to combine their expertise and offer the government a technology that provides stakeholders with proper data related to solid waste and its management.

Also, the preliminary idea was tested against some ongoing initiatives of this kind (Waste Atlas, Letsdoitworld, etc.), while consultations were held with organizations like The GovLab and ICIMOD to learn from their expertise on open data as well as environmental aspects. A remarkable example of smart waste management being carried out in Ulaanbaatar, the capital of Mongolia, motivated us to test the idea in Nepal….

Nepal Waste Map Web App

The Nepal Waste Map web app is a composite of several features, primarily focused on the following:

  1. Display of key stats and information about solid waste
  2. Admin panel to interact with the data for taking possible actions (update, edit and delete)…

Nepal Waste Map Mobile

The mobile app primarily brings the Nepal Waste Map to mobile phones. Most of its features resemble those of the Nepal Waste Map Web App.

However, some functionalities in the app are key from a data perspective:

Crowdsourcing Functionality

Any member of the public who uses the app can report issues related to illegal waste dumping and waste burning, especially plastic burning. For example, if I see somebody burning plastic waste, I can use the app to report the incident, attaching a photo as evidence along with the coordinates. The admin of the web app can view the report in real time and take action (not limited to, but including, acknowledging the report and marking it resolved)…(More)”.
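
To sketch what such a crowdsourced report might look like as data, here is a minimal, illustrative Python example; the class, field names, and status values are assumptions for illustration, not the actual Nepal Waste Map schema.

```python
# Illustrative sketch only: field names and statuses are assumptions,
# not the actual Nepal Waste Map data model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WasteReport:
    issue_type: str                      # e.g. "illegal dumping" or "plastic burning"
    latitude: float
    longitude: float
    photo_path: str                      # photo submitted as evidence
    reported_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "reported"             # "reported" -> "acknowledged" -> "resolved"

    def acknowledge(self) -> None:
        self.status = "acknowledged"

    def resolve(self) -> None:
        self.status = "resolved"

# A citizen reports plastic burning; the admin acknowledges and later resolves it.
report = WasteReport("plastic burning", 27.7172, 85.3240, "evidence.jpg")  # example Kathmandu coordinates
report.acknowledge()
report.resolve()
print(report.status)                     # "resolved"
```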

The citation graph is one of humankind’s most important intellectual achievements


Dario Taraborelli at BoingBoing: “When researchers write, we don’t just describe new findings — we place them in context by citing the work of others. Citations trace the lineage of ideas, connecting disparate lines of scholarship into a cohesive body of knowledge, and forming the basis of how we know what we know.

Today, citations are also a primary source of data. Funders and evaluation bodies use them to appraise scientific impact and decide which ideas are worth funding to support scientific progress. Because of this, data that forms the citation graph should belong to the public. The Initiative for Open Citations was created to achieve this goal.

Back in the 1950s, reference works like Shepard’s Citations provided lawyers with tools to identify which relevant cases to cite in the context of a court trial. No such tool existed at the time for identifying citations in scientific publications. Eugene Garfield — the pioneer of modern citation analysis and citation indexing — described the idea of extending this approach to science and engineering as his Eureka moment. Garfield’s first experimental Genetics Citation Index, compiled by the newly formed Institute for Scientific Information (ISI) in 1961, offered a glimpse into what a full citation index could mean for science at large. It was distributed, for free, to 1,000 libraries and scientists in the United States.

Fast forward to the end of the 20th century: the Web of Science citation index — maintained by Thomson Reuters, which acquired ISI in 1992 — had become the canonical source for scientists, librarians, and funders to search scholarly citations, and for the field of scientometrics to study the structure and evolution of scientific knowledge. ISI could have turned into a publicly funded initiative, but it started instead as a for-profit effort. In 2016, Thomson Reuters sold its Intellectual Property & Science business to a private-equity fund for $3.55 billion. Its citation index is now owned by Clarivate Analytics.

Given that raw citation data is not copyrightable, it is ironic that the vision of building a comprehensive index of scientific literature has turned into a billion-dollar business, with academic institutions paying cripplingly expensive annual subscriptions for access and the public locked out.

Enter the Initiative for Open Citations.

In 2016, a small group founded the Initiative for Open Citations (I4OC) as a voluntary effort to work with scholarly publishers — who routinely publish this data — to persuade them to release it in the open and promote its unrestricted availability. Before the launch of the I4OC, only 1% of indexed scholarly publications with references were making citation data available in the public domain. When the I4OC was officially announced in 2017, we were able to report that this number had shifted from 1% to 40%. In the main, this was thanks to the swift action of a small number of large academic publishers.

In April 2018, we are celebrating the first anniversary of the initiative. Since the launch, the fraction of indexed scientific articles with open citation data (as measured by Crossref) has surpassed 50%, and the number of participating publishers has risen to 490. Over half a billion references are now openly available to the public without any copyright restriction. Of the top 20 biggest publishers with citation data, all but 5 — Elsevier, IEEE, Wolters Kluwer Health, IOP Publishing, ACS — now make this data open via Crossref and its APIs. Over 50 organisations — including science funders, platforms and technology organizations, libraries, research and advocacy institutions — have joined us in this journey to help advocate and promote the reuse of open citations….(More)”.
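
To give a sense of what this openness means in practice, here is a minimal Python sketch that asks Crossref’s public REST API for the openly deposited references of a single work; it assumes only the documented /works/{doi} endpoint and the "reference" field of its JSON response, and the DOI used is just a placeholder.

```python
# Minimal sketch, assuming the public Crossref REST API at api.crossref.org and
# its documented /works/{doi} endpoint; the DOI below is only a placeholder.
import requests

def open_references(doi: str) -> list:
    """Return the openly deposited reference list for a DOI, if any."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    message = resp.json()["message"]
    # The "reference" field is present only when the publisher has deposited
    # open references for this work.
    return message.get("reference", [])

if __name__ == "__main__":
    for ref in open_references("10.1000/example.doi"):  # placeholder DOI
        # Each reference may carry a resolved DOI or just an unstructured string.
        print(ref.get("DOI") or ref.get("unstructured", "(no structured metadata)"))
```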

Participatory Budgeting: Step to Building Active Citizenship or a Distraction from Democratic Backsliding?


David Sasaki: “Is there any there there? That’s what we wanted to uncover beneath the hype and skepticism surrounding participatory budgeting, an innovation in democracy that began in Brazil in 1989 and has quickly spread to nearly every corner of the world like a viral hashtag….We ended up selecting two groups of consultants for two phases of work. The first phase was led by three academic researchers — Brian Wampler, Mike Touchton, and Stephanie McNulty — to synthesize what we know broadly about PB’s impact and where there are gaps in the evidence. mySociety led the second phase, which was originally intended to identify the opportunities and challenges faced by civil society organizations and public officials that implement participatory budgeting. However, a number of unforeseen circumstances, including contested elections in Kenya and a major earthquake in Mexico, shifted mySociety’s focus to take a global, field-wide perspective.

In the end, we were left with two reports that were similar in scope but differed in perspective. Together they make for compelling reading. And while they come from different perspectives, they settle on similar recommendations. I’ll focus on just three: 1) the need for better research, 2) the lack of global coordination, and 3) the emerging opportunity to link natural resource governance with participatory budgeting….

As we consider some preliminary opportunities to advance participatory budgeting, we are clear-eyed about the risks and challenges. In the face of democratic backsliding and the concern that liberal democracy may not survive the 21st century, are these efforts to deepen local democracy merely a distraction from a larger threat, or is this a way to build active citizenship? Also, implementing PB is expensive — both in terms of money and time; is it worth the investment? Is PB just the latest checkbox for governments that want a reputation for supporting citizen participation without investing in the values and process it entails? Just like the proliferation of fake “consultation meetings,” fake PB could merely exacerbate our disappointment with democracy. What should we make of the rise of participatory budgeting in quasi-authoritarian contexts like China and Russia? Is PB a tool for undemocratic central governments to keep local governments in check while giving citizens a simulacrum of democratic participation? Crucially, without intentional efforts to be inclusive like we’ve seen in Boston, PB could merely direct public resources to those neighborhoods with the most outspoken and powerful residents.

On the other hand, we don’t want to dismiss the significant opportunities that come with PB’s rapid global expansion. For example, what happens when social movements lose their momentum between election cycles? Participatory budgeting could create a civic space for social movements to pursue concrete outcomes while engaging with neighbors and public officials. (In China, it has even helped address the urban-rural divide on perspectives toward development policy.) Meanwhile, social media have exacerbated our human tendency to complain, but participatory budgeting requires us to shift our perspective from complaints to engaging with others on solutions. It could even serve as a gateway to deeper forms of democratic participation and increased trust between governments, civil society organizations, and citizens. Perhaps participatory budgeting is the first step we need to rebuild our civic infrastructure and make space for more diverse voices to steer our complex public institutions.

Until we have more research and evidence, however, these possibilities remain speculative….(More)”.

Everything* You Always Wanted To Know About Blockchain (But Were Afraid To Ask)


Alice Meadows at the Scholarly Kitchen: “In this interview, Joris van Rossum (Director of Special Projects, Digital Science, and author of Blockchain for Research) and Martijn Roelandse (Head of Publishing Innovation, Springer Nature) discuss blockchain in scholarly communications, including the recently launched Peer Review Blockchain initiative….

How would you describe blockchain in one sentence?

Joris: Blockchain is a technology for decentralized, self-regulating data which can be managed and organized in a revolutionary new way: open, permanent, verified and shared, without the need for a central authority.

How does it work (in layman’s language!)?

Joris: In a regular database you need a gatekeeper to ensure that whatever is stored (financial transactions, but this could be anything) is valid. With blockchain, however, trust is not created by means of a curator but through consensus mechanisms and cryptographic techniques. Consensus mechanisms clearly define what new information is allowed to be added to the datastore. With the help of a technique called hashing, it is not possible to change any existing data without this being detected by others. And through cryptography, the database can be shared without real identities being revealed. So blockchain technology removes the need for a middleman.
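
To illustrate the hashing point, here is a minimal, illustrative Python sketch (not drawn from the interview or from any production blockchain): each block stores the hash of the previous one, so changing an earlier entry is immediately detectable.

```python
# Illustrative sketch only: a toy hash chain showing why tampering is detectable.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON serialization of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash({"data": data, "prev_hash": prev_hash})
    chain.append(block)

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected = block_hash({"data": block["data"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False            # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False            # the link to the previous block is broken
    return True

chain = []
add_block(chain, "transaction A")
add_block(chain, "transaction B")
print(is_valid(chain))              # True
chain[0]["data"] = "tampered"       # change history...
print(is_valid(chain))              # False: the change is detected
```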

How is this relevant to scholarly communication?

Joris: It’s very relevant. We’ve explored the possibilities and initiatives in a report published by Digital Science. The blockchain could be applied on several levels, which is reflected in a number of initiatives announced recently. For example, a cryptocurrency for science could be developed. This ‘bitcoin for science’ could introduce a monetary reward scheme to researchers, such as for peer review. Another relevant area, specifically for publishers, is digital rights management. The potential for this was picked up by this blog at a very early stage. Blockchain also allows publishers to easily integrate micropayments, thereby creating a potentially interesting business model alongside open access and subscriptions.

Moreover, blockchain as a datastore with no central owner where information can be stored pseudonymously could support the creation of a shared and authoritative database of scientific events. Here traditional activities such as publications and citations could be stored, along with currently opaque and unrecognized activities, such as peer review. A data store incorporating all scientific events would make science more transparent and reproducible, and allow for more comprehensive and reliable metrics….

How do you see developments in the industry regarding blockchain?

Joris: In the last couple of months we’ve seen the launch of many interesting initiatives. For example, scienceroot.com, Pluto.network, and orvium.io. These are all ambitious projects incorporating many of the potential applications of blockchain in the industry, and to an extent they aim to disrupt the current ecosystem. Recently artifacts.ai was announced, an interesting initiative that aims to allow researchers to permanently document every stage of the research process. However, we believe that traditional players, and not least publishers, should also look at how services to researchers can be improved using blockchain technology. There are challenges (e.g. around reproducibility and peer review), but that does not necessarily mean the entire ecosystem needs to be overhauled. In fact, in academic publishing we have a good track record of incorporating new technologies and using them to improve our role in scholarly communication. In other words, we should fix the system, not break it!

What is the Peer Review Blockchain initiative, and why did you join?

Martijn: The problems of research reproducibility, recognition of reviewers, and the rising burden of the review process as research volumes increase each year have led to a challenging landscape for scholarly communications. There is an urgent need for change to tackle these problems, which is why we joined this initiative: to take a step forward towards a fairer and more transparent ecosystem for peer review. The initiative aims to look at practical solutions that leverage the distributed registry and smart contract elements of blockchain technologies. Each of the parties can deposit peer review activity in the blockchain — depending on peer review type, either partially or fully encrypted — and subsequent activity is also deposited in the reviewer’s ORCID profile. These business transactions — depositing peer review activity against person x — will be verifiable and auditable, thereby increasing transparency and reducing the risk of manipulation. Through the shared processes and recordkeeping we will set up with other publishers, trust will increase.

A separate trend we see is the broadening scope of research evaluation, which has prompted researchers to seek (more) recognition for their peer review work, beyond citations and altmetrics. At a later stage, new applications could be built on top of the peer review blockchain….(More)”.
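
To make the deposit mechanism Martijn describes above a little more concrete, here is a purely hypothetical Python sketch of what such a record might look like; the field names, identifiers, and hashing choice are assumptions for illustration, not the initiative’s actual schema.

```python
# Hypothetical sketch: field names and structure are illustrative assumptions,
# not the Peer Review Blockchain initiative's actual schema.
import hashlib
from datetime import date

def deposit_record(reviewer_orcid: str, manuscript_id: str, report_text: str) -> dict:
    # Store only a hash of the report, so the activity is verifiable and auditable
    # without the (possibly confidential) report itself being revealed.
    report_hash = hashlib.sha256(report_text.encode()).hexdigest()
    return {
        "reviewer_orcid": reviewer_orcid,
        "manuscript_id": manuscript_id,
        "report_sha256": report_hash,
        "deposited_on": date.today().isoformat(),
    }

# Placeholder ORCID iD and manuscript identifier, purely for illustration.
record = deposit_record("0000-0000-0000-0000", "MS-2018-0042", "Full text of the review...")
print(record)
```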

App facilitates charity work in Jordan


Springwise: “We have already seen how technology can be harnessed to help facilitate charitable and environmental efforts. For example, we have covered a recycling organization that helps businesses rehome unwanted goods, donating money to charity in addition to helping businesses be more economical. Another example in which technology has been used to raise awareness is a charity chatbot that teaches users about women’s daily journey to find water in Ethiopia.

JoodLife is a start-up that aims to harness technology to support voluntary efforts in Jordan.

The application works as a social platform connecting volunteers and donors in order to facilitate charity work. Donors can register their donations via the app, and all available grants are then displayed. Grants can be searched for in the app, and users can specify the area they wish to search. The donor and the volunteer can then agree on a mechanism for transferring the grant, at which point the grant is no longer shown in the app’s search results. The app aims to serve as a link between donors and volunteers, saving both parties time and effort and making it much easier to make monetary and material donations. The social aspect of the app also increases solidarity between charity workers and makes it much simpler to distribute roles in the most efficient way….(More)”.
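
As a rough illustration of the grant lifecycle described above, here is a minimal Python sketch; the names and statuses are assumptions, since JoodLife’s actual data model is not described beyond this summary.

```python
# Illustrative sketch only: the real JoodLife data model is not public,
# so the names and statuses here are assumptions.
from dataclasses import dataclass

@dataclass
class Grant:
    description: str
    area: str
    available: bool = True

def search(grants: list, area: str) -> list:
    # Only grants that have not yet been matched show up in search results.
    return [g for g in grants if g.available and g.area == area]

def agree_transfer(grant: Grant) -> None:
    # Once donor and volunteer agree on a transfer mechanism, hide the grant.
    grant.available = False

grants = [Grant("winter clothes", "Amman"), Grant("school supplies", "Irbid")]
print(len(search(grants, "Amman")))   # 1
agree_transfer(grants[0])
print(len(search(grants, "Amman")))   # 0
```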

Making sense of evidence: A guide to using evidence in policy


Handbook by the Government of New Zealand: “…helps you take a structured approach to using evidence at every stage of the policy and programme development cycle. Whether you work for central or local government, or the community and voluntary sector, you’ll find advice to help you:

  • understand different types and sources of evidence
  • know what you can learn from evidence
  • appraise evidence and rate its quality
  • decide how to select and use evidence to the best effect
  • take into account different cultural values and knowledge systems
  • be transparent about how you’ve considered evidence in your policy development work…(More)”

(See also the Summary. This handbook is a companion to Making sense of evaluation: A handbook for everyone.)

Algorithmic Impact Assessment (AIA) framework


Report by the AI Now Institute: “Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Researchers, advocates, and policymakers are debating when and where automated decision systems are appropriate, including whether they are appropriate at all in particularly sensitive domains.

Questions are being raised about how to fully assess the short and long term impacts of these systems, whose interests they serve, and if they are sufficiently sophisticated to contend with complex social and historical contexts. These questions are essential, and developing strong answers has been hampered in part by a lack of information and access to the systems under deliberation. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. This is concerning, since an informed policy debate is impossible without the ability to understand which existing systems are being used, how they are employed, and whether these systems cause unintended consequences. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable….

KEY ELEMENTS OF A PUBLIC AGENCY ALGORITHMIC IMPACT ASSESSMENT

1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;

2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;

3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;

4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and

5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct….(More)”.

The Power Of The Wikimedia Movement Beyond Wikimedia


Michael Bernick at Forbes: “In January 2017, we, the constituents of Wikimedia, started an ambitious discussion about our collective future. We reflected on our past sixteen years together and imagined the impact we could have in the world in the next decades. Our aim was to identify a common strategic direction that would unite and inspire people across our movement on our way to 2030, and help us make decisions.”…

The final documents included a strategic direction and a research report, “Wikimedia 2030: Wikimedia’s Role in Shaping the Future of the Information Commons,” an expansive look at Wikimedia, knowledge, technologies, and communications in the next decade. It includes thoughtful sections on Demographics (global population trends and Wikimedia’s opportunities for growth), Emerging Platforms (how Wikimedia platforms will be accessed), Misinformation (how content creators and technologists can work toward a product that is trustworthy), Literacy (changing forms of learning that can benefit from the Wikimedia movement), and the core Wikimedia issues of Open Knowledge and knowledge as a service.

Among its goals, the document calls for greater outreach to areas outside of Europe and North America (which now account for 63% of Wikimedia’s total traffic) and for widening the knowledge and experiential bases of contributors. It urges greater access through mobile devices and other emerging hardware, and expanded partnerships with libraries, museums, galleries, and archives.

The document captures not only the idealism of the enterprise but also why Wikimedia can be described as a movement, not just an enterprise. It calls into question conventional wisdom about how our political and business structures should operate.

Consider the Wikimedia editing process, which seeks to reach common ground on contentious issues. Lisa Gruwell, the Chief Advancement Officer of the Wikimedia Foundation, notes that in the development of an article, editors with diverging claims and views will often weigh in. Rather than escalating divisions, the editing process has been found to reduce them. Gruwell explains:

Through the collaborative editing process, the editors have critical discussions about what reliable sources say about a topic. They have to engage and defend their own perspectives about how an article should be represented, and ultimately find some form of common ground with other editors.

A number of researchers at Harvard Business School, led by Shane Greenstein, Yuan Gu, and Feng Zhu, set out to study this phenomenon. Their findings, published in 2017 as a Harvard Business School working paper, show that editors with different political viewpoints tended to engage in dialogue with each other and, over time, to reduce rather than increase partisanship….(More)”.

The Potential and Practice of Data Collaboratives for Migration


Essay by Stefaan Verhulst and Andrew Young in the Stanford Social Innovation Review: “According to recent United Nations estimates, there are globally about 258 million international migrants, meaning people who live in a country other than the one in which they were born; this represents an increase of 49 percent since 2000. Of those, 26 million people have been forcibly displaced across borders, having migrated either as refugees or asylum seekers. An additional 40 million or so people are internally displaced due to conflict and violence, and millions more are displaced each year because of natural disasters. It is sobering, then, to consider that, according to many observers, global warming is likely to make the situation worse.

Migration flows of all kinds—for work, family reunification, or political or environmental reasons—create a range of both opportunities and challenges for nation states and international actors. But the issues associated with refugees and asylum seekers are particularly complex. Despite the high stakes and increased attention to the issue, our understanding of the full dimensions and root causes of refugee movements remains limited. Refugee flows arise in response to not only push factors like wars and economic insecurity, but also powerful pull factors in recipient countries, including economic opportunities, and perceived goods like greater tolerance and rule of law. In addition, more objectively measurable variables like border barriers, topography, and even the weather, play an important role in determining the number and pattern of refugee flows. These push and pull factors interact in complex and often unpredictable ways. Further complicating matters, some experts argue that push-pull research on migration is dogged by a number of conceptual and methodological limitations.

To mitigate negative impacts and anticipate opportunities arising from high levels of global migration, we need a better understanding of the various factors contributing to the international movement of people and how they work together.

Data—specifically, the widely dispersed data sets that exist across governments, the private sector, and civil society—can help alleviate today’s information shortcoming. Several recent initiatives show the potential of using data to address some of the underlying informational gaps. In particular, there is an important role for a new form of data-driven problem-solving and policymaking—what we call “data collaboratives.” Data collaboratives offer the potential for inter-sectoral collaboration, and for the merging and augmentation of otherwise siloed data sets. While public and private actors are increasingly experimenting with various types of data in a variety of sectors and geographies—including sharing disease data to accelerate disease treatments and leveraging private bus data to improve urban planning—we are only beginning to understand the potential of data collaboration in the context of migration and refugee issues….(More)”.

 
