Use of data & technology for promoting waste sector accountability in Nepal


Saroj Bista at YoungInnovations: “All the Nepalese people are saddened to see waste abandoned in the capital, Kathmandu. Many of them, including Kathmandu Metropolitan City itself, are keen to find solutions to the problem. A 2015 report stated that Kathmandu Metropolitan City (KMC) alone receives 525 tonnes of waste in a day while it manages to collect 516 tonnes of it, meaning that 9 tonnes of waste are left abandoned….

Although many stakeholders, including government agencies, non-governmental organizations, and the private sector, have been working to address the problems associated with solid waste management in the urban sector, the problem has persisted.

YoungInnovations and Clean Up Nepal came together to discuss whether we could tackle this problem. We discussed whether keeping track of everybody’s efforts, and noticing every piece of waste in the city, would raise the accountability of stakeholders and add value. YoungInnovations has over a decade of experience in developing data- and evidence-based tech solutions to problems. Clean Up Nepal is a civil society organization working to provide an enabling environment to improve solid waste management and water, sanitation and hygiene in Nepal by working closely with local communities and relevant stakeholders. On this basis, the two organizations agreed to combine their expertise and offer the government a technology that provides stakeholders with proper data on solid waste and its management.

The preliminary idea was also tested against some ongoing initiatives of this kind (Waste Atlas, Letsdoitworld, etc.), while consultations were held with organizations like The GovLab and ICIMOD to learn from their expertise on open data as well as environmental aspects. A remarkable example of smart waste management carried out in Ulaanbaatar, the capital of Mongolia, further motivated us to test the idea in Nepal….

Nepal Waste Map Web App

The Nepal Waste Map web app is a composite of several features, primarily focused on the following:

  1. Display of key stats and information about solid waste
  2. Admin panel to interact with the data for taking possible actions (update, edit and delete)…

Nepal Waste Map Mobile

The mobile app brings the Nepal Waste Map to mobile phones. Most of its features mirror the Nepal Waste Map web app.

However, some functionalities of the app are key from a data perspective:

Crowdsourcing Functionality

Any member of the public who uses the app can report issues related to illegal waste dumping and waste burning, especially plastic burning. For example, if I see somebody burning plastic waste, I can use the app to report the incident, along with a photo as evidence and the coordinates of the location. The admin of the web app can view the report in real time and take action (such as acknowledging the report and marking it resolved)…(More)”.
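The article doesn’t publish the app’s data model, but the reporting flow it describes needs only a handful of fields. Here is a minimal sketch in Python of what such a crowdsourced report might look like; the field and status names are our own assumptions, not the Nepal Waste Map API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReportStatus(Enum):
    SUBMITTED = "submitted"
    ACKNOWLEDGED = "acknowledged"   # admin has seen the report
    RESOLVED = "resolved"           # admin marked the issue fixed

@dataclass
class WasteReport:
    """One citizen-submitted report, e.g. of plastic burning."""
    category: str                   # e.g. "illegal_dumping" or "waste_burning"
    latitude: float
    longitude: float
    photo_path: str                 # evidence photo uploaded from the phone
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: ReportStatus = ReportStatus.SUBMITTED

# Example: a user reports plastic burning in Kathmandu; an admin later resolves it.
report = WasteReport("waste_burning", 27.7172, 85.3240, "evidence/burning.jpg")
report.status = ReportStatus.ACKNOWLEDGED
report.status = ReportStatus.RESOLVED
```

The status transitions mirror the admin actions the article mentions: a report is submitted, acknowledged, and eventually marked resolved.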

Data rights are civic rights: a participatory framework for GDPR in the US?


Elena Souris and Hollie Russon Gilman at Vox: “…While online rights are coming into question, it’s worth considering how those will overlap with offline rights and civic engagement.

The two may initially seem completely separate, but democracy itself depends on information and communication, and a balance of privacy (secret ballot) and transparency. As communication moves almost entirely to networked online technology platforms, the governance questions surrounding data and privacy have far-reaching civic and political implications for how people interact with all aspects of their lives, from commerce and government services to their friends, families, and communities. That is why we need a conversation about data protections, empowering users with their own information, and transparency — ultimately, data rights are now civic rights…

What could a golden mean in the US look like? Is it possible to take principles of the GDPR and apply a more community-based, citizen-centric approach across states and localities in the United States? Could a US version of the GDPR be designed in a way that included public participation? Perhaps there could be an ongoing participatory role? Most of all, the questions underpinning data regulation need to serve as an impetus for an honest conversation about equity across digital access, digital literacy, and now digital privacy.

Across the country, we’re already seeing successful experiments with a more citizen-inclusive democracy, with localities and cities rising as engines of American re-innovation and laboratories of participatory democracy. Thanks to our federalist system, states are already paving the way for greater electoral reform, from public financing of campaigns to experiments with structures such as ranked-choice voting.

In these local federalist experiments, civic participation is slowly becoming a crucial tool. Innovations from participatory budgeting to interactive policy co-production sessions are giving people in communities a direct say in public policies. For example, the Rural Climate Dialogues in Minnesota empower rural residents to impact policy on long-term climate mitigation. Bowling Green, Kentucky, recently used the online deliberation platform Polis to identify common policy areas for consensus building. Scholars have been writing about various potential participatory models for our digital lives as well, including civic trusts.

Can we take these principles and begin a serious conversation about how to translate the best privacy practices, tools, and methods to ensure that people’s valuable online and offline resources — including their trust, attention span, and vital information — are also protected and honored? Since the people are a primary stakeholder in the conversation about civic data and data privacy, they should have a seat at the table.

Including citizens and residents in these conversations could have a big policy impact. First, working toward a participatory governance framework for civic data would enable people to understand the value of their data in the open market. Second, it would provide greater transparency into the value of networks — an individual’s social graph, a valuable asset which, until now, people have been generating in aggregate without anything in return. Third, it could amplify concerns of more vulnerable data users, including elderly or tech-illiterate citizens — and even refugees and international migrants, as Andrew Young and Stefaan Verhulst recently argued in the Stanford Social Innovation Review.

There are already templates and road maps for responsible data, but talking to those users themselves with a participatory governance approach could make them even more effective. Finally, citizens can help answer tough questions about what we value and when and how we need to make ethical choices with data.

Because data-collecting organizations will have to comply abroad soon, the GDPR is a good opportunity for the American social sector to consider data rights as civic rights and incorporate a participatory process to meet this challenge. Instead of simply assuming regulatory agencies will pave the way, a more participatory data framework could foster an ongoing process of civic empowerment and make the outcome more effective. It’s too soon to know the precise forms or mechanisms new data regulation should take. Instead of a rigid, predetermined format, the process needs to be community-driven by design — ensuring traditionally marginalized communities are front and center in this conversation, not only the elites who already hold the microphone.

It won’t be easy. Building a participatory governance structure for civic data will require empathy, compromise, and potentially challenging the preconceived relationship between people, institutions, and their information. The interplay between our online and offline selves is a continuous process of learning from error. But if we simply replicate the top-down structures of the past, we can’t evolve toward a truly empowered digital democratic future. Instead, let’s use the GDPR as an opening in the United States for advancing the principles of a more transparent and participatory democracy….(More)”.

The citation graph is one of humankind’s most important intellectual achievements


Dario Taraborelli at BoingBoing: “When researchers write, we don’t just describe new findings — we place them in context by citing the work of others. Citations trace the lineage of ideas, connecting disparate lines of scholarship into a cohesive body of knowledge, and forming the basis of how we know what we know.

Today, citations are also a primary source of data. Funders and evaluation bodies use them to appraise scientific impact and decide which ideas are worth funding to support scientific progress. Because of this, data that forms the citation graph should belong to the public. The Initiative for Open Citations was created to achieve this goal.

Back in the 1950s, reference works like Shepard’s Citations provided lawyers with tools to reconstruct which relevant cases to cite in the context of a court trial. No such tool existed at the time for identifying citations in scientific publications. Eugene Garfield — the pioneer of modern citation analysis and citation indexing — described the idea of extending this approach to science and engineering as his Eureka moment. Garfield’s first experimental Genetics Citation Index, compiled by the newly formed Institute for Scientific Information (ISI) in 1961, offered a glimpse into what a full citation index could mean for science at large. It was distributed, for free, to 1,000 libraries and scientists in the United States.

Fast forward to the end of the 20th century: the Web of Science citation index — maintained by Thomson Reuters, which acquired ISI in 1992 — had become the canonical source for scientists, librarians, and funders to search scholarly citations, and for the field of scientometrics to study the structure and evolution of scientific knowledge. ISI could have turned into a publicly funded initiative, but it started instead as a for-profit effort. In 2016, Thomson Reuters sold its Intellectual Property & Science business to a private-equity fund for $3.55 billion. Its citation index is now owned by Clarivate Analytics.

Since raw citation data is not copyrightable, it is ironic that the vision of building a comprehensive index of scientific literature has turned into a billion-dollar business, with academic institutions paying cripplingly expensive annual subscriptions for access and the public locked out.

Enter the Initiative for Open Citations.

In 2016, a small group founded the Initiative for Open Citations (I4OC) as a voluntary effort to work with scholarly publishers — who routinely publish this data — to persuade them to release it in the open and promote its unrestricted availability. Before the launch of the I4OC, only 1% of indexed scholarly publications with references were making citation data available in the public domain. When the I4OC was officially announced in 2017, we were able to report that this number had shifted from 1% to 40%. In the main, this was thanks to the swift action of a small number of large academic publishers.

In April 2018, we are celebrating the first anniversary of the initiative. Since the launch, the fraction of indexed scientific articles with open citation data (as measured by Crossref) has surpassed 50%, and the number of participating publishers has risen to 490. Over half a billion references are now openly available to the public without any copyright restriction. Of the 20 biggest publishers with citation data, all but five — Elsevier, IEEE, Wolters Kluwer Health, IOP Publishing, and ACS — now make this data open via Crossref and its APIs. Over 50 organisations — including science funders, platforms and technology organizations, libraries, research and advocacy institutions — have joined us in this journey to help advocate and promote the reuse of open citations….(More)”.
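Because the open references flow through Crossref, anyone can retrieve them programmatically. A minimal sketch in Python against Crossref’s public REST works API (the DOI below is a hypothetical placeholder; substitute a real one):

```python
import requests

def open_references(doi: str) -> list[dict]:
    """Fetch the openly deposited reference list for one DOI from Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}")
    resp.raise_for_status()
    # The "reference" field is present only when the publisher
    # deposits its citation data openly.
    return resp.json()["message"].get("reference", [])

refs = open_references("10.1234/example-doi")  # hypothetical DOI
for ref in refs[:5]:
    # Entries may carry a resolved DOI, an unstructured citation string, or both.
    print(ref.get("DOI"), ref.get("unstructured"))
```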

Inside the Jordan refugee camp that runs on blockchain


Russ Juskalian at MIT Tech Review: “…Though Bassam may not know it, his visit to the supermarket involves one of the first uses of blockchain for humanitarian aid. By letting a machine scan his iris, he confirmed his identity on a traditional United Nations database, queried a family account kept on a variant of the Ethereum blockchain by the World Food Programme (WFP), and settled his bill without opening his wallet.

Started in early 2017, Building Blocks, as the program is known, helps the WFP distribute cash-for-food aid to over 100,000 Syrian refugees in Jordan. By the end of this year, the program will cover all 500,000 refugees in the country. If the project succeeds, it could eventually speed the adoption of blockchain technologies at sister UN agencies and beyond.

Building Blocks was born of a need to save money. The WFP helps feed 80 million people around the globe, but since 2009 the organization has shifted from delivering food to transferring money to people who need food. This approach could feed more people, improve local economies, and increase transparency. But it also introduces a notable point of inefficiency: working with local or regional banks. For the WFP, which transferred over $1.3 billion in such benefits in 2017 (about 30 percent of its total aid), transaction and other fees are money that could have gone to millions of meals. Early results of the blockchain program touted a 98 percent reduction in such fees.

And if the man behind the project, WFP executive Houman Haddad, has his way, the blockchain-based program will do far more than save money. It will tackle a central problem in any humanitarian crisis: how do you get people without government identity documents or a bank account into a financial and legal system where those things are prerequisites to getting a job and living a secure life?

Haddad imagines Bassam one day walking out of Zaatari with a so-called digital wallet, filled with his camp transaction history, his government ID, and access to financial accounts, all linked through a blockchain-based identity system. With such a wallet, when Bassam left the camp he could much more easily enter the world economy. He would have a place for an employer to deposit his pay, for a mainstream bank to see his credit history, and for a border or immigration agent to check his identity, which would be attested to by the UN, the Jordanian government, and possibly even his neighbors….

But because Building Blocks runs on a small, permissioned blockchain, the project’s scope and impact are narrow. So narrow that some critics say it’s a gimmick and the WFP could just as easily use a traditional database. Haddad acknowledges as much: “Of course we could do all of what we’re doing today without using blockchain,” he says. But, he adds, “my personal view is that the eventual end goal is digital ID, and beneficiaries must own and control their data.”

Other critics say blockchains are too new for humanitarian use. Plus, it’s ethically risky to experiment with vulnerable populations, says Zara Rahman, a researcher based in Berlin at the Engine Room, a nonprofit group that supports social-change organizations in using technology and data. After all, the bulk collection of identifying information and biometrics has historically been a disaster for people on the run….(More)”.

Everything* You Always Wanted To Know About Blockchain (But Were Afraid To Ask)


Alice Meadows at the Scholarly Kitchen: “In this interview, Joris van Rossum (Director of Special Projects, Digital Science, and author of Blockchain for Research) and Martijn Roelandse (Head of Publishing Innovation, Springer Nature) discuss blockchain in scholarly communications, including the recently launched Peer Review Blockchain initiative….

How would you describe blockchain in one sentence?

Joris: Blockchain is a technology for decentralized, self-regulating data which can be managed and organized in a revolutionary new way: open, permanent, verified and shared, without the need for a central authority.

How does it work (in layman’s language!)?

Joris: In a regular database you need a gatekeeper to ensure that whatever is stored in it (financial transactions, but this could be anything) is valid. With blockchain, however, trust is created not by means of a curator but through consensus mechanisms and cryptographic techniques. Consensus mechanisms clearly define what new information is allowed to be added to the datastore. With the help of a technique called hashing, it is not possible to change any existing data without this being detected by others. And through cryptography, the database can be shared without real identities being revealed. In this way, blockchain technology removes the need for a middleman.
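To make the hashing point concrete, here is a toy sketch in Python of a hash-linked ledger: each block stores the hash of its predecessor, so editing any earlier record breaks every later link. This illustrates only the tamper-evidence mechanism, not a production blockchain (no consensus, no network):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over the block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Recompute every link; a tampered block invalidates all later ones."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(chain))                    # False: the chain detects the edit
```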

How is this relevant to scholarly communication?

Joris: It’s very relevant. We’ve explored the possibilities and initiatives in a report published by Digital Science. Blockchain could be applied on several levels, which is reflected in a number of initiatives announced recently. For example, a cryptocurrency for science could be developed. This ‘bitcoin for science’ could introduce a monetary reward scheme for researchers, such as for peer review. Another relevant area, specifically for publishers, is digital rights management. The potential for this was picked up by this blog at a very early stage. Blockchain also allows publishers to easily integrate micropayments, thereby creating a potentially interesting business model alongside open access and subscriptions.

Moreover, blockchain as a datastore with no central owner where information can be stored pseudonymously could support the creation of a shared and authoritative database of scientific events. Here traditional activities such as publications and citations could be stored, along with currently opaque and unrecognized activities, such as peer review. A data store incorporating all scientific events would make science more transparent and reproducible, and allow for more comprehensive and reliable metrics….

How do you see developments in the industry regarding blockchain?

Joris: In the last couple of months we’ve seen the launch of many interesting initiatives, for example scienceroot.com, Pluto.network, and orvium.io. These are all ambitious projects incorporating many of the potential applications of blockchain in the industry, and to an extent they aim to disrupt the current ecosystem. Recently artifacts.ai was announced, an interesting initiative that aims to allow researchers to permanently document every stage of the research process. However, we believe that traditional players, not least publishers, should also look at how services to researchers can be improved using blockchain technology. There are challenges (e.g., around reproducibility and peer review), but that does not necessarily mean the entire ecosystem needs to be overhauled. In fact, in academic publishing we have a good track record of incorporating new technologies and using them to improve our role in scholarly communication. In other words, we should fix the system, not break it!

What is the Peer Review Blockchain initiative, and why did you join?

Martijn: The problems of research reproducibility, recognition of reviewers, and the rising burden of the review process as research volumes increase each year have led to a challenging landscape for scholarly communications. There is an urgent need for change to tackle these problems, which is why we joined this initiative: to take a step forward towards a fairer and more transparent ecosystem for peer review. The initiative aims to look at practical solutions that leverage the distributed registry and smart contract elements of blockchain technologies. Each of the parties can deposit peer review activity in the blockchain — depending on peer review type, either partially or fully encrypted — and subsequent activity is also deposited in the reviewer’s ORCID profile. These business transactions — depositing peer review activity against person x — will be verifiable and auditable, thereby increasing transparency and reducing the risk of manipulation. Through the shared processes and recordkeeping we will set up with other publishers, trust will increase.

A separate trend we see is the broadening scope of research evaluation, which has prompted researchers to seek (more) recognition for their peer review work, beyond citations and altmetrics. At a later stage, new applications could be built on top of the peer review blockchain….(More)”.
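The interview doesn’t specify the initiative’s schema, but the deposit flow Martijn describes can be sketched roughly: hash the review so the activity is verifiable, optionally encrypt the full text, and record it against the reviewer’s ORCID iD. A hedged sketch in Python (field names are our own assumptions; the ORCID iD shown is ORCID’s documented example identifier):

```python
import hashlib
import json
from datetime import datetime, timezone
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, held by the publisher or platform
fernet = Fernet(key)

def deposit_review(ledger: list, orcid: str, journal: str,
                   review_text: str, encrypt_fully: bool = True) -> dict:
    """Record one peer-review activity against a reviewer's ORCID iD.

    The ledger stores a hash of the review (and optionally the encrypted
    text), so the activity is auditable without being publicly readable.
    """
    record = {
        "orcid": orcid,
        "journal": journal,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "review_hash": hashlib.sha256(review_text.encode()).hexdigest(),
    }
    if encrypt_fully:
        record["payload"] = fernet.encrypt(review_text.encode()).decode()
    ledger.append(record)
    return record

ledger: list = []
deposit_review(ledger, "0000-0002-1825-0097", "Journal of Examples",
               "Sound methodology; accept with minor revisions.")
print(json.dumps({k: v for k, v in ledger[0].items() if k != "payload"}, indent=2))
```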

A Tool to Help Nonprofits Find Each Other, Pursue Funding and Collaborate


DrexelNow: “…More than 40 percent of Philly nonprofit organizations operate on margins of zero or less, and few can be considered financially strong. With more than half of Philly’s nonprofits operating on a slim-to-none budget with limited support staff, one Drexel University researcher sought to help streamline their fundraising process by giving them easy access to data from the Internal Revenue Service and the U.S. Census. His goal: create a tool that makes information about nonprofit organizations, and the communities they’re striving to help, more accessible to like-minded charities and the philanthropic organizations that seek to fund them.

When the IRS recently released millions of records on the finances and operations of nonprofit organizations in a format that can be downloaded and analyzed, it was expected that this would usher in a new era of transparency and innovation for the nonprofit sector. Instead, many technical issues made the data virtually unusable by nonprofit organizations.

Single-page location intelligence tool: http://bit.ly/PhillyNPOs

Neville Vakharia, an assistant professor and research director in Drexel’s graduate Arts Administration program in the Westphal College of Media Arts & Design, tackled this issue by creating ImpactView Philadelphia, an online tool and resource that uses the publicly available data on nonprofit organizations to present an easy-to-access snapshot of Philadelphia’s nonprofit ecosystem.

Vakharia combined the publicly available data from the IRS with the most recent American Community Survey data released by the U.S. Census Bureau. These data were combined with a map of Philadelphia to create a visual database easily searchable by organization, address or zip code. Once an organization is selected, the analysis tools allow the user to see data on the map, alongside measures of households and individuals surrounding the organization — important information for nonprofits to have when they are applying for grants or looking for partners.
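ImpactView’s internals aren’t published in this article, but the core join it describes (IRS filings matched to Census demographics by geography) takes only a few lines of pandas. A sketch under assumed inputs; the file and column names below are hypothetical stand-ins for the real IRS and ACS extracts:

```python
import pandas as pd

# Hypothetical extracts: IRS Form 990 e-file data and ACS 5-year estimates.
nonprofits = pd.read_csv("irs_990_philadelphia.csv")  # one row per organization
acs = pd.read_csv("acs_by_zip.csv")                   # one row per ZIP code

# Keep ZIP codes as zero-padded strings so the join keys line up.
nonprofits["zip"] = nonprofits["zip"].astype(str).str.zfill(5)
acs["zip"] = acs["zip"].astype(str).str.zfill(5)

# Attach neighborhood demographics to each organization.
merged = nonprofits.merge(acs, on="zip", how="left")

# Example query from the article: high-poverty pockets with many children,
# plus the nonprofit service providers located there.
targets = merged[(merged["poverty_rate"] > 0.30) & (merged["pct_under_18"] > 0.25)]
print(targets[["org_name", "zip", "total_revenue", "poverty_rate"]].head())
```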

“Through the location intelligence visualizer, users can immediately find areas of need and potential collaborators. The data are automatically visualized and mapped on-screen, identifying, for example, pockets of high poverty with large populations of children as well as the nonprofit service providers in these areas,” said Vakharia. “Making this data accessible for nonprofits will cut down on time spent seeking information and improve the ability to make data-informed decisions, while also helping with case making and grant applications.”…(More)”.

To serve a free society, social media must evolve beyond data mining


Barbara Romzek and Aram Sinnreich at The Conversation: “…For years, watchdogs have been warning about sharing information with data-collecting companies, firms engaged in the relatively new line of business some academics have called “surveillance capitalism.” Most casual internet users are only now realizing how easy – and common – it is for unaccountable and unknown organizations to assemble detailed digital profiles of them. They do this by combining the discrete bits of information consumers have given up to e-tailers, health sites, quiz apps and countless other digital services.

As scholars of public accountability and digital media systems, we know that the business of social media is based on extracting user data and offering it for sale. There’s no simple way for them to protect data as many users might expect. Like the social pollution of fake news, bullying and spam that Facebook’s platform spreads, the company’s privacy crisis also stems from a power imbalance: Facebook knows nearly everything about its users, who know little to nothing about it.

It’s not enough for people to delete their Facebook accounts. Nor is it likely that anyone will successfully replace it with a nonprofit alternative centering on privacy, transparency and accountability. Furthermore, this problem is not specific just to Facebook. Other companies, including Google and Amazon, also gather and exploit extensive personal data, and are locked in a digital arms race that we believe threatens to destroy privacy altogether….

Governments need to be better guardians of public welfare – including privacy. Many companies using various aspects of technology in new ways have so far avoided regulation by stoking fears that rules might stifle innovation. Facebook and others have often claimed that they’re better at regulating themselves in an ever-changing environment than a slow-moving legislative process could be….

To encourage companies to serve democratic principles and focus on improving people’s lives, we believe the chief business model of the internet needs to shift to building trust and verifying information. While it won’t be an immediate change, social media companies pride themselves on their adaptability and should be able to take on this challenge.

The alternative, of course, could be far more severe. In the 1980s, when federal regulators decided that AT&T was using its power in the telephone market to hurt competition and consumers, they forced the massive conglomerate to break up. A similar but less dramatic change happened in the early 2000s when cellphone companies were forced to let people keep their phone numbers even if they switched carriers.

Data, and particularly individuals’ personal data, are the precious metals of the internet age. Protecting individual data while expanding access to the internet and its many social benefits is a fundamental challenge for free societies. Creating, using and protecting data properly will be crucial to preserving and improving human rights and civil liberties in this still young century. To meet this challenge will require both vigilance and vision, from businesses and their customers, as well as governments and their citizens….(More).

The Scientific Paper Is Obsolete


James Somers in The Atlantic: “The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that they’ve contributed to a replication crisis, or, put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today?…(More).

A New Model for Industry-Academic Partnerships


Working Paper by Gary King and Nathaniel Persily: “The mission of the academic social sciences is to understand and ameliorate society’s greatest challenges. The data held by private companies hold vast potential to further this mission. Yet, because of their interaction with highly politicized issues, customer privacy, proprietary content, and the differing goals of firms and academics, these data are often inaccessible to university researchers.

We propose here a new model for industry-academic partnerships that addresses these problems via a novel organizational structure: Respected scholars form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.

We also report on a partnership we helped forge under this model to make data available about the extremely visible and highly politicized issues surrounding the impact of social media on elections and democracy. In our partnership, Facebook will provide privacy-preserving data and access; seven major politically and substantively diverse nonprofit foundations will fund the research; and the Social Science Research Council will oversee the peer review process for funding and data access….(More)”.

From Crowdsourcing to Extreme Citizen Science: Participatory Research for Environmental Health


P.B. English, M.J. Richardson, and C. Garzón-Galvis in the Annual Review of Public Health: “Environmental health issues are becoming more challenging, and addressing them requires new approaches to research design and decision-making processes. Participatory research approaches, in which researchers and communities are involved in all aspects of a research study, can improve study outcomes and foster greater data accessibility and utility as well as increase public transparency. Here we review varied concepts of participatory research, describe how it complements and overlaps with community engagement and environmental justice, examine its intersection with emerging environmental sensor technologies, and discuss the strengths and limitations of participatory research. Although participatory research includes methodological challenges, such as biases in data collection and data quality, it has been found to increase the relevance of research questions, result in better knowledge production, and impact health policies. Improved research partnerships among government agencies, academia, and communities can increase scientific rigor, build community capacity, and produce sustainable outcomes….(More)”