This free online encyclopedia has achieved what Wikipedia can only dream of


Nikhil Sonnad at Quartz: “The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.

Its creators have solved one of the internet’s fundamental problems: How to provide authoritative, rigorously accurate knowledge, at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.

The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.

The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.

The impossible trinity of information

The online SEP has humble beginnings. Edward Zalta, a philosopher at Stanford’s Center for the Study of Language and Information, launched it way back in September 1995, with just two entries.

Philosophizing, pre-internet. (Flickr/Erik Drost—CC-BY-2.0)

That makes it positively ancient in internet years. Even Wikipedia is only 14. ….

John Perry, the director of the center, was the one who first suggested a dictionary of philosophical terms. But Zalta had bigger ideas. He and two co-authors later described the challenge in a 2002 paper (pdf, p. 1):

A fundamental problem faced by the general public and the members of an academic discipline in the information age is how to find the most authoritative, comprehensive, and up-to-date information about an important topic.

That paper is so old that it mentions “CD-ROMs” in the second sentence. But for all the years that have passed, the basic problem remains unsolved. The three requirements the authors list—“authoritative, comprehensive, and up-to-date”—are to information what the “impossible trinity” is to economics. You can only ever have one or two at once. It is like having your cake, eating it, and then bringing it to another party.

Yet if the goal is to share with people what is true, it is extremely important for a resource to have all of these things. It must be trusted. It must not leave anything out. And it must reflect the latest state of knowledge. Unfortunately, all of the other current ways of designing an encyclopedia very badly fail to meet at least one of these requirements.

Where other encyclopedias fall short

Book

Authoritative: √ | Comprehensive: X | Up-to-date: X

Printed encyclopedias: still a thing. (Princeton University Press)

Printed books are authoritative: Readers trust articles they know have been written and edited by experts. Books also produce a coherent overview of a subject, as the editors consider how each entry fits into the whole. But they become obsolete whenever new research comes out. Nor can a book (or even a set of volumes) be comprehensive, except perhaps for a very narrow discipline; there’s simply too much to print.

Crowdsourcing

Authoritative: X | Comprehensive: X | Up-to-date: √

A crowdsourced online encyclopedia has the virtue of timeliness. Thanks to Wikipedia’s vibrant community of non-experts, its entries on breaking-news events are often updated as they happen. But except perhaps in a few areas in which enough well-informed people care for errors to get weeded out, Wikipedia is not authoritative. One math professor reviewed basic mathematics entries and found them to be “a hot mess of error, arrogance, obscurity, and nonsense.” Nor is it comprehensive: Though it has nearly 5 million articles in the English-language version alone, seemingly in every sphere of knowledge, fewer than 10,000 are “A-class” or better, the status awarded to articles considered “essentially complete.”

Speaking of holes, the SEP has a rather detailed entry on the topic of holes, and it rather nicely illustrates one of Wikipedia’s key shortcomings. Holes present a tricky philosophical problem, the SEP entry explains: A hole is nothing, but we refer to it as if it were something. (Achille Varzi, the author of the holes entry, was called upon in the US presidential election in 2000 to weigh in on the existential status of hanging chads.) If you ask Wikipedia for holes it gives you the young-adult novel Holes and the band Hole.

In other words, holes as philosophical notions are too abstract for a crowdsourced venue that favors clean, factual statements like a novel’s plot or a band’s discography. Wikipedia’s bottom-up model could never produce an entry on holes like the SEP’s.

Crowdsourcing + voting

Authoritative: ? | Comprehensive: X | Up-to-date: ?

A variation on the wiki model is question-and-answer sites like Quora (general interest) and StackOverflow (computer programming), on which users can pose questions and write answers. These are slightly more authoritative than Wikipedia, because users also vote answers up or down according to how helpful they find them; and because answers are given by single, specific users, who are encouraged to say why they’re qualified (“I’m a UI designer at Google,” say).

But while there are sometimes ways to check people’s accreditation, it’s largely self-reported and unverified. Moreover, these sites are far from comprehensive. Any given answer is only as complete as its writer decides or is able to make it. And the questions asked and answered tend to reflect the interests of the sites’ users, which in both Quora and StackOverflow’s cases skew heavily male, American, and techie.

Moreover, the sites aren’t up-to-date. While they may respond quickly to new events, answers that become outdated aren’t deleted or changed but stay there, burdening the site with a growing mass of stale information.

The Stanford solution

So is the impossible trinity just that—impossible? Not according to Zalta. He imagined a different model for the SEP: the “dynamic reference work.”

Dynamic reference work

Authoritative: √ | Comprehensive: √ | Up-to-date: √

To achieve authority, several dozen subject editors—responsible for broad areas like “ancient philosophy” or “formal epistemology”—identify topics in need of coverage, and invite qualified philosophers to write entries on them. If the invitation is accepted, the author sends an outline to the relevant subject editors.

“An editor works with the author to get an optimal outline before the author begins to write,” says Susanna Siegel, subject editor for philosophy of mind. “Sometimes there is a lot of back and forth at this stage.” Editors may also reject entries. Zalta and Uri Nodelman, the SEP’s senior editor, say that this almost never happens. In the rare cases when it does, the reason is usually that an entry is overly biased. In short, this is not somebody randomly deciding to answer a question on Quora.

An executive editorial board—Zalta, Nodelman, and Colin Allen—works to make the SEP comprehensive….(More)”

Collective Intelligence Meets Medical Decision-Making


Paper by Max Wolf, Jens Krause, Patricia A. Carney, Andy Bogart, Ralf H. Kurvers indicating that “The Collective Outperforms the Best Radiologist”: “While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence….(More)”
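To make the aggregation rules concrete, here is a minimal sketch in Python of how “majority”, “quorum”, and “weighted quorum” style rules can combine independent recall decisions. The vote counts, weights, and thresholds below are hypothetical, and the paper’s exact rule definitions may differ.

```python
# Minimal sketch of the three collective-intelligence rules named above,
# applied to independent recall decisions (True = recall the patient).
# Vote counts, weights, and thresholds are hypothetical; the paper's exact
# definitions may differ.

def majority_rule(votes):
    """Recall if more than half of the radiologists vote to recall."""
    return sum(votes) > len(votes) / 2

def quorum_rule(votes, quorum):
    """Recall if at least `quorum` radiologists vote to recall."""
    return sum(votes) >= quorum

def weighted_quorum_rule(votes, weights, threshold):
    """Recall if the weighted sum of recall votes reaches a threshold.
    Here the weights are assumed to reflect each radiologist's past accuracy."""
    return sum(w for vote, w in zip(votes, weights) if vote) >= threshold

# Five independent assessments of one mammogram (hypothetical).
votes = [True, False, True, True, False]
print(majority_rule(votes))                      # True: 3 of 5 recall
print(quorum_rule(votes, quorum=2))              # True: at least 2 recall
print(weighted_quorum_rule(votes, [0.9, 0.6, 0.8, 0.7, 0.5], threshold=2.0))  # True: 0.9 + 0.8 + 0.7 >= 2.0
```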

Crowdsourcing a solution works best if some don’t help


Sarah Scoles at the New Scientist: “There are those who edit Wikipedia entries for accuracy – and those who use the online encyclopaedia daily without ever contributing. A new mathematical model says that’s probably as it should be: crowdsourcing a problem works best when a certain subset of the population chooses not to participate.

“In most social undertakings, there is a group that actually joins forces and works,” says Zoran Levnajic at the University of Ljubljana, Slovenia. “And there is a group of free-riders that typically benefits from work being done, without contributing much.”

Levnajic and his colleagues simulated this scenario. Digital people in a virtual population each had a randomly assigned tendency to collaborate on a problem or “freeload” – working alone and not sharing their findings. The team ran simulations to see whether there was an optimum crowdsource size for problem-solving.

It turned out there was – and surprisingly, the most effective crowd was not the largest possible. In fact, the simulated society was at its problem-solving best when just half the population worked together.

Smaller crowds contained too few willing collaborators with contrasting but complementary perspectives to effectively solve a problem. But if the researchers ran simulations with larger crowds, the freeloaders it contained naturally “defected” to working alone – knowing that they could benefit from any solutions the crowd reached, while also potentially reaping huge benefits if they could solve the problem without sharing the result (arxiv.org/abs/1506.09155)….(More)”

Making Open Innovation Ecosystems Work: Case Studies in Healthcare


New paper by Donald E. Wynn, Jr., Renee M. E. Pratt, and Randy V. Bradley for the Business of Government Center: “In the midst of tightening budgets, many government agencies are being asked to deliver innovative solutions to operational and strategic problems. One way to address this dilemma is to participate in open innovation. This report addresses two key components of open innovation:

  • Adopting external ideas from private firms, universities, and individuals into the agency’s innovation practices
  • Pushing innovations developed internally to the public by reaching out to external channels

To illustrate how open innovation can work, the authors employ the concept of the technological ecosystem to demonstrate that fostering innovations cannot be done alone.

Successful technological ecosystems create innovation through the combination of five key elements:

  1. Resources – the contribution made and exchanged among the participants of an ecosystem
  2. Participants – the characteristics of the participants
  3. Relationships – the relationships and interaction among the participants
  4. Organization – how the ecosystem as a whole is organized
  5. External environment – the broader environment in which the ecosystem operates

This report examines both strategies by studying two cases of government-sponsored participation in technological ecosystems in the health care industry:

  • The U.S. Department of Veterans Affairs (VA) built a new ecosystem around its VistA electronic health records software in order to better facilitate the flow of innovation practices and processes between the VA and external agencies and private firms.
  • The state of West Virginia selected a variant of the VistA software for deployment in its hospital system, saving a significant amount of money while introducing a number of new features and functionality for the seven medical facilities.

As a result of these studies, the authors have identified 10 best practices for agencies seeking to capitalize on open innovation. These best practices include encouraging openness and transparency, minimizing internal friction and bureaucracy, and continuously monitoring external conditions….(More)”

Video app provides underserved clients with immediate legal advice


Springwise: “Pickle is a video call app that gives everyone access to a greater understanding of their constitutional rights, via on-demand legal advice.

Legal representation is expensive and we have already seen platforms in the US and the UK use crowdfunding to help underprivileged clients fund legal battles. Now, Pickle Legal is helping in a different way — it enables video calls between clients and attorneys, which will give everyone access to a greater understanding of their constitutional rights.

Pickle connects clients with legal representation via real-time video communication. Anyone in need of legal advice can download the app to their smartphone. When they launch the app, Pickle alerts their network of attorneys and connects the client with an available professional via a video call. The client can then gain immediate advice from the attorney — helping them to understand their position and rights in the moment.

Pickle Legal is currently in Beta and accepting applications from attorneys and clients alike. During the testing phase, the service is available for free, but eventually clients will pay an affordable rate — since the convenience of the platform is expected to reduce costs. Pickle will also be archiving videos — at the discretion of the parties involved — for use in any case that arises…(More)”

Civic Jazz in the New Maker Cities


At Techonomy: “Our civic innovation movement is about 6 years old. It began when cities started opening up data to citizens, journalists, public-sector companies, non-profits, and government agencies. Open data is an invitation: it’s something to go to work on—both to innovate and to create a more transparent environment about what works and what doesn’t. I remember when we first opened data in SF and began holding conferences and hackathons. In short order we saw a community emerge with remarkable capacity to contribute to, tinker with, hack, explore and improve the city.

Early on this took the form of visualizing data, like crime patterns in Oakland. This was followed by engagement: “Look, the police are skating by and not enforcing prostitution laws. Let’s call them on it!” Civic hackathons brought together journalists, software developers, hardware people, and urbanists. I recall when artists teamed with the Arup engineering firm to build noise sensors and deploy them in the Tenderloin neighborhood (with absolutely no permission from anybody). Noise was an issue. How could you understand the problem unless you measured it?

Something as wonky as an API invited people in, at which point a sense of civic possibility and wonder set in. Suddenly whole swaths of the city were working on the city.  During the SF elections four years ago Gray Area Foundation for the Arts (which I chair) led a project with candidates, bureaucrats, and hundreds of volunteers for a summer-long set of hackathons and projects. We were stunned so many people would come together and collaborate so broadly. It was a movement, fueled by a sense of agency and informed by social media. Today cities are competing on innovation. It has become a movement.

All this has been accelerated by startups, incubators, and the economy’s whole open innovation conversation. Remarkably, we now see capital flowing in to support urban and social ventures where we saw none just a few years ago. The accelerator Tumml in SF is a premier example, but there are similar efforts in many cities.

This initial civic innovation movement was focused on apps and data, a relatively easy place to start. With such an approach you’re not contending for real estate or creating something that might gentrify neighborhoods. Today this movement is at work on how we design the city itself.  As millennials pour in and cities are where most of us live, enormous experimentation is at play. Ours is a highly interdisciplinary age, mixing new forms of software code and various physical materials, using all sorts of new manufacturing techniques.

Brooklyn is a great example.  A few weeks ago I met with Bob Bland, CEO of Manufacture New York. This ambitious 160,000 square foot public/private partnership is reimagining the New York fashion business. In one place it co-locates contract manufacturers, emerging fashion brands and advanced fashion research. Think wearables, sensors, smart fabrics, and the application of advanced manufacturing to fashion. By bringing all these elements under one roof, the supply chain can be compressed, sped-up, and products made more innovative.

New York City’s Economic Development office envisions a local urban supply chain that can offer a scalable alternative to the giant extended global one. In fashion it makes more and more sense for brands to be located near their suppliers. Social media speeds up fashion cycles, so we’re moving beyond predictable seasons and looks specified ahead of time. Manufacturers want to place smaller orders more frequently, so they can take less inventory risk and keep current with trends.

When you put so much talent in one space, creativity flourishes. In fashion, unlike tech, there isn’t a lot of IP protection. So designers can riff off each other’s ideas and incorporate influences as artists do. What might be called stealing ideas in the software business is seen in fashion as jazz and a way to create a more interesting work environment.

A few blocks away is the Brooklyn Navy Yard, a mammoth facility at the center of New York’s emerging maker economy. …In San Francisco this urban innovation movement is working on the form of the city itself. Our main boulevard, Market Street, is to be reimagined, repaved, and made greener with far fewer private vehicles over the next two years. Our planning department, in concert with art organizations here, has made citizen-led urban prototyping the centerpiece of the planning process….(More)”

Can the crowd deliver more open government?


At GovernmentNews: “…Crowdsourcing and policy making was the subject of a lecture by visiting academic Dr Tanja Aitamurto at Victoria’s Swinburne University of Technology earlier this month. Dr Aitamurto wrote “Crowdsourcing for Democracy: New Era in Policy-Making” and led the design and implementation of the Finnish Experiment, a pioneering case study in crowdsourcing policy making.

She spoke about how Scandinavian countries have used crowdsourcing to “tap into the collective intelligence of a large and diverse crowd” in an “open ended knowledge information search process” in an open call for anybody to participate online and complete a task.

It has already been used widely and effectively by companies such as Procter & Gamble that offer a financial reward for solutions to their R&D problems.

The Finnish government recently used crowdsourcing when reforming the country’s Traffic Act, following a rash of complaints about it to the Minister of the Environment. The Act, which regulates issues such as off-road traffic, is an emotive issue in Finland, where snowmobiles are used six months of the year and many people live in remote areas.

The idea was for people to submit problems and solutions online, covering areas such as safety, noise, environmental protection, the rights of snowmobile owners and landowners’ rights. Everyone could see what was written and could comment on it.

Dr Aitamurto said crowdsourcing had four stages:

• The problem mapping space, where people were asked to outline the issues that needed solving
• An appeal for solutions
• An expert panel evaluated the comments received against four criteria: effectiveness, cost efficiency, ease of implementation, and fairness. The crowd also had the chance to evaluate and rank solutions online (a rough scoring sketch follows this list)
• The findings were then handed over to the government for the law writing process
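As an illustration of that evaluation step, the sketch below (Python) blends hypothetical panel scores on the four criteria with a crowd ranking into a single ordering. The weights, scales, and example solutions are assumptions made for illustration, not the method the Finnish panel actually used.

```python
# Hypothetical sketch of combining panel criterion scores with a crowd
# ranking. Weights, scales, and the example solutions are assumptions;
# the Finnish panel's actual method is not described in the article.

CRITERIA = ("effectiveness", "cost_efficiency", "ease_of_implementation", "fairness")

def combined_score(panel_scores, crowd_rank, n_solutions, crowd_weight=0.25):
    """Blend the panel's 0-5 criterion scores with the crowd's rank (1 = best),
    both rescaled to 0-1."""
    panel = sum(panel_scores[c] for c in CRITERIA) / (5 * len(CRITERIA))
    crowd = 1 - (crowd_rank - 1) / max(n_solutions - 1, 1)
    return (1 - crowd_weight) * panel + crowd_weight * crowd

# Two made-up proposed solutions, each with panel scores and a crowd rank.
solutions = {
    "stricter noise limits":        ({"effectiveness": 4, "cost_efficiency": 3,
                                      "ease_of_implementation": 4, "fairness": 3}, 2),
    "designated snowmobile routes": ({"effectiveness": 5, "cost_efficiency": 2,
                                      "ease_of_implementation": 3, "fairness": 4}, 1),
}

ranked = sorted(solutions,
                key=lambda name: combined_score(*solutions[name], len(solutions)),
                reverse=True)
print(ranked)  # names ordered by combined score, highest first
```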

Dr Aitamurto said active participation seemed to create a strong sense of empowerment for those involved.

She said some people reported that it was the first time in their lives they felt they were really participating in democracy and influencing decision making in society. They said it felt much more real than voting in an election, which felt alien and remote.

“Participation becomes a channel for advocacy, not just for self-interest but a channel to hear what others are saying and then also to make yourself heard. People expected a compromise at the end,” Dr Aitamurto said.

Being able to participate online was ideal for people who lived remotely and turned crowdsourcing into a democratic innovation which brought citizens closer to policy and decision making between elections.

Other benefits included reaching out to tap into new pools of knowledge, rather than relying on a small group of homogenous experts to solve the problem.

“When we use crowdsourcing we actually extend our knowledge search to multiple, hundreds of thousands of distant neighbourhoods online and that can be the power of crowdsourcing: to find solutions and information that we wouldn’t find otherwise. We find also unexpected information because it’s a self-selecting crowd … people that we might not have in our networks already,” Dr Aitamurto said.

The process can increase transparency as people interact on online platforms and the government keeps feedback loops going.

Dr Aitamurto is also at pains to highlight what crowdsourcing is not and cannot be, because participants are self-selecting and not statistically representative.

“The crowd doesn’t make decisions, it provides information. It’s not a method or tool for direct democracy and it’s not a public opinion poll either”.

Crowdsourcing has fed into policy in other countries too, for example, during Iceland’s constitutional reform and in the United States where the federal Emergency Management Agency overhauled its strategy after a string of natural disasters.

The Australian government has been getting in on the act, using the cloud-based software Citizen Space to gain input on a huge range of topics. While much of it is technically consultation, rather than feeding into actual policy design, it is certainly a step towards more open government.

British company Delib, which is behind the software, bills it as “managing, publicising and archiving all of your organisation’s consultation activity”.

One council that has used Citizen Space is Wyong Shire on the NSW Central Coast. The council has used the consultation hub to elicit ratepayers’ views on a number of topics, including a special rate variation, community precinct forums, strategic plans and planning decisions.

One of Citizen Space’s most valuable features is the section ‘we asked, you said, we did’….(More)”

Outcome-driven open innovation at NASA


New paper by Jennifer L. Gustetic et al. in Space Policy: “In an increasingly connected and networked world, the National Aeronautics and Space Administration (NASA) recognizes the value of the public as a strategic partner in addressing some of our most pressing challenges. The agency is working to more effectively harness the expertise, ingenuity, and creativity of individual members of the public by enabling, accelerating, and scaling the use of open innovation approaches including prizes, challenges, and crowdsourcing. As NASA’s use of open innovation tools to solve a variety of problems and advance a number of outcomes continues to grow, challenge design is also becoming more sophisticated as our expertise and capacity (personnel, platforms, and partners) grow and develop. NASA has recently pivoted from talking about the benefits of challenge-driven approaches to the outcomes these types of activities yield. Challenge design should be informed by desired outcomes that align with NASA’s mission. This paper provides several case studies of NASA open innovation activities and maps the outcomes of those activities to a successful set of outcomes that challenges can help drive alongside traditional tools such as contracts, grants and partnerships….(More)”

The Fundamentals of Policy Crowdsourcing


Article by John Prpić, Araz Taeihagh, and James Melton at Policy and Internet: “What is the state of the research on crowdsourcing for policymaking? This article begins to answer this question by collecting, categorizing, and situating an extensive body of the extant research investigating policy crowdsourcing within a new framework built on fundamental typologies from each field. We first define seven universal characteristics of the three general crowdsourcing techniques (virtual labor markets, tournament crowdsourcing, open collaboration), to examine the relative trade-offs of each modality. We then compare these three types of crowdsourcing to the different stages of the policy cycle, in order to situate the literature spanning both domains. We finally discuss research trends in crowdsourcing for public policy and highlight the research gaps and overlaps in the literature….(More)”

Citizen Urban Science


New report by Anthony Townsend and Alissa Chisholm at the Cities of Data Project: “Over the coming decades, the world will continue to urbanize rapidly amidst an historic migration of computing power off the desktop, unleashing new opportunities for data collection that reveal how cities function. In a recent report, Making Sense of the Science of Cities (bit.ly/sciencecities), we described an emerging global research movement that seeks to establish a new urban science built atop this new infrastructure of instruments. But will this new intellectual venture be an inclusive endeavor? What role is there for the growing ranks of increasingly well-equipped and well-informed citizen volunteers and amateur investigators to work alongside professional scientists? How are researchers, activists and city governments exploring that potential today? Finally, what can be done to encourage and accelerate experimentation?

This report examines three case studies that provide insight into emerging models of citizen science, highlighting the possibilities of citizen-university-government collaborative research, and the important role of open data platforms to enable these partnerships….(More)”