What Is Citizen Science? – A Scientometric Meta-Analysis


Christopher Kullenberg and Dick Kasperowski at PLOS One: “The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and promoting political decision processes involving the environment and health.

Objective

In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time, identify which strands of research have adopted CS, and assess the scientific output of CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms.

Results

Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data….(More)”
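
The kind of scientometric description the authors give can be illustrated with a minimal sketch. None of this code is from the paper; the records and column names below are invented stand-ins for a Web of Science export:

```python
# A minimal sketch of a scientometric tally: publications per year and per
# research strand. The records and column names are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "year": [2009, 2011, 2011, 2013, 2013, 2013],
    "area": ["ecology", "ecology", "GIS", "ecology", "GIS", "public health"],
})

per_year = records.groupby("year").size()  # development of CS over time
per_area = records["area"].value_counts()  # relative size of each strand
print(per_year)
print(per_area)
```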

When is your problem a ‘Challenge’?


Ed Parkes at NESTA: “More NGOs, Government Departments and city governments are using challenge prizes to help develop new products and services which ‘solve’ a problem they have identified. There have been several high-profile prizes (for instance, Nesta’s Longitude Prize or the recently announced $7 million ocean floor Xprize) and a growing number of platforms for running them (such as Challenge.gov or OpenIdeo). Due to this increased profile, challenge prizes are more often seen by public sector strategists and policy owners as holding the potential to solve their tricky strategic issues.

To characterise it, the starting point is often: “If only we could somehow get new, smart, digitally-informed organisations to solve the underfunded, awkward strategic issues we’ve been grappling with, wouldn’t it be great?”

This approach is especially tantalising for public sector organisations as it means they can be seen to take action on an issue through ‘market shaping’, rather than resorting to developing policy or intervening with regulation or legislation.

Having worked on a series of challenge prizes on open data over the last couple of years, as well as subsequently working with organisations on how our design principles could be applied to their objectives, I’ve spent some time thinking about when it’s appropriate to run a challenge prize. The design and practicalities of running a successful challenge prize are not always straightforward. Thankfully there has already been some useful broad guidance on this from Nesta’s Centre for Challenge Prizes in their Challenge Prize Practice Guide and McKinsey and Deloitte have also published guides.

Despite this high-quality guidance, however, as with many things in life, the most difficult part is knowing where to start. Organisations struggle to understand whether they have the right problem in the first place. In many instances running a challenge prize is not the appropriate organisational response to an issue, and it’s best to discover this early on. From my experience, there are two key questions worth asking when you’re trying to work out if your problem is suitable:

1. Is your problem an issue for anyone other than your own organisation?…

2. Will other people see solving this problem as an investment opportunity or worth funding?…

These two considerations come down to one thing – incentive. Firstly, does anyone other than your organisation care about this issue, and secondly, do they care enough about it to pay to solve it?…(More)”

Campaigning in the Twenty-First Century


Updated book by Dennis W. Johnson: “In view of the 2016 US election season, the second edition of this book analyzes the way political campaigns have been traditionally run and the extraordinary changes that have occurred since 2012. Dennis W. Johnson looks at the most sophisticated techniques of modern campaigning—micro-targeting, online fundraising, digital communication, the new media—and examines what has changed, how those changes have dramatically transformed campaigning, and what has remained fundamentally the same despite new technologies and communications.

Campaigns are becoming more open and free-wheeling, with greater involvement of activists (especially through social media) and average voters alike. At the same time, they have become more professionalized, with consultants managing and marketing much of the process. Campaigning in the Twenty-First Century illustrates the daunting challenges for candidates and professional consultants as they try to get their messages out to voters. Ironically, the more open and robust campaigns become, the greater the need for seasoned, flexible, and imaginative professional consultants… (More)”

How Measurement Fails Doctors and Teachers


Robert M. Wachter at the New York Times: “Two of our most vital industries, health care and education, have become increasingly subjected to metrics and measurements. Of course, we need to hold professionals accountable. But the focus on numbers has gone too far. We’re hitting the targets, but missing the point.

Through the 20th century, we adopted a hands-off approach, assuming that the pros knew best. Most experts believed that the ideal “products” — healthy patients and well-educated kids — were too strongly influenced by uncontrollable variables (the sickness of the patient, the intellectual capacity of the student) and were too complex to be judged by the measures we use for other industries.

By the early 2000s, as evidence mounted that both fields were producing mediocre outcomes at unsustainable costs, the pressure for measurement became irresistible. In health care, we saw hundreds of thousands of deaths from medical errors, poor coordination of care and backbreaking costs. In education, it became clear that our schools were lagging behind those in other countries.

So in came the consultants and out came the yardsticks. In health care, we applied metrics to outcomes and processes. Did the doctor document that she gave the patient a flu shot? That she counseled the patient about smoking? In education, of course, the preoccupation became student test scores.

All of this began innocently enough. But the measurement fad has spun out of control. There are so many different hospital ratings that more than 1,600 medical centers can now lay claim to being included on a “top 100,” “honor roll,” grade “A” or “best” hospitals list. Burnout rates for doctors top 50 percent, far higher than in other professions. A 2013 study found that the electronic health record was a dominant culprit. Another 2013 study found that emergency room doctors clicked a mouse 4,000 times during a 10-hour shift. The computer systems have become the dark force behind quality measures.

Education is experiencing its own version of measurement fatigue. Educators complain that the focus on student test performance comes at the expense of learning. Art, music and physical education have withered, because, really, why bother if they’re not on the test?…

Thoughtful and limited assessment can be effective in motivating improvements and innovations, and in weeding out the rare but disproportionately destructive bad apples.

But in creating a measurement and accountability system, we need to tone down the fervor and think harder about the unanticipated consequences….(More)”

Distributed ledger technology: beyond block chain


UK Government Office for Science: “In a major report on distributed ledgers published today (19 January 2016), the Government Chief Scientist, Sir Mark Walport, sets out how this technology could transform the delivery of public services and boost productivity.

A distributed ledger is a database that can securely record financial, physical or electronic assets for sharing across a network through entirely transparent updates of information.

Its first incarnation was ‘Blockchain’ in 2008, which underpinned digital cash systems such as Bitcoin. The technology has now evolved into a variety of models that can be applied to different business problems and dramatically improve the sharing of information.
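
A toy sketch can make the mechanics concrete. The code below is not from the report: it shows only the hash-chaining that makes a ledger tamper-evident, on a single node, whereas a real distributed ledger replicates the chain across many participants and adds a consensus protocol:

```python
# A minimal, single-node sketch of a hash-chained ledger. Entry contents
# are hypothetical; real systems add replication and consensus.
import hashlib
import json
import time

def make_block(entry, prev_hash):
    """Create a block whose hash covers the entry and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "entry": entry,          # e.g. an asset or transfer record
        "prev_hash": prev_hash,  # links this block to its predecessor
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; any tampered entry breaks the chain."""
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A two-block toy ledger: a genesis block, then one recorded asset.
chain = [make_block({"asset": "genesis"}, prev_hash="0" * 64)]
chain.append(make_block({"asset": "diamond-123", "owner": "alice"}, chain[-1]["hash"]))
print(verify(chain))  # True; mutate any entry and this becomes False
```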

Distributed ledger technology could provide government with new tools to reduce fraud, error and the cost of paper intensive processes. It also has the potential to provide new ways of assuring ownership and provenance for goods and intellectual property.

Distributed ledgers are already being used in the diamond markets and in the disbursing of international aid payments.

Sir Mark Walport said:

Distributed ledger technology has the potential to transform the delivery of public and private services. It has the potential to redefine the relationship between government and the citizen in terms of data sharing, transparency and trust and make a leading contribution to the government’s digital transformation plan.

Any new technology creates challenges, but with the right mix of leadership, collaboration and sound governance, distributed ledgers could yield significant benefits for the UK.

The report makes a number of recommendations which focus on ministerial leadership, research, standards and the need for proof of concept trials.

They include:

  • government should provide ministerial leadership to ensure that it provides the vision, leadership and the platform for distributed ledger technology within government; this group should consider governance, privacy, security and standards
  • government should establish trials of distributed ledgers in order to assess the technology’s usability within the public sector
  • government could support the creation of distributed ledger demonstrators for local government that will bring together all the elements necessary to test the technology and its application
  • the UK research community should invest in the research required to ensure that distributed ledgers are scalable, secure and provide proof of correctness of their contents….View the report ‘Distributed ledger technology: beyond block chain’.”

The impact of open access scientific knowledge


Jack Karsten and Darrell M. West at Brookings: “In spite of technological advancements like the Internet, academic publishing has operated in much the same way for centuries. Scientists voluntarily review their peers’ papers for little or no compensation; the paper’s author likewise does not receive payment from academic publishers. Though most of the costs of publishing a journal are administrative, the cost of subscribing to scientific journals nevertheless increased 600 percent between 1984 and 2002. The funding for the research libraries that form the bulk of journal subscribers has not kept pace, leading to campaigns at universities including Harvard to boycott for-profit publishers.

Though the Internet has not yet brought down the price of academic journal subscriptions, it has led to some interesting alternatives. In 2015, the Twitter hashtag #icanhazPDF was created to request copies of papers located behind paywalls. Anyone with access to a specific paper can download it and then e-mail it to the requester. The practice violates publishers’ copyright, but puts papers within reach of researchers who would otherwise not be able to read them. If a researcher cannot read a journal article in the first place, they cannot go on to cite it; citations are what raise the profile of an article and of the journal that published it. Publishers are thus caught between two conflicting goals: increasing the number of citations for their articles and earning revenue to stay in business.

Thinking outside the journal

A trio of University of Chicago researchers examines this issue through the lens of Wikipedia in a paper titled “Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science.” Wikipedia makes a compelling subject for studying scientific diffusion given its status as one of the most visited websites in the world, attracting 374 million unique visitors monthly as of September 2015. The study found that on English-language articles, Wikipedia editors are 47 percent more likely to cite an article from an open access journal. Anyone using Wikipedia as a first source for information on a subject is thus more likely to encounter information from open access journals, and readers who click through the links to cited articles can read their full text.
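
To unpack what a figure like “47 percent more likely” means, here is a minimal sketch computing an odds ratio from a hypothetical 2×2 table of citation counts; the numbers are invented, and the paper’s actual estimate comes from a regression with controls rather than raw counts like these:

```python
# Hypothetical counts, invented for illustration only.
oa_cited, oa_not = 300, 700      # open access articles: cited on Wikipedia or not
paid_cited, paid_not = 230, 770  # paywalled articles: cited or not

odds_oa = oa_cited / oa_not
odds_paid = paid_cited / paid_not
odds_ratio = odds_oa / odds_paid
print(f"Open access articles are {odds_ratio:.2f}x as likely to be cited "
      f"({(odds_ratio - 1):.0%} more likely).")
```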

Given how much the federal government spends on scientific research ($66 billion on nondefense R&D in 2015), it has a large role to play in the diffusion of scientific knowledge. Since 2008, the National Institutes of Health (NIH) has required the researchers it funds to also deposit their journal articles in PubMed Central, an online open access repository. Expanding provisions like the NIH Public Access Policy to other agencies and to recipients of federal grants at universities would give the public and other researchers a wealth of scientific information. Scientific literacy, even on cutting-edge research, is increasingly important when science informs policy on major issues such as climate change and health care….(More)”

Systematic Thinking for Social Action


Re-issued book by Alice M. Rivlin: “In January 1970 Alice M. Rivlin spoke to an audience at the University of California–Berkeley. The topic was developing a more rational approach to decision-making in government. If digital video, YouTube, and TED Talks had been inventions of the 1960s, Rivlin’s talk would have been a viral hit. As it was, the resulting book, Systematic Thinking for Social Action, spent years on the Brookings Press bestseller list. It is a very personal and conversational volume about the dawn of new ways of thinking about government.

As a deputy assistant secretary for program coordination, and later as assistant secretary for planning and evaluation, at the Department of Health, Education and Welfare from 1966 to 1969, Rivlin was an early advocate of systems analysis, which had been introduced by Robert McNamara at the Department of Defense as PPBS (planning-programming-budgeting-system).

While Rivlin brushes aside the jargon, she digs into the substance of systematic analysis and a “quiet revolution in government.” In an evaluation of the evaluators, she issues mixed grades, pointing out where analysts had been helpful in finding solutions and where—because of inadequate data or methods—they had been no help at all.

Systematic Thinking for Social Action offers important insights for anyone interested in working to find the smartest ways to allocate scarce funds to promote the maximum well-being of all citizens.

This reissue is part of Brookings Classics, a series of republished books for readers to revisit or discover previous, notable works by the Brookings Institution Press.

Chicago Is Predicting Food Safety Violations. Why Aren’t Other Cities?


Julian Spector at CityLab: “The three dozen inspectors at the Chicago Department of Public Health scrutinize 16,000 eating establishments to protect diners from gut-bombing food sickness. Some of those pose more of a health risk than others; approximately 15 percent of inspections catch a critical violation.

For years, Chicago, like most every city in the U.S., scheduled these inspections by going down the complete list of food vendors and making sure they all had a visit in the mandated timeframe. That process ensured that everyone got inspected, but not that the most likely health code violators got inspected first. And speed matters in this case. Every day that unsanitary vendors serve food is a new chance for diners to get violently ill, paying in time, pain, and medical expenses.

That’s why, in 2014, Chicago’s Department of Innovation and Technology started sifting through publicly available city data and built an algorithm to predict which restaurants were most likely to be in violation of health codes, based on the characteristics of previously recorded violations. The program generated a ranked list of which establishments the inspectors should look at first. The project is notable not just because it worked—the algorithm identified violations significantly earlier than business as usual did—but because the team made it as easy as possible for other cities to replicate the approach.
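
As a rough illustration of the ranking approach described above (not Chicago’s actual model, whose published code uses its own features and methods), here is a minimal sketch with invented feature names and data:

```python
# A minimal sketch of ranked inspections: train on past inspections, then
# score establishments by predicted violation risk. Features and data are
# hypothetical, not Chicago's.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per past inspection; the label marks a critical violation.
past = pd.DataFrame({
    "days_since_last_inspection":   [400, 30, 200, 365, 90, 500],
    "prior_critical_violations":    [2, 0, 1, 3, 0, 1],
    "nearby_sanitation_complaints": [5, 0, 2, 7, 1, 4],
    "had_critical_violation":       [1, 0, 0, 1, 0, 1],
})

features = ["days_since_last_inspection",
            "prior_critical_violations",
            "nearby_sanitation_complaints"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past[features], past["had_critical_violation"])

# Score every establishment awaiting inspection, then send inspectors to
# the highest-risk ones first.
pending = past[features]  # stand-in for the current list of establishments
risk = model.predict_proba(pending)[:, 1]
ranked = pending.assign(risk=risk).sort_values("risk", ascending=False)
print(ranked)
```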

And yet, more than a year after Chicago published its code, only one local government, in metro D.C., has tried to do the same thing. All cities face the challenge of keeping their food safe and therefore have much to gain from this data program. The challenge, then, isn’t just to design data solutions that work, but to do so in a way that facilitates sharing them with other cities. The Chicago example reveals the obstacles that might prevent a good urban solution from spreading to other cities, but also how to overcome them….(More)”

Met Office warns of big data floods on the horizon


At V3: “The amount of data being collected by departments and agencies means government services will not be able to implement truly open data strategies, according to Met Office CIO Charles Ewen.

Ewen said the rapidly increasing amount of data being stored by companies and government departments means it will not be technologically possible to share all their data in the near future.

During a talk at the Cloud World Forum on Wednesday, he said: “The future will be bigger and bigger data. Right now we’re talking about petabytes, in the near future it will be tens of petabytes, then soon after it’ll be hundreds of petabytes and then we’ll be off into imaginary figure titles.

“We see a future where data has gotten so big the notion of open data and the idea ‘let’s share our data with everybody and anybody’ just won’t work. We’re struggling to make it work already and by 2020 the national infrastructure will not exist to shift this stuff [data] around in the way anybody could access and make use of it.”

Ewen added that, to deal with this shift, he expects many departments and agencies will adapt their processes to become digital curators that are more selective about the data they share, to try to ensure it is useful.

“This isn’t us wrapping our arms around our data and saying you can’t see it. We just don’t see how we can share all this big data in the way you would want it,” he said.

“We see a future where a select number of high-capacity nodes become information brokers and are used to curate and manage data. These curators will be where people bring their problems. That’s the future we see.”

Ewen added that the current expectations around open data are based on misguided views about the capabilities of cloud technology to host and provide access to huge amounts of data.

“The trendy stuff out there claims to be great at everything, but don’t get carried away. We don’t see cloud as anything but capability. We’ve been using appropriate IT and what’s available to deliver our mission services for over 50 to 60 years, and cloud is playing an increasing part of that, but purely for increased capability,” he said.

“It’s just another tool. The important thing is having the skill and knowledge to not just believe vendors but to look and identify the problem and say ‘we have to solve this’.”

The Met Office CIO’s comments follow reports from other government service providers that people’s desire for open data is growing exponentially….(More)”

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Paper by Ashley N. D. Meyer et al. in the Journal of Medical Internet Research: “Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of which 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half (50.9%, 202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvement in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses….(More)”