Craft metrics to value co-production


Liz Richardson and Beth Perry at Nature: “Advocates of co-production encourage collaboration between professional researchers and those affected by that research, to ensure that the resulting science is relevant and useful. Opening up science beyond scientists is essential, particularly where problems are complex, solutions are uncertain and values are salient. For example, patients should have input into research on their conditions, and first-hand experience of local residents should shape research on environmental-health issues.

But what constitutes success on these terms? Without a better understanding of this, it is harder to incentivize co-production in research. A key way to support co-production is reconfiguring that much-derided feature of academic careers: metrics.

Current indicators of research output (such as paper counts or the h-index) conceptualize the value of research narrowly. They are already roundly criticized as poor measures of quality or usefulness. Less appreciated is the fact that these metrics also leave out the societal relevance of research and omit diverse approaches to creating knowledge about social problems.

Peer review also has trouble assessing the value of research that sits at disciplinary boundaries or that addresses complex social challenges. It denies broader social accountability by giving scientists a monopoly on determining what is legitimate knowledge1. Relying on academic peer review as a means of valuing research can discourage broader engagement.

This privileges abstract and theoretical research over work that is localized and applied. For example, research on climate-change adaptation, conducted in the global south by researchers embedded in affected communities, can make real differences to people’s lives. Yet it is likely to be valued less highly by conventional evaluation than research that is generalized from afar and then published in a high-impact English-language journal….(More)”.

Managing the Consumer Data Deluge


Joe Marion at Healthcare Informatics: “By now, everyone is aware of Apple’s recent announcement of an ECG capability on its latest watch. It joins an expanding list of portable or in-home devices for monitoring cardiac and other functions. The Apple device takes advantage of an established ECG device from AliveCor, which had previously introduced the KardiaMobile ECG capture device for Android and iOS devices. More sophisticated options, such as implantable devices, can monitor heart function in heart failure cases.

Many facilities are implementing video-conferencing capabilities for patient consultations on non-life-threatening issues. These might include capturing information such as a “selfie” of a rash that is uploaded to the physician for assessment.

Given that many of these devices are designed to collect diagnostic data outside of the primary care facility, there is a growing tsunami of diagnostic data that will need to be managed. Because most of this data is created outside the facility, several questions need to be addressed.

Who owns the data? Let’s take the case of ECG data captured from an Apple or AliveCor device. The data is acquired by the user, initially stored on the watch or phone, and may pass through an initial diagnosis application. The whole purpose of capturing this data is to monitor cardiac function, particularly fibrillation, and to share it with a medical professional. In the case of these devices, the data can optionally be uploaded to an AliveCor cloud application for storage. Thus, the assumption would be that the patient is the “owner” of the data. But what if it is necessary to transmit this data to a professional such as a cardiologist? Is it then the responsibility of the receiving entity to store and manage the data? Or is it assumed that the patient is responsible for maintaining the data?

Once data is brought into a provider organization for diagnostic purposes, it seems reasonable that the facility would be responsible for maintaining that data, just as it is today for radiographic studies. If, for example, a cardiologist dictates a report on ECG results, the results most likely end up in the EHR, but what becomes of the diagnostic data?

Who is responsible for maintaining the data? As stated above, if the acquired data results in a report of some type, the report most likely becomes the legal document in the EHR, but for legal purposes, many facilities feel the need to store the original diagnostic data for some period of time….

Who is the originator of the data collection? The informed patient may wish to acquire diagnostic data, such as ECG or blood pressure information, but are they prepared to manage that data? If they are concerned about episodes of atrial fibrillation, then there may be an incentive to acquire and manage such data.

Conversely, many in-home devices are initiated by care providers, such as remote monitoring of heart failure. In these instances, the acquisition devices are most likely provided to the patient for the physician’s benefit. Therefore, the onus is on the provider to manage the acquired data, and data storage would become the facility’s responsibility.

Another question is how valuable such data is to the management of the patient. For devices such as the Apple Watch’s ECG capability, is it important for the physician to have access to that data over the long term?…(More)”.

What is the true value of data? New series on the return on investment of data interventions


Case studies prepared by Jessica Espey and Hayden Dahmm for SDSN TReNDS: “But what is the ROI of investing in data for altruistic means–e.g., for sustainable development?

Today, we are launching a series of case studies to answer this question in collaboration with the Global Partnership on Sustainable Development Data. The ten examples we will profile range from earth observation data gathered via satellites to investments in national statistics systems, with costs from just a few hundred thousand dollars (US) per year to millions over decades.

The series includes efforts to revamp existing statistical systems. It also supports the growing movement to invest in less traditional approaches to data collection and analysis beyond statistical systems–such as through private sector data sources or emerging technologies enabled by the growth of the information and communications technology (ICT) sector.

Some highlights from the first five case studies–available now:

An SMS-based system called mTRAC, implemented in Uganda, has supported significant improvements in the country’s health system–including a halving of response times to disease outbreaks and a reduction in medication stock-outs, the latter of which resulted in fewer malaria-related deaths.

NASA’s and the U.S. Geological Survey’s Landsat program–satellites that provide imagery known as earth observation data–is enabling discoveries and interventions across the science and health sectors, and provided an estimated worldwide economic benefit as high as US$2.19 billion as of 2011.

BudgIT, a civil society organization making budget data in Nigeria more accessible to citizens through machine-readable PDFs and complementary online/offline campaigns, is empowering citizens to partake in the federal budget process.

International nonprofit BRAC is ensuring mothers and infants in the slums of Bangladesh are not left behind through a data-informed intervention combining social mapping, local censuses, and real-time data sharing. BRAC estimates that from 2008 to 2017, 1,087 maternal deaths were averted out of the 2,476 deaths that would have been expected based on national statistics.

Atlantic City police are developing new approaches to their patrolling, community engagement, and other activities through risk modeling based on crime and other data, resulting in reductions in homicides and shooting injuries (26 percent) and robberies (37 percent) in just the first year of implementation….(More)”.

What Can Satellite Imagery Tell Us About Obesity in Cities?


Emily Matchar at Smithsonian: “About 40 percent of American adults are obese, defined as having a body mass index (BMI) over 30. But obesity is not evenly distributed around the country. Some cities and states have far more obese residents than others. Why? Genetics, stress, income levels and access to healthy foods all play a role. But increasingly, researchers are looking at the built environment—our cities—to understand why people are fatter in some places than in others.

New research from the University of Washington attempts to take this approach one step further by using satellite data to examine cityscapes. By using the satellite images in conjunction with obesity data, they hope to uncover which urban features might influence a city’s obesity rate.

The researchers used a deep learning network to analyze about 150,000 high-resolution satellite images of four cities: Los Angeles, Memphis, San Antonio and Seattle. The cities were selected for being from states with both high obesity rates (Texas and Tennessee) and low obesity rates (California and Washington). The network extracted features of the built environment: crosswalks, parks, gyms, bus stops, fast food restaurants—anything that might be relevant to health.

“If there’s no sidewalk you’re less likely to go out walking,” says Elaine Nsoesie, a professor of global health at the University of Washington who led the research.

The team’s algorithm could then see what features were more or less common in areas with greater and lesser rates of obesity. Some findings were predictable: more parks, gyms and green spaces were correlated with lower obesity rates. Others were surprising: more pet stores equaled thinner residents (“a high density of pet stores could indicate high pet ownership, which could influence how often people go to parks and take walks around the neighborhood,” the team hypothesized).

A paper on the results was recently published in the journal JAMA Network Open….(More)”.
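
The excerpt above describes a two-step pipeline: a deep learning model first extracts built-environment features from satellite imagery, and those feature counts are then related to obesity rates across neighborhoods. The sketch below illustrates only the second, simpler step, assuming the per-neighborhood feature counts have already been produced by an image model; the neighborhood rows, feature names and numbers are invented for illustration and are not the study’s data.

```python
# Toy sketch: relate built-environment feature counts to obesity prevalence.
# Assumes an image model has already counted features per neighborhood; the
# rows and numbers below are invented for illustration only.

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-neighborhood feature counts and adult obesity rates.
neighborhoods = [
    {"parks": 12, "gyms": 5, "fast_food": 3,  "obesity_rate": 0.21},
    {"parks": 4,  "gyms": 1, "fast_food": 9,  "obesity_rate": 0.34},
    {"parks": 8,  "gyms": 3, "fast_food": 6,  "obesity_rate": 0.27},
    {"parks": 2,  "gyms": 0, "fast_food": 11, "obesity_rate": 0.38},
]

obesity = [n["obesity_rate"] for n in neighborhoods]
for feature in ("parks", "gyms", "fast_food"):
    counts = [n[feature] for n in neighborhoods]
    print(f"{feature}: r = {pearson(counts, obesity):.2f}")
```

On toy data like this, parks and gyms come out negatively correlated with obesity and fast-food density positively correlated, which mirrors the kind of pattern the researchers report, though the real study works with far richer features and proper statistical controls.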

The rush for data risks growing the North-South divide


Laura Mann and Gianluca Lazzolino at SciDevNet: “Across the world, tech firms and software developers are embedding digital platforms into humanitarian and commercial infrastructures. There’s Jembi and Hello Doctor for the healthcare sector, for example; SASSA and Tamween for social policy; and M-farm, i-Cow and Esoko, among many others, for agriculture.

While such systems proliferate, it is time we asked some tough questions about who is controlling this data, and for whose benefit. There is a danger that ‘platformisation’ widens the knowledge gap between firms and scientists in poorer countries and those in more advanced economies.

Digital platforms serve three purposes. They improve interactions between service providers and users; gather transactional data about those users; and nudge them towards behaviours, activities and products considered ‘virtuous’, profitable, or valued — often because they generate more data. This data can be extremely valuable to policy-makers interested in developing interventions, to researchers exploring socio-economic trends and to businesses seeking new markets.

But the development and use of these platforms are not always benign.

Knowledge and power

Digital technologies are knowledge technologies because they record the personal information, assets, behaviour and networks of the people that use them.

Knowledge has a somewhat gentle image of a global good shared openly and evenly across the world. But in reality, it is competitive.

Simply put, knowledge shapes economic rivalry between rich and poor countries. It influences who has power over the rules of the economic game, and it does this in three key ways.

First, firms can use knowledge and technology to become more efficient and competitive in what they do. For example, a farmer can choose to buy technologically enhanced seeds, inputs such as fertilisers, and tools to process their crop.

This technology transfer is not automatic — the farmer must first invest time to learn how to use these tools. In this sense, economic competition between nations is partly about how well-equipped their people are to use technology effectively.

The second key way in which knowledge impacts global economic competition depends on looking at development as a shift from cut-throat commodity production towards activities that bring higher profits and wages.

In farming, for example, development means moving out of crop production alone into a position of having more control over agricultural inputs, and more involvement in distributing or marketing agricultural goods and services….(More)”.

Walmart wants to track lettuce on the blockchain


Matthew Beedham at TNW: “Walmart is asking all of its leafy greens suppliers to get on blockchain by this time next year.

With instances of E. coli on the rise, particularly in romaine lettuce, Walmart is insisting that its suppliers use blockchain to track and trace products from source to the customer.

Walmart notes that, while health officials at the Centers for Disease Control have already warned Americans to avoid eating lettuce grown in Yuma, Arizona, it’s near impossible for consumers to know where their greens are coming from.

On one hand, this could be a great system for reducing waste. Earlier this year, greengrocers had to throw away produce thought to be infected with E. coli.

The announcement states, “[h]ealth officials at the Centers for Disease Control told Americans to avoid eating lettuce that was grown in Yuma, Arizona”

However, it’s near impossible for consumers to know where their lettuce was grown.

It would seem that most producers and suppliers still rely on paper-based ledgers. As a result, tracking down vital information about where a product came from can be very time consuming.

By then, it might be too late, and many customers might already have purchased and consumed infected produce.

If Walmart’s plans come to fruition, it would allow customers to view the entire supply chain of a product at the point of purchase… (More)”
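
The core mechanism behind traceability systems like the one described above is an append-only chain of supply-chain events, each record cryptographically linked to the previous one so that history cannot be quietly rewritten and a store can walk a product back to its source. The toy sketch below shows that general idea only; it is not Walmart’s actual system, and the lot, facility and store names are invented.

```python
# Minimal sketch of hash-chained provenance records for a produce lot.
# A toy illustration of the general idea, not Walmart's actual system;
# all lot, farm, facility and store names below are invented.
import hashlib, json, time

def add_event(chain, event):
    """Append a supply-chain event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash, "timestamp": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash to check that no record was altered after the fact."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_event(chain, {"lot": "romaine-042", "step": "harvested", "farm": "Yuma, AZ"})
add_event(chain, {"lot": "romaine-042", "step": "processed", "facility": "Plant 7"})
add_event(chain, {"lot": "romaine-042", "step": "received", "store": "Store 1138"})
print("chain valid:", verify(chain))
```

In a production deployment the chain would be replicated across many parties rather than held in one list, which is what makes tampering detectable by everyone, but the record-linking idea is the same.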

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single, readily accessible federal source of wage data that covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. It recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”

Computers Can Solve Your Problem. You May Not Like The Answer


David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.

Until the computers came along.

Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….

The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….

While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.

Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.

Or, the whole thing could turn into a political disaster.

District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.

But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.

It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”
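
The technical heart of the story above is the coupling between start times and bus routing: if every school starts at the same time, every bus run happens at once and the district needs a bus for each run, whereas tiered start times let one bus serve several schools in sequence. The sketch below captures only that intuition, under invented run lengths and tiers; it is a toy, not the MIT researchers’ optimization model.

```python
# Toy illustration of why staggered start times shrink the bus fleet: a bus
# that finishes one run can pick up a later run. Times are minutes past
# 6:00 a.m. and every number here is invented; this is not the MIT model.

def buses_needed(runs):
    """Fleet size equals the peak number of bus runs under way at once."""
    events = [(start, 1) for start, end in runs] + [(end, -1) for start, end in runs]
    active = peak = 0
    for _, delta in sorted(events):  # an end sorts before a start at the same minute
        active += delta
        peak = max(peak, active)
    return peak

schools = ["A", "B", "C", "D"]
same_start = {s: 120 for s in schools}              # every school starts at 8:00
staggered = {"A": 90, "B": 90, "C": 150, "D": 150}  # 7:30 and 8:30 tiers

for label, starts in [("same start", same_start), ("staggered", staggered)]:
    runs = [(starts[s] - 45, starts[s]) for s in schools]  # 45-minute run per school
    print(label, "->", buses_needed(runs), "buses")
```

Even this toy shows the trade-off the district faced: the staggered schedule halves the fleet, but only by moving some schools to start times that families may find far less acceptable.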

Google, T-Mobile Tackle 911 Call Problem


Sarah Krouse at the Wall Street Journal: “Emergency call operators will soon have an easier time pinpointing the whereabouts of Android phone users.

Google has struck a deal with T-Mobile US to pipe location data from cellphones with Android operating systems in the U.S. to emergency call centers, said Fiona Lee, who works on global partnerships for Android emergency location services.

The move is a sign that smartphone operating system providers and carriers are taking steps to improve the quality of location data they send when customers call 911. Locating callers has become a growing problem for 911 operators as cellphone usage has proliferated. Wireless devices now make 80% or more of the 911 calls placed in some parts of the U.S., according to the trade group National Emergency Number Association. There are roughly 240 million calls made to 911 annually.

While landlines deliver an exact address, cellphones typically register only an estimated location provided by wireless carriers that can be as wide as a few hundred yards and imprecise indoors.

That has meant that while many popular applications like Uber can pinpoint users, 911 call takers can’t always do so. Technology giants such as Google and Apple Inc. that run phone operating systems need a direct link to the technology used within emergency call centers to transmit precise location data….

Google currently offers emergency location services in 14 countries around the world by partnering with carriers and companies that are part of local emergency communications infrastructure. Its location data is based on a combination of inputs, including Wi-Fi, device sensors, GPS and mobile network information.

Jim Lake, director at the Charleston County Consolidated 9-1-1 Center, participated in a pilot of Google’s emergency location services and said it made it easier to find people who didn’t know their location, particularly because the area draws tourists.

“On a day-to-day basis, most people know where they are, but when they don’t, usually those are the most horrifying calls and we need to know right away,” Mr. Lake said.

In June, Apple said it had partnered with RapidSOS to send iPhone users’ location information to 911 call centers….(More)”
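
The inputs mentioned above (Wi-Fi, device sensors, GPS, mobile network) come with very different accuracy radii, from a few meters to a few hundred. One common way to combine such readings is to weight each estimate inversely to its reported uncertainty, as in the generic sketch below; the coordinates and accuracy values are made up, and this is not Google’s actual emergency location service algorithm.

```python
# Generic sketch: fuse several position estimates by weighting each one
# inversely to the square of its reported accuracy radius. Not Google's
# ELS implementation; the coordinates and accuracies below are made up.

def fuse(estimates):
    """estimates: list of (lat, lon, accuracy_m); returns a weighted (lat, lon)."""
    weights = [1.0 / (acc ** 2) for _, _, acc in estimates]
    total = sum(weights)
    lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return lat, lon

readings = [
    (32.7831, -79.9320, 8.0),    # GPS fix: tight accuracy outdoors
    (32.7834, -79.9327, 25.0),   # Wi-Fi positioning: moderate accuracy
    (32.7850, -79.9300, 300.0),  # cell-network estimate: very coarse
]
print(fuse(readings))  # result sits close to the most precise (GPS) reading
```

The practical point is the one the call-center director makes: indoors or in unfamiliar places, the coarse cell-network estimate may be all a 911 center gets unless the handset itself shares its fused location.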

We hold people with power to account. Why not algorithms?


Hannah Fry at the Guardian: “…But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong…

I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.